Unnamed: 0 (int64, 0–350k) | level_0 (int64, 0–351k) | ApplicationNumber (int64, 9.75M–96.1M) | ArtUnit (int64, 1.6k–3.99k) | Abstract (string, length 1–8.37k) | Claims (string, length 3–292k) | abstract-claims (string, length 68–293k) | TechCenter (int64, 1.6k–3.9k) |
|---|---|---|---|---|---|---|---|
10,500 | 10,500 | 16,011,298 | 2,647 | A case assembly includes a front portion and a back portion affixed with the front portion by an interface. The front and back portions are configured to move toward or away from each other about the interface. The assembly also includes one or more securing devices affixed with the back portion. The one or more securing devices are configured to removably couple with one or more other securing devices on an electronic device. The one or more securing devices on the back portion are configured to hold the electronic device within the case assembly while allowing the electronic device to be separated from the case assembly. | 1. A case assembly comprising:
a front portion; a back portion affixed with the front portion by an interface, the front and back portions configured to move toward or away from each other about the interface; and one or more securing devices affixed with the back portion, the one or more securing devices configured to removably couple with one or more other securing devices on an electronic device, wherein the one or more securing devices on the back portion are configured to hold the electronic device within the case assembly while allowing the electronic device to be separated from the case assembly. 2. The case assembly of claim 1, wherein the front and back portions are configured to pivot relative to each other about or around the interface to open or close the case assembly. 3. The case assembly of claim 1, wherein the front and back portions are configured to move toward each other to close with the electronic device disposed between the front and back portions. 4. The case assembly of claim 1, wherein the one or more securing devices affixed with the back portion are configured to be magnetically coupled with the one or more other securing devices connected with the electronic device. 5. The case assembly of claim 1, wherein the one or more securing devices affixed with the back portion are configured to be coupled with the one or more other securing devices connected with the electronic device using a fabric hook-and-loop connection. 6. The case assembly of claim 1, wherein the one or more securing devices affixed with the back portion are configured to be removably coupled with the one or more other securing devices connected with the electronic device such that the electronic device can be separated from the case assembly without damaging the one or more securing devices or the one or more other securing devices. 7. 
The case assembly of claim 1, wherein the front and back portions are sized to receive a variety of different sized electronic devices that are removably coupled with the one or more securing devices. 8. The case assembly of claim 1, wherein the one or more securing devices are one or more elongated bars. 9. The case assembly of claim 1, wherein the one or more securing devices are one or more shapes with information one or more of embossed or printed thereon. 10. The case assembly of claim 1, wherein the one or more securing devices have one or more shapes of a team mascot. 11. A case assembly comprising:
a front portion; a back portion affixed with the front portion by an interface, the front and back portions configured to move toward or away from each other about the interface; and one or more magnetic devices affixed with the back portion, the one or more magnetic devices configured to magnetically couple with one or more securing devices on an electronic device, wherein the one or more magnetic devices on the back portion are configured to hold the electronic device within the case assembly while allowing the electronic device to be separated from the case assembly. 12. The case assembly of claim 11, wherein the front and back portions are configured to pivot relative to each other about or around the interface to open or close the case assembly. 13. The case assembly of claim 11, wherein the front and back portions are configured to move toward each other to close with the electronic device disposed between the front and back portions. 14. The case assembly of claim 11, wherein the one or more magnetic devices affixed with the back portion are configured to be removably coupled with the one or more securing devices connected with the electronic device such that the electronic device can be separated from the case assembly without damaging the one or more magnetic devices or the one or more securing devices. 15. The case assembly of claim 11, wherein the front and back portions are sized to receive a variety of different sized electronic devices that are removably coupled with the one or more magnetic devices. 16. The case assembly of claim 11, wherein the one or more magnetic devices are one or more elongated bars. 17. The case assembly of claim 11, wherein the one or more magnetic devices are one or more shapes with information one or more of embossed or printed thereon. 18. The case assembly of claim 11, wherein the one or more magnetic devices have one or more shapes of a team mascot. 19. 
The case assembly of claim 11, wherein the one or more magnetic devices are several spaced-apart circular discs. 20. A case assembly comprising:
a front portion; a back portion affixed with the front portion by an interface, the front and back portions configured to move toward or away from each other about the interface; and one or more securing devices affixed with the back portion, the one or more securing devices configured to removably couple with one or more other securing devices on an electronic device, wherein the one or more securing devices on the back portion are configured to hold the electronic device within the case assembly while allowing the electronic device to be separated from the case assembly, wherein the one or more securing devices affixed with the back portion are configured to be coupled with the one or more other securing devices connected with the electronic device using a fabric hook-and-loop connection. | (abstract-claims column: verbatim concatenation of the Abstract and Claims cells above) | 2,600 |
10,501 | 10,501 | 15,338,821 | 2,626 | An electronic device, method and computer program product are provided. The electronic device comprises a memory to store program instructions, one or more processors to execute the program instructions, and a main body unit. The main body unit has a housing comprising a top side. The main body unit further includes a keyboard and a touchpad disposed along the top side of the housing. The keyboard and the touchpad are located in discrete areas. The touchpad includes a touch sensor and a touchpad display covering at least a portion of the touch sensor. The one or more processors display a graphical user interface (GUI) on the touchpad display. | 1. An electronic device, comprising:
a memory to store program instructions; one or more processors to execute the program instructions; and a main body unit having a housing comprising a top side, the main body unit further including a keyboard and a touchpad disposed along the top side of the housing, the keyboard and the touchpad located in discrete areas of the top side; the touchpad including a touch sensor and a touchpad display covering at least a portion of the touch sensor, wherein the one or more processors display a graphical user interface (GUI) having one or more virtual buttons on the touchpad display, the one or more processors to execute the program instructions based on a user touch input on the touchpad at the respective one or more virtual buttons. 2. (canceled) 3. The electronic device of claim 1, wherein the touchpad display comprises an electronic paper screen. 4. The electronic device of claim 1, further comprising a primary display moveably coupled to the main body unit. 5. The electronic device of claim 1, wherein the GUI is a notification GUI that displays a visual message notification. 6. The electronic device of claim 1, wherein, responsive to the one or more processors executing the program instructions, the one or more processors display, as the GUI, a virtual number pad on the touchpad display. 7. The electronic device of claim 1, wherein, responsive to the one or more processors executing the program instructions, the one or more processors display, as the GUI, a virtual dial pad on the touchpad display. 8. The electronic device of claim 1, wherein, responsive to the one or more processors executing the program instructions, the one or more processors display, as the GUI, a virtual calendar on the touchpad display. 9. The electronic device of claim 1, wherein, responsive to the one or more processors executing the program instructions, the one or more processors display, as the GUI, virtual media control buttons on the touchpad display. 10. (canceled) 11. 
The electronic device of claim 1, wherein the virtual buttons of the GUI include configurable hotkeys. 12. A method comprising:
providing an electronic device comprising a main body unit including a keyboard and a touchpad on a top side of the main body unit, the keyboard and the touchpad located in discrete areas of the top side, the touchpad including a touch sensor and a touchpad display covering at least a portion of the touch sensor; and providing one or more processors that display a graphical user interface (GUI) on the touchpad display, the GUI on the touchpad display having one or more virtual buttons selectable via a user touch input on the touchpad, the one or more processors to execute program instructions based on the user touch input on the touchpad at the respective one or more virtual buttons. 13. The method of claim 12, further comprising, responsive to receiving the user touch input on the touchpad while the GUI is displayed on the touchpad display, modifying the GUI on the touchpad display based on the user touch input. 14. The method of claim 12, wherein the GUI displayed on the touchpad display is associated with an active program, and the method further comprises, responsive to receiving the user touch input on one of the one or more virtual buttons, executing program instructions of the active program based on the virtual button that is selected. 15. The method of claim 12, further comprising, responsive to receiving the user touch input on the touchpad while the GUI is displayed on the touchpad display, updating a visual representation of an active program displayed on a primary display of the electronic device, the primary display movably coupled to the main body unit. 16. The method of claim 12, wherein providing the electronic device comprises constructing the touchpad display to include an electronic paper screen. 17. The method of claim 12, wherein the GUI is a notification GUI that includes a visual message notification, the notification GUI being displayed on the touchpad display responsive to receiving data representative of the visual message notification. 18. 
(canceled) 19. A computer program product comprising a non-transitory computer readable storage medium comprising computer executable code to:
display a graphical user interface (GUI) on a touchpad display of a touchpad of an electronic device, the GUI having one or more virtual buttons selectable via a user touch input on the touchpad, the GUI being specific to a corresponding program; and responsive to receiving a user touch input on the touchpad at the one or more virtual buttons of the GUI, update a visual representation of the corresponding program displayed on a primary display of the electronic device. 20. The program product of claim 19, wherein the non-transitory computer readable storage medium further comprises executable code to display, as the GUI, at least one of a visual message notification, a virtual number pad, a virtual dial pad, a virtual calendar, virtual media control buttons, or virtual configurable hotkeys on the touchpad display. 21. The electronic device of claim 1, wherein the user touch input on the touchpad at a respective virtual button of the one or more virtual buttons is at a location of the touchpad that overlaps a location that the respective virtual button is displayed on the touchpad display. 22. The electronic device of claim 1, further comprising a primary display movably coupled to the housing of the main body unit, the primary display having one or more of a light emitting diode (LED), organic light emitting diode (OLED), liquid crystal display (LCD), or plasma screen, the touchpad display having an electronic paper screen that is monochromatic. 23. 
The electronic device of claim 22, wherein the GUI displayed by the one or more processors on the touchpad display is monochromatic and is associated with an active program, wherein the one or more processors are configured to display a polychromatic virtual representation of the active program on the primary display, wherein, responsive to receiving the user touch input on the touchpad selecting at least one of the one or more virtual buttons of the GUI, the one or more processors are configured to update the polychromatic visual representation of the active program that is displayed on the primary display. | (abstract-claims column: verbatim concatenation of the Abstract and Claims cells above) | 2,600 |
10,502 | 10,502 | 15,756,421 | 2,622 | While a graphical user interface is displayed on a touch-sensitive display unit, a tactilely perceptible feedback is output on the display unit by an actuator as long as a finger sweeping over the display unit in the region of the displayed graphical user interface is detected. An intensity of the tactilely perceptible feedback is predefined depending on a speed at which the finger sweeps over the display unit in the region of the displayed graphical user interface. | 1-12. (canceled) 13. A method for operating an input device, comprising:
displaying a graphical user interface, including a sliding controller, in a region of a touch-sensitive display unit; and outputting a tactilely perceptible feedback on the touch-sensitive display unit by an actuator as long as a finger sweeping over the touch-sensitive display unit in the region of the graphical user interface is detected, an intensity of the tactilely perceptible feedback being predefined depending on a speed at which the finger sweeps over the touch-sensitive display unit in the region of the graphical user interface where the sliding controller is displayed, the intensity of the tactilely perceptible feedback being predefined to be smaller as the finger sweeps faster over the touch-sensitive display unit in the region of the graphical user interface. 14. The method as claimed in claim 13,
wherein said displaying displays the sliding controller as a plurality of elements arranged one beside another in a series, and wherein the intensity of the tactilely perceptible feedback is predefined depending on the elements swept over by the finger per unit time. 15. The method as claimed in claim 13, wherein the intensity of the tactilely perceptible feedback is predefined by predefining a stroke of the actuator which causes the touch-sensitive display unit to be deflected in said outputting of the tactilely perceptible feedback. 16. The method as claimed in claim 13, wherein the intensity of the tactilely perceptible feedback is predefined by predefining an acceleration of the actuator which causes the touch-sensitive display unit to be deflected in said outputting of the tactilely perceptible feedback. 17. The method as claimed in claim 13, wherein a first intensity of the tactilely perceptible feedback is predefined for when the speed with which the finger sweeps over the touch-sensitive display unit in the region of the graphical user interface is less than a predefined speed value, and a second intensity of the tactilely perceptible feedback is predefined for when the speed with which the finger sweeps over the touch-sensitive display unit in the region of the displayed graphical user interface is at least equal in magnitude to the predefined speed value. 18. The method as claimed in claim 13, wherein the intensity of the tactilely perceptible feedback is predefined in accordance with a family of characteristic curves in which different speed ranges of the finger over the touch-sensitive display unit in the region of the graphical user interface are assigned respective intensities of the tactilely perceptible feedback. 19. An input device, comprising:
a touch-sensitive display unit having a region displaying a graphical user interface with a sliding controller; an actuator configured to output a tactilely perceptible feedback on the touch-sensitive display unit; and a control unit configured to drive the actuator to output the tactilely perceptible feedback, as long as a finger sweeping over the touch-sensitive display unit in the region of the graphical user interface displayed by said touch-sensitive display unit is detected, with an intensity of the tactilely perceptible feedback depending on a speed at which the finger sweeps over the touch-sensitive display unit in the region of the graphical user interface where the sliding controller is displayed, the intensity of the tactilely perceptible feedback being predefined to be smaller as the finger sweeps faster over the touch-sensitive display unit in the region of the graphical user interface. 20. The input device as claimed in claim 19,
wherein said touch-sensitive display unit displays the sliding controller as a plurality of elements arranged one beside another in a series, and wherein said control unit is configured to predefine the intensity of the tactilely perceptible feedback depending on the elements swept over by the finger per unit time. 21. The input device as claimed in claim 19, wherein said control unit is configured to predefine the intensity of the tactilely perceptible feedback by predefining a stroke of said actuator which causes said touch-sensitive display unit to be deflected to output the tactilely perceptible feedback. 22. The input device as claimed in claim 19, wherein said control unit is configured to predefine the intensity of the tactilely perceptible feedback by predefining an acceleration of said actuator which causes said touch-sensitive display unit to be deflected in outputting the tactilely perceptible feedback. 23. The input device as claimed in claim 19, wherein said control unit is configured to predefine the intensity of the tactilely perceptible feedback with a first intensity when the speed with which the finger sweeps over the display unit in the region of the graphical user interface is less than a predefined speed value, and a second intensity when the speed with which the finger sweeps over the display unit in the region of the displayed graphical user interface is at least equal in magnitude to the predefined speed value. 24. The input device as claimed in claim 19, wherein said control unit is configured to predefine the intensity of the tactilely perceptible feedback in accordance with a family of characteristic curves in which different speed ranges of the finger over said touch-sensitive display unit in the region of the graphical user interface are assigned respective intensities of the tactilely perceptible feedback. 25. An apparatus, comprising:
an enclosure; and an input device, including
a touch-sensitive display unit having a region displaying a graphical user interface with a sliding controller;
an actuator configured to output a tactilely perceptible feedback on the touch-sensitive display unit; and
a control unit configured to drive the actuator to output the tactilely perceptible feedback, as long as a finger sweeping over the touch-sensitive display unit in the region of the graphical user interface displayed by the touch-sensitive display unit is detected, with an intensity of the tactilely perceptible feedback depending on a speed at which the finger sweeps over the touch-sensitive display unit in the region of the graphical user interface where the sliding controller is displayed, the intensity of the tactilely perceptible feedback being predefined to be smaller as the finger sweeps faster over the touch-sensitive display unit in the region of the graphical user interface. 26. The apparatus as claimed in claim 25, wherein the enclosure is a motor vehicle. 27. The apparatus as claimed in claim 26,
wherein the touch-sensitive display unit displays the sliding controller as a plurality of elements arranged one beside another in a series, and wherein the control unit is configured to predefine the intensity of the tactilely perceptible feedback depending on the elements swept over by the finger per unit time. 28. The apparatus as claimed in claim 26, wherein the control unit is configured to predefine the intensity of the tactilely perceptible feedback by predefining an acceleration of said actuator which causes the touch-sensitive display unit to be deflected in outputting the tactilely perceptible feedback. 29. The apparatus as claimed in claim 26, wherein the control unit is configured to predefine the intensity of the tactilely perceptible feedback with a first intensity when the speed with which the finger sweeps over the display unit in the region of the graphical user interface is less than a predefined speed value, and a second intensity when the speed with which the finger sweeps over the display unit in the region of the displayed graphical user interface is at least equal in magnitude to the predefined speed value. 30. The apparatus as claimed in claim 26, wherein the control unit is configured to predefine the intensity of the tactilely perceptible feedback in accordance with a family of characteristic curves in which different speed ranges of the finger over the touch-sensitive display unit in the region of the graphical user interface are assigned respective intensities of the tactilely perceptible feedback. 31. The apparatus as claimed in claim 25, wherein the enclosure is a smartphone. 32. The apparatus as claimed in claim 25, wherein the enclosure is a tablet computer. | When a graphical user interface is displayed on a touch-sensitive display unit, a tactilely perceptible feedback is output on the display unit by an actuator as long as a finger sweeping over the display unit in the region of the displayed graphical user interface is detected.
An intensity of the tactilely perceptible feedback is predefined depending on a speed at which the finger sweeps over the display unit in the region of the displayed graphical user interface. 1-12. (canceled) 13. A method for operating an input device, comprising:
displaying a graphical user interface, including a sliding controller, in a region of a touch-sensitive display unit; and outputting a tactilely perceptible feedback on the touch-sensitive display unit by an actuator as long as a finger sweeping over the touch-sensitive display unit in the region of the graphical user interface is detected, an intensity of the tactilely perceptible feedback being predefined depending on a speed at which the finger sweeps over the touch-sensitive display unit in the region of the graphical user interface where the sliding controller is displayed, the intensity of the tactilely perceptible feedback being predefined to be smaller as the finger sweeps faster over the touch-sensitive display unit in the region of the graphical user interface. 14. The method as claimed in claim 13,
wherein said displaying displays the sliding controller as a plurality of elements arranged one beside another in a series, and wherein the intensity of the tactilely perceptible feedback is predefined depending on the elements swept over by the finger per unit time. 15. The method as claimed in claim 13, wherein the intensity of the tactilely perceptible feedback is predefined by predefining a stroke of the actuator which causes the touch-sensitive display unit to be deflected in said outputting of the tactilely perceptible feedback. 16. The method as claimed in claim 13, wherein the intensity of the tactilely perceptible feedback is predefined by predefining an acceleration of the actuator which causes the touch-sensitive display unit to be deflected in said outputting of the tactilely perceptible feedback. 17. The method as claimed in claim 13, wherein a first intensity of the tactilely perceptible feedback is predefined for when the speed with which the finger sweeps over the touch-sensitive display unit in the region of the graphical user interface is less than a predefined speed value, and a second intensity of the tactilely perceptible feedback is predefined for when the speed with which the finger sweeps over the touch-sensitive display unit in the region of the displayed graphical user interface is at least equal in magnitude to the predefined speed value. 18. The method as claimed in claim 13, wherein the intensity of the tactilely perceptible feedback is predefined in accordance with a family of characteristic curves in which different speed ranges of the finger over the touch-sensitive display unit in the region of the graphical user interface are assigned respective intensities of the tactilely perceptible feedback. | 2,600
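The claims above specify two concrete mappings from sweep speed to feedback intensity: a two-level threshold (claims 17 and 23) and a family of characteristic curves assigning intensities to speed ranges (claims 18 and 24), with intensity decreasing as the finger sweeps faster. A minimal sketch of both mappings; all function names, units, and numeric values are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of speed-dependent haptic feedback intensity.
# Speeds and intensities are in arbitrary units; thresholds are assumed values.

def two_level_intensity(speed, speed_threshold=50.0,
                        slow_intensity=1.0, fast_intensity=0.4):
    """Claims 17/23: a first intensity below a predefined speed value,
    a second (smaller) intensity at or above it."""
    return slow_intensity if speed < speed_threshold else fast_intensity

def curve_intensity(speed, curve=((0.0, 1.0), (30.0, 0.7), (80.0, 0.4), (150.0, 0.2))):
    """Claims 18/24: a characteristic curve assigning an intensity to each
    speed range; (lower_bound, intensity) pairs in ascending order."""
    intensity = curve[0][1]
    for lower_bound, level in curve:
        if speed >= lower_bound:
            intensity = level  # the last range entered determines the intensity
    return intensity
```

Both mappings are monotonically non-increasing in speed, matching the requirement that the feedback be predefined smaller as the finger sweeps faster.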
10,503 | 10,503 | 15,517,998 | 2,616 | The present invention provides a magnetic resonance imaging system for imaging a subject by a multi-shot imaging. The magnetic resonance imaging system comprises an acquiring unit for acquiring MR raw data corresponding to a plurality of shots; an imaging unit for generating a plurality of folded images from the MR raw data, wherein each of the plurality of folded images is generated from a subset of the MR raw data; a deriving unit for deriving magnitude of each pixel of each folded image; a detecting unit for detecting a motion of the subject during the multi-shot imaging based on similarity measurements of any two folded images of the plurality of folded images, wherein the detecting unit further comprises a first deriving unit configured to derive the measured similarities; and a reconstructing unit for reconstructing a MR image of the subject based on MR raw data obtained according to a detection result of the detecting unit. Since the partially acquired MR raw data is used for motion detection directly, it would be more rapid and stable. | 1-9. (canceled) 10. A magnetic resonance imaging system for imaging a subject by a multi-shot imaging, comprising
an acquiring unit configured to acquire MR raw data corresponding to a plurality of shots; a first imaging unit configured to generate a plurality of folded images from the MR raw data, wherein each of the plurality of folded images is generated from a subset of the MR raw data, the magnetic resonance imaging system further comprising: a deriving unit configured to derive magnitude of each pixel of each folded image; and a detecting unit configured to detect an inter-shot motion of the subject during the multi-shot imaging based on similarity measurements of any two folded images of the plurality of folded images, wherein the detecting unit further comprises a first deriving unit configured to derive the measured similarities. 11. The magnetic resonance imaging system of claim 10, wherein the similarities are results of measurements of dis-similarity or similarity. 12. The magnetic resonance imaging system of claim 10,
wherein the detecting unit further comprises a second determining unit configured to cluster the folded images into at least one static cluster based on the measured similarities, any two folded images of each static cluster having the similarity which indicates that the two folded images are substantially the same, the static cluster having the largest number of the MR raw data being referred to as a reference cluster and each remaining static cluster being referred to as a non-reference cluster. 13. The magnetic resonance imaging system of claim 12, wherein the detecting unit further comprises:
a third determining unit configured to determine a motion parameter for a rigid motion of a folded image in the non-reference cluster relative to a folded image in the reference cluster by maximizing the similarity between the two folded images. 14. The magnetic resonance imaging system of claim 13, further comprising:
a second imaging unit configured to reconstruct a MR image of the subject from the MR raw data of the reference cluster or from the MR raw data corresponding to the plurality of shots based on the motion parameter. 15. The magnetic resonance imaging system of claim 12, wherein the detecting unit further comprises:
a second deriving unit configured to derive a matrix of the similarities based on the similarity measurements of any two folded images of the plurality of folded images, each element of the matrix indicative of the measured similarity of corresponding two folded images; wherein the second determining unit is further configured to derive the reference cluster of the folded images based on the matrix of the similarities. 16. The magnetic resonance imaging system of claim 10, further comprising:
a second imaging unit configured to reconstruct a MR image of the subject based on MR raw data without the inter-shot motion obtained according to a detection result of the detecting unit. 17. A magnetic resonance imaging method for imaging a subject by a multi-shot imaging, comprising:
acquiring MR raw data corresponding to a plurality of shots; generating a plurality of folded images from the MR raw data, wherein each of the plurality of folded images is generated from a subset of the MR raw data, the magnetic resonance imaging method further comprising: deriving magnitude of each pixel of each folded image; measuring similarities of any two folded images of the plurality of folded images; and detecting an inter-shot motion of the subject during the multi-shot imaging based on the measured similarities. 18. The magnetic resonance imaging method of claim 17, wherein the step of detecting further comprises
clustering the folded images into at least one static cluster based on the measured similarities, any two folded images of each static cluster having the similarity which indicates that the two folded images are substantially the same; determining the static cluster having the largest number of the MR raw data as a reference cluster; and determining each remaining static cluster as a non-reference cluster. 19. The magnetic resonance imaging method of claim 18, wherein the step of detecting further comprises:
determining a motion parameter for a rigid motion of a folded image in the non-reference cluster relative to a folded image in the reference cluster by maximizing the similarity between the two folded images. 20. The magnetic resonance imaging method of claim 19, further comprising:
reconstructing a MR image of the subject from the MR raw data of the reference cluster or from the MR raw data corresponding to the plurality of shots based on the motion parameter. 21. The magnetic resonance imaging method of claim 17, further comprising:
reconstructing a MR image of the subject based on MR raw data without the inter-shot motion obtained according to a detection result of the detecting step. 22. A computer program product comprising computer program code means for causing a computer to perform the steps of the method as claimed in claim 18 when said computer program code means is run on the computer. | 2,600
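The inter-shot motion detection claimed above (claims 10-18) rests on two steps: measuring pairwise similarity of folded-image magnitudes, then clustering shots whose images are substantially the same and taking the largest cluster as the reference. A minimal sketch under stated assumptions; the normalized-correlation similarity, the greedy clustering, and the threshold value are illustrative choices, not details from the patent:

```python
import numpy as np

def similarity_matrix(folded_images):
    """Normalized correlation between the magnitude maps of every pair of folded images."""
    mags = [np.abs(img).ravel() for img in folded_images]
    n = len(mags)
    sim = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a = mags[i] - mags[i].mean()
            b = mags[j] - mags[j].mean()
            sim[i, j] = sim[j, i] = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return sim

def reference_cluster(sim, threshold=0.95):
    """Greedily group shots whose similarity exceeds the threshold;
    return the indices of the largest (reference) static cluster."""
    n = sim.shape[0]
    unassigned, clusters = set(range(n)), []
    while unassigned:
        seed = min(unassigned)
        cluster = {j for j in unassigned if sim[seed, j] >= threshold} | {seed}
        clusters.append(sorted(cluster))
        unassigned -= cluster
    return max(clusters, key=len)
```

Shots outside the reference cluster would then either be discarded or registered to it via a rigid motion parameter found by maximizing similarity, as in claims 13-14.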
10,504 | 10,504 | 14,560,726 | 2,696 | An object is to provide a display device that makes it easy to photograph a user's face looking at a display screen, or the like, in a dark place. Another object is to provide a display device that makes it easy to check a user's face looking at a display screen, or the like, in a dark place. A display device includes a first region and a second region. The first region has a function of displaying an image of a subject. The second region has a function of illuminating a subject with light. An electronic device includes a display device and an imaging device. The display device includes a first region and a second region. The first region has a function of displaying an image of a subject that is obtained using the imaging device. The second region has a function of illuminating a subject with light. | 1. A display device comprising:
a first region; and a second region, wherein the first region is configured to display an image of a subject, and wherein the second region is configured to illuminate the subject with light. 2. The display device according to claim 1, wherein the light is white light. 3. A display device comprising:
a first region; and a second region, wherein the first region is configured to display an image, and wherein the second region is configured to emit illuminating light. 4. The display device according to claim 3, wherein the illuminating light is white light. 5. An electronic device comprising:
a display device; and an imaging device, wherein the display device includes a first region and a second region, wherein the first region is configured to display an image of a subject obtained by using the imaging device, and wherein the second region is configured to illuminate the subject with light. 6. The electronic device according to claim 5, wherein the display device and the imaging device are provided on a same surface. 7. The electronic device according to claim 5, wherein the light is white light. 8. A program comprising:
a first function; and a second function, wherein the first function is displaying an image of a subject in a first region of a display device, and wherein the second function is displaying an image for illuminating the subject with light in a second region of the display device. 9. A program comprising:
a first function; a second function; and a third function, wherein the first function is obtaining an image of a subject with use of an imaging device, wherein the second function is displaying the image of the subject in a first region of a display device, and wherein the third function is displaying an image for illuminating the subject with light in a second region of the display device. | 2,600
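The display-device claims above partition the screen into a first region that shows the subject's image and a second region driven with white light to illuminate the subject (e.g. for a front-camera shot in the dark). A minimal sketch of composing such a frame; the centered layout, frame dimensions, and RGB representation are assumptions for illustration only:

```python
import numpy as np

def compose_frame(subject_image, frame_height, frame_width):
    """Place the subject image in a first region of the frame and fill the
    remaining second region with white pixels that act as illumination."""
    # Second region: every pixel not covered by the subject image emits white light.
    frame = np.full((frame_height, frame_width, 3), 255, dtype=np.uint8)
    # First region: the subject's image, centered in the frame here.
    h, w, _ = subject_image.shape
    top = (frame_height - h) // 2
    left = (frame_width - w) // 2
    frame[top:top + h, left:left + w] = subject_image
    return frame
```

A real implementation would refresh the first region with live frames from the imaging device while keeping the surrounding second region white.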
10,505 | 10,505 | 15,413,486 | 2,669 | A framework for overlaying findings on image data is described herein. In accordance with one aspect, the framework extracts one or more findings from a radiology report, and detects one or more anatomical landmarks in image data corresponding to the radiology report. The one or more extracted findings are then correlated to, and overlaid with, the one or more detected anatomical landmarks on the image data. | 1. One or more non-transitory computer readable media embodying a program of instructions executable by machine to perform operations for processing image data, the operations comprising:
extracting one or more findings from a radiology report by performing a Natural Language Processing technique, wherein the one or more extracted findings are represented as one or more tuples; detecting one or more anatomical landmarks in image data corresponding to the radiology report; correlating the one or more tuples to the one or more anatomical landmarks; and overlaying the one or more extracted findings with the correlated one or more anatomical landmarks on the image data. 2. The one or more non-transitory computer readable media of claim 1, wherein performing the Natural Language Processing technique comprises performing machine learning. 3. The one or more non-transitory computer readable media of claim 1, wherein extracting the one or more findings from the radiology report comprises:
chunking text in the radiology report into one or more sentences, and finding anatomical, disease and pathological terms and their modifiers in the one or more sentences. 4. The one or more non-transitory computer readable media of claim 1, wherein at least one of the one or more tuples comprises an ordered list of elements including an anatomical region of interest, a disease or pathology, a morphology, a severity, an etiology, a location, a description modifier, or a combination thereof. 5. A system comprising:
a non-transitory memory device for storing computer readable program code; and a processor in communication with the memory device, the processor being operative with the computer readable program code to perform operations including
receiving a radiology report and corresponding image data,
extracting one or more findings from the radiology report,
detecting one or more anatomical landmarks in the image data,
correlating the one or more extracted findings to the one or more anatomical landmarks, and
overlaying the one or more extracted findings with the correlated one or more anatomical landmarks on the image data. 6. The system of claim 5 wherein the findings comprise anatomical, disease and pathological terms associated with one or more anatomical regions. 7. The system of claim 5 wherein the processor is operative with the computer readable program code to extract the one or more findings by performing a Natural Language Processing technique to analyze text in the radiology report. 8. The system of claim 7 wherein the processor is operative with the computer readable program code to perform the Natural Language Processing technique by performing machine learning. 9. The system of claim 5 wherein the processor is operative with the computer readable program code to extract the one or more findings by
chunking text in the radiology report into one or more sentences, and
finding anatomical, disease and pathological terms in the one or more sentences. 10. The system of claim 9 wherein the processor is operative with the computer readable program code to find the anatomical, disease and pathological terms by matching the anatomical, disease and pathological terms with terms in predefined anatomy, disease and pathology dictionaries. 11. The system of claim 5 wherein the processor is operative with the computer readable program code to represent at least one of the one or more extracted findings as a tuple. 12. The system of claim 11 wherein the tuple comprises an ordered list of an anatomical term and one or more disease and pathological terms. 13. The system of claim 12 wherein the one or more disease and pathological terms comprise a disease or pathology, a morphology, a severity, an etiology, a location, a description modifier, or a combination thereof. 14. The system of claim 5 wherein the processor is operative with the computer readable program code to further assign one or more weights to the one or more extracted findings in accordance with importance or severity. 15. The system of claim 14 wherein the processor is operative with the computer readable program code to assign zero weight to at least one of the one or more extracted findings in response to detecting a negation of existence of disease in the extracted finding. 16. The system of claim 14 wherein the processor is operative with the computer readable program code to overlay the one or more extracted findings that are weighted above a predetermined threshold with the correlated one or more anatomical landmarks on the image data. 17. The system of claim 5 wherein the processor is operative with the computer readable program code to correlate the one or more extracted findings to the one or more anatomical landmarks using one or more predefined rules. 18. 
The system of claim 5, wherein the processor is operative with the computer readable program code to correlate the one or more extracted findings to the one or more anatomical landmarks by
performing a word clustering technique to partition sets of words into clusters of semantically similar words, and correlating a first term in the one or more extracted findings with a second term describing the one or more anatomical landmarks in response to the first and second terms being grouped into a same word cluster. 19. A method, comprising:
receiving a radiology report and corresponding image data; extracting one or more findings from the radiology report; detecting one or more anatomical landmarks in the image data; correlating the one or more extracted findings to the one or more anatomical landmarks; and overlaying the one or more extracted findings with the correlated one or more anatomical landmarks on the image data. 20. The method of claim 19 wherein extracting the one or more findings from the radiology report comprises performing a Natural Language Processing technique. | A framework for overlaying findings on image data is described herein. In accordance with one aspect, the framework extracts one or more findings from a radiology report, and detects one or more anatomical landmarks in image data corresponding to the radiology report. The one or more extracted findings are then correlated to, and overlaid with, the one or more detected anatomical landmarks on the image data.1. One or more non-transitory computer readable media embodying a program of instructions executable by machine to perform operations for processing image data, the operations comprising:
extracting one or more findings from a radiology report by performing a Natural Language Processing technique, wherein the one or more extracted findings are represented as one or more tuples; detecting one or more anatomical landmarks in image data corresponding to the radiology report; correlating the one or more tuples to the one or more anatomical landmarks; and overlaying the one or more extracted findings with the correlated one or more anatomical landmarks on the image data. 2. The one or more non-transitory computer readable media of claim 1, wherein performing the Natural Language Processing technique comprises performing machine learning. 3. The one or more non-transitory computer readable media of claim 1, wherein extracting the one or more findings from the radiology report comprises:
chunking text in the radiology report into one or more sentences, and finding anatomical, disease and pathological terms and their modifiers in the one or more sentences. 4. The one or more non-transitory computer readable media of claim 1, wherein at least one of the one or more tuples comprises an ordered list of elements including an anatomical region of interest, a disease or pathology, a morphology, a severity, an etiology, a location, a description modifier, or a combination thereof. 5. A system comprising:
a non-transitory memory device for storing computer readable program code; and a processor in communication with the memory device, the processor being operative with the computer readable program code to perform operations including
receiving a radiology report and corresponding image data,
extracting one or more findings from the radiology report,
detecting one or more anatomical landmarks in the image data,
correlating the one or more extracted findings to the one or more anatomical landmarks, and
overlaying the one or more extracted findings with the correlated one or more anatomical landmarks on the image data. 6. The system of claim 5 wherein the findings comprise anatomical, disease and pathological terms associated with one or more anatomical regions. 7. The system of claim 5 wherein the processor is operative with the computer readable program code to extract the one or more findings by performing a Natural Language Processing technique to analyze text in the radiology report. 8. The system of claim 7 wherein the processor is operative with the computer readable program code to perform the Natural Language Processing technique by performing machine learning. 9. The system of claim 5 wherein the processor is operative with the computer readable program code to extract the one or more findings by
chunking text in the radiology report into one or more sentences, and
finding anatomical, disease and pathological terms in the one or more sentences. 10. The system of claim 9 wherein the processor is operative with the computer readable program code to find the anatomical, disease and pathological terms by matching the anatomical, disease and pathological terms with terms in predefined anatomy, disease and pathology dictionaries. 11. The system of claim 5 wherein the processor is operative with the computer readable program code to represent at least one of the one or more extracted findings as a tuple. 12. The system of claim 11 wherein the tuple comprises an ordered list of an anatomical term and one or more disease and pathological terms. 13. The system of claim 12 wherein the one or more disease and pathological terms comprise a disease or pathology, a morphology, a severity, an etiology, a location, a description modifier, or a combination thereof. 14. The system of claim 5 wherein the processor is operative with the computer readable program code to further assign one or more weights to the one or more extracted findings in accordance with importance or severity. 15. The system of claim 14 wherein the processor is operative with the computer readable program code to assign zero weight to at least one of the one or more extracted findings in response to detecting a negation of existence of disease in the extracted finding. 16. The system of claim 14 wherein the processor is operative with the computer readable program code to overlay the one or more extracted findings that are weighted above a predetermined threshold with the correlated one or more anatomical landmarks on the image data. 17. The system of claim 5 wherein the processor is operative with the computer readable program code to correlate the one or more extracted findings to the one or more anatomical landmarks using one or more predefined rules. 18. 
The system of claim 5, wherein the processor is operative with the computer readable program code to correlate the one or more extracted findings to the one or more anatomical landmarks by
performing a word clustering technique to partition sets of words into clusters of semantically similar words, and correlating a first term in the one or more extracted findings with a second term describing the one or more anatomical landmarks in response to the first and second terms being grouped into a same word cluster. 19. A method, comprising:
receiving a radiology report and corresponding image data; extracting one or more findings from the radiology report; detecting one or more anatomical landmarks in the image data; correlating the one or more extracted findings to the one or more anatomical landmarks; and overlaying the one or more extracted findings with the correlated one or more anatomical landmarks on the image data. 20. The method of claim 19 wherein extracting the one or more findings from the radiology report comprises performing a Natural Language Processing technique. | 2,600 |
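The extraction steps recited in this record's claims (sentence chunking in claim 9, dictionary matching in claim 10, tuple formation in claim 11, zero-weighting of negated findings in claim 15) can be sketched as follows. This is a minimal illustration only: the dictionaries, sample report text, tuple fields, and substring matching are invented for the sketch and are not the patent's actual implementation.

```python
import re

# Illustrative term dictionaries (claim 10 recites predefined anatomy,
# disease and pathology dictionaries; these entries are made up).
ANATOMY = {"lung", "liver", "left lower lobe"}
PATHOLOGY = {"nodule", "opacity", "lesion"}
SEVERITY = {"mild", "moderate", "severe"}
NEGATIONS = {"no", "without", "absence of"}

def chunk_sentences(report: str):
    """Chunk the report text into sentences (claim 9)."""
    return [s.strip() for s in re.split(r"[.\n]+", report) if s.strip()]

def extract_findings(report: str):
    """Return findings as (anatomy, pathology, severity, weight) tuples.

    A zero weight models claim 15: a negated finding is kept but
    weighted zero, so it would never be overlaid on the image data.
    """
    findings = []
    for sentence in chunk_sentences(report):
        text = sentence.lower()
        anatomy = next((t for t in ANATOMY if t in text), None)
        disease = next((t for t in PATHOLOGY if t in text), None)
        if anatomy is None or disease is None:
            continue  # a finding needs both an anatomical and a disease term
        severity = next((t for t in SEVERITY if t in text), None)
        weight = 0.0 if any(n + " " in text for n in NEGATIONS) else 1.0
        findings.append((anatomy, disease, severity, weight))
    return findings

report = ("Severe nodule in the left lower lobe. "
          "No lesion in the liver.")
for finding in extract_findings(report):
    print(finding)
```

Correlating each tuple's anatomical term to a detected landmark (claims 17-18) would then be a lookup or word-clustering step on the first element of each tuple; that part is omitted here.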
10,506 | 10,506 | 14,922,930 | 2,625 | One embodiment provides a method, including: receiving, from at least one sensor of a band shaped wearable device, depth data, wherein the depth data is based upon a gesture performed by a user and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; identifying, using a processor, the gesture performed by a user using the depth data; and performing an action based upon the gesture identified. Other aspects are described and claimed. | 1. A method, comprising:
receiving, from at least one sensor of a band shaped wearable device, depth data, wherein the depth data is based upon a gesture performed by a user with a body part and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; identifying, using a processor, the gesture performed by a user using the depth data, wherein the identifying comprises determining a location of features of the body part with respect to other features of the body part; and performing an action based upon the gesture identified. 2. The method of claim 1, further comprising forming at least one image associated with the gesture performed by a user using the depth data. 3. The method of claim 1, wherein the receiving comprises receiving depth data from at least two sensors of the band shaped wearable device. 4. The method of claim 3, wherein the forming comprises forming at least two images, each image based upon depth data received from one of the at least two sensors. 5. The method of claim 3, further comprising forming a single image by combining the depth data received from the at least two sensors. 6. The method of claim 1, wherein the identifying comprises comparing the depth data to previously stored data. 7. The method of claim 1, further comprising receiving additional data. 8. The method of claim 7, wherein receiving additional data comprises receiving movement data associated with the gesture performed by a user. 9. The method of claim 7, wherein receiving additional data comprises receiving audio data. 10. The method of claim 1, wherein the depth data comprises infrared data. 11. The method of claim 1, wherein the performing an action comprises sending gesture data to an alternate device. 12. A wearable device, comprising:

a band shaped wearable housing; at least one sensor disposed on the band shaped wearable housing; a processor operatively coupled to the at least one sensor and housed by the band shaped wearable housing; a memory that stores instructions executable by the processor to: receive, from the at least one sensor, depth data, wherein the depth data is based upon a gesture performed by a user with a body part and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; identify the gesture performed by a user using the depth data, wherein to identify comprises determining a location of features of the body part with respect to other features of the body part; and perform an action based upon the gesture identified. 13. The wearable device of claim 12, wherein the instructions are further executable by the processor to form at least one image associated with the gesture performed by a user using the depth data. 14. The wearable device of claim 12, wherein to receive comprises receiving depth data from at least two sensors operatively coupled to the wearable device. 15. The wearable device of claim 14, wherein to form comprises forming at least two images, each image based upon depth data received from one of the at least two sensors. 16. The wearable device of claim 14, wherein the instructions are further executable by the processor to form a single image by combining the depth data received from the at least two sensors. 17. The wearable device of claim 12, wherein to identify comprises comparing the depth data to previously stored data. 18. The wearable device of claim 12, wherein the instructions are further executable by the processor to receive additional data. 19. The wearable device of claim 18, wherein to receive additional data comprises receiving movement data associated with the gesture performed by a user. 20. 
The wearable device of claim 18, wherein to receive additional data comprises receiving audio data. 21. The wearable device of claim 12, wherein to perform an action comprises sending gesture data to an alternate device. 22. A product, comprising:
a storage device that stores code executable by a processor, the code comprising: code that receives, from at least one sensor of a band shaped wearable device, depth data, wherein the depth data is based upon a gesture performed by a user with a body part and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; code that identifies the gesture performed by a user using the depth data, wherein the code that identifies comprises code that determines a location of features of the body part with respect to other features of the body part; and code that performs an action based upon the gesture identified. | One embodiment provides a method, including: receiving, from at least one sensor of a band shaped wearable device, depth data, wherein the depth data is based upon a gesture performed by a user and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; identifying, using a processor, the gesture performed by a user using the depth data; and performing an action based upon the gesture identified. Other aspects are described and claimed.1. A method, comprising:
receiving, from at least one sensor of a band shaped wearable device, depth data, wherein the depth data is based upon a gesture performed by a user with a body part and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; identifying, using a processor, the gesture performed by a user using the depth data, wherein the identifying comprises determining a location of features of the body part with respect to other features of the body part; and performing an action based upon the gesture identified. 2. The method of claim 1, further comprising forming at least one image associated with the gesture performed by a user using the depth data. 3. The method of claim 1, wherein the receiving comprises receiving depth data from at least two sensors of the band shaped wearable device. 4. The method of claim 3, wherein the forming comprises forming at least two images, each image based upon depth data received from one of the at least two sensors. 5. The method of claim 3, further comprising forming a single image by combining the depth data received from the at least two sensors. 6. The method of claim 1, wherein the identifying comprises comparing the depth data to previously stored data. 7. The method of claim 1, further comprising receiving additional data. 8. The method of claim 7, wherein receiving additional data comprises receiving movement data associated with the gesture performed by a user. 9. The method of claim 7, wherein receiving additional data comprises receiving audio data. 10. The method of claim 1, wherein the depth data comprises infrared data. 11. The method of claim 1, wherein the performing an action comprises sending gesture data to an alternate device. 12. A wearable device, comprising:
a band shaped wearable housing; at least one sensor disposed on the band shaped wearable housing; a processor operatively coupled to the at least one sensor and housed by the band shaped wearable housing; a memory that stores instructions executable by the processor to: receive, from the at least one sensor, depth data, wherein the depth data is based upon a gesture performed by a user with a body part and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; identify the gesture performed by a user using the depth data, wherein to identify comprises determining a location of features of the body part with respect to other features of the body part; and perform an action based upon the gesture identified. 13. The wearable device of claim 12, wherein the instructions are further executable by the processor to form at least one image associated with the gesture performed by a user using the depth data. 14. The wearable device of claim 12, wherein to receive comprises receiving depth data from at least two sensors operatively coupled to the wearable device. 15. The wearable device of claim 14, wherein to form comprises forming at least two images, each image based upon depth data received from one of the at least two sensors. 16. The wearable device of claim 14, wherein the instructions are further executable by the processor to form a single image by combining the depth data received from the at least two sensors. 17. The wearable device of claim 12, wherein to identify comprises comparing the depth data to previously stored data. 18. The wearable device of claim 12, wherein the instructions are further executable by the processor to receive additional data. 19. The wearable device of claim 18, wherein to receive additional data comprises receiving movement data associated with the gesture performed by a user. 20. 
The wearable device of claim 18, wherein to receive additional data comprises receiving audio data. 21. The wearable device of claim 12, wherein to perform an action comprises sending gesture data to an alternate device. 22. A product, comprising:
a storage device that stores code executable by a processor, the code comprising: code that receives, from at least one sensor of a band shaped wearable device, depth data, wherein the depth data is based upon a gesture performed by a user with a body part and wherein the depth data comprises data associated with a distance between the gesture and the band shaped wearable device; code that identifies the gesture performed by a user using the depth data, wherein the code that identifies comprises code that determines a location of features of the body part with respect to other features of the body part; and code that performs an action based upon the gesture identified. | 2,600 |
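The identification path recited in this record (combining depth data from two sensors into a single image per claim 16, then comparing it to previously stored data per claim 17) can be sketched as below. The tiny 2x2 depth images, the averaging combiner, and the nearest-template matcher are assumptions made for the sketch; the patent does not specify image sizes, the combining operation, or the comparison method.

```python
import math

# Hypothetical stored gesture templates: tiny 2x2 depth images
# (claim 17 recites comparing depth data to previously stored data;
# real depth images would be far larger).
TEMPLATES = {
    "pinch": [[10.0, 12.0], [11.0, 13.0]],
    "swipe": [[30.0, 31.0], [29.0, 32.0]],
}

def combine(depth_a, depth_b):
    """Form a single image from two sensors by averaging (claim 16)."""
    return [[(a + b) / 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(depth_a, depth_b)]

def identify(depth):
    """Identify the gesture as the closest stored template (claim 17)."""
    def dist(template):
        # Euclidean distance between the observed image and a template.
        return math.sqrt(sum((p - q) ** 2
                             for row_p, row_q in zip(depth, template)
                             for p, q in zip(row_p, row_q)))
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name]))

left = [[9.0, 12.0], [11.0, 14.0]]    # depth data from sensor 1
right = [[11.0, 12.0], [11.0, 12.0]]  # depth data from sensor 2
print(identify(combine(left, right)))
```

The claimed determination of body-part feature locations would replace the naive whole-image comparison here with feature extraction before matching; that step is omitted for brevity.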
10,507 | 10,507 | 15,984,672 | 2,689 | A residential security system includes a central security system and a plurality of offline security system components. Each of the offline security system components is directly connected to the central security system. At least one door lock includes a mobile credentialing system. The central security system includes software configured to automatically authorize an entry in response to receiving an authorized mobile credential from the at least one door lock system. | 1. A method for automatically authorizing an entrant in a residential security system comprising:
unlocking at least one entryway in a residential building in response to a door lock receiving an authorized mobile credential; transmitting a first notice from the door lock to a central security system, wherein the notice includes at least one of an identity corresponding to the mobile credential and data indicative of an authorized entrance; and transmitting a second notice from the central security system to at least one offline security system component in response to the first notice, wherein the second notice includes an indication that an authorized entrant has passed through the at least one entryway. 2. The method of claim 1, wherein unlocking at least one entryway in response to the door lock receiving the authorized mobile credential comprises unlocking a subset of entryways in the residential building and allowing a remainder of entryways in the residential building to remain locked. 3. The method of claim 1, further comprising prompting a mobile device owned by a user corresponding to the authorized mobile credential to verify that the user has accessed the door lock. 4. The method of claim 3, wherein the verification is at least one of a button push, a code entry, and a biometric key. 5. The method of claim 1, wherein transmitting the first notice comprises transmitting the first notice over a direct connection. 6. The method of claim 1, wherein transmitting the first notice comprises transmitting the first notice over a network connection. 7. The method of claim 1, wherein unlocking the at least one entryway in the residential building in response to the door lock receiving the authorized mobile credential occurs in response to a communication from a mobile credentialing server. 8. The method of claim 7, wherein the mobile credentialing server is a component of a central security system. 9. The method of claim 7, wherein the mobile credentialing system is connected to the central security system via a network. 10. 
The method of claim 7, further comprising transmitting a received mobile credential to the mobile credentialing server in response to receiving the mobile credential at the door lock and prior to unlocking the at least one entryway. 11. The method of claim 1, wherein the central security system automatically performs one of arming the security system, disarming the security system, transferring the security system to a home mode and transferring the security system to an away mode in response to receiving the first notice. 12. The method of claim 1, wherein the central security system automatically prompts a mobile device to display an arm system option in response to receiving the authorized mobile credential. 13. A residential security system comprising:
a central security system; a plurality of offline security system components, each of the offline security system components being directly connected to the central security system; at least one door lock including a mobile credentialing system; and wherein the central security system includes software configured to automatically authorize an entry in response to receiving an authorized mobile credential from the at least one door lock system. 14. The residential security system of claim 13, wherein the plurality of offline security system components include at least one of a window sensor and a garage sensor. 15. The residential security system of claim 13, wherein at least one of said security system components in said plurality of security system components is wirelessly connected to the central security system. 16. The residential security system of claim 13, wherein a mobile credentialing server is included within the central security system. 17. The residential security system of claim 13, wherein a mobile credentialing server is connected to the central security system and the at least one door lock via a network. 18. The residential security system of claim 13, wherein the door lock is an offline component. 19. The residential security system of claim 18, wherein the door lock includes a local wireless communication protocol and is configured to communicate with at least one mobile device using the local wireless communication protocol. 20. The residential security system of claim 18, wherein the door lock is directly connected to at least one of the central security system and a mobile credentialing server.
The central security system includes software configured to automatically authorize an entry in response to receiving an authorized mobile credential from the at least one door lock system.1. A method for automatically authorizing an entrant in a residential security system comprising:
unlocking at least one entryway in a residential building in response to a door lock receiving an authorized mobile credential; transmitting a first notice from the door lock to a central security system, wherein the notice includes at least one of an identity corresponding to the mobile credential and data indicative of an authorized entrance; and transmitting a second notice from the central security system to at least one offline security system component in response to the first notice, wherein the second notice includes an indication that an authorized entrant has passed through the at least one entryway. 2. The method of claim 1, wherein unlocking at least one entryway in response to the door lock receiving the authorized mobile credential comprises unlocking a subset of entryways in the residential building and allowing a remainder of entryways in the residential building to remain locked. 3. The method of claim 1, further comprising prompting a mobile device owned by a user corresponding to the authorized mobile credential to verify that the user has accessed the door lock. 4. The method of claim 3, wherein the verification is at least one of a button push, a code entry, and a biometric key. 5. The method of claim 1, wherein transmitting the first notice comprises transmitting the first notice over a direct connection. 6. The method of claim 1, wherein transmitting the first notice comprises transmitting the first notice over a network connection. 7. The method of claim 1, wherein unlocking the at least one entryway in the residential building in response to the door lock receiving the authorized mobile credential occurs in response to a communication from a mobile credentialing server. 8. The method of claim 7, wherein the mobile credentialing server is a component of a central security system. 9. The method of claim 7, wherein the mobile credentialing system is connected to the central security system via a network. 10. 
The method of claim 7, further comprising transmitting a received mobile credential to the mobile credentialing server in response to receiving the mobile credential at the door lock and prior to unlocking the at least one entryway. 11. The method of claim 1, wherein the central security system automatically performs one of arming the security system, disarming the security system, transferring the security system to a home mode and transferring the security system to an away mode in response to receiving the first notice. 12. The method of claim 1, wherein the central security system automatically prompts a mobile device to display an arm system option in response to receiving the authorized mobile credential. 13. A residential security system comprising:
a central security system; a plurality of offline security system components, each of the offline security system components being directly connected to the central security system; at least one door lock including a mobile credentialing system; and wherein the central security system includes software configured to automatically authorize an entry in response to receiving an authorized mobile credential from the at least one door lock system. 14. The residential security system of claim 13, wherein the plurality of offline security system components include at least one of a window sensor and a garage sensor. 15. The residential security system of claim 13, wherein at least one of said security system components in said plurality of security system components is wirelessly connected to the central security system. 16. The residential security system of claim 13, wherein a mobile credentialing server is included within the central security system. 17. The residential security system of claim 13, wherein a mobile credentialing server is connected to the central security system and the at least one door lock via a network. 18. The residential security system of claim 13, wherein the door lock is an offline component. 19. The residential security system of claim 18, wherein the door lock includes a local wireless communication protocol and is configured to communicate with at least one mobile device using the local wireless communication protocol. 20. The residential security system of claim 18, wherein the door lock is directly connected to at least one of the central security system and a mobile credentialing server.
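The authorization flow recited in this record's method claims (credential check against a server per claim 10, unlocking only a subset of entryways per claim 2, a first notice to the central system and a second notice to offline components per claim 1, automatic disarming per claim 11) can be sketched as below. The credential table, entryway names, and class layout are invented for illustration; the patent does not specify how credentials or entry rights are stored.

```python
# Hypothetical credential table standing in for the mobile
# credentialing server's records (claim 10).
AUTHORIZED = {"cred-123": {"identity": "resident-a",
                           "entryways": {"front door"}}}

class CentralSecuritySystem:
    def __init__(self):
        self.mode = "armed"
        self.offline_notices = []

    def first_notice(self, identity, entryway):
        # Claim 11: automatically disarm on an authorized entry notice.
        self.mode = "disarmed"
        # Claim 1: send a second notice to offline components indicating
        # an authorized entrant passed through the entryway.
        self.offline_notices.append((identity, entryway))

def handle_credential(credential, entryway, central):
    """Unlock only entryways the credential covers (claim 2), then
    notify the central system (claim 1). Returns True if unlocked."""
    grant = AUTHORIZED.get(credential)  # claim 10: server-side check
    if grant is None or entryway not in grant["entryways"]:
        return False  # entryway stays locked
    central.first_notice(grant["identity"], entryway)
    return True

central = CentralSecuritySystem()
print(handle_credential("cred-123", "front door", central))  # True
print(handle_credential("cred-123", "back door", central))   # False
print(central.mode, central.offline_notices)
```

The optional user verification of claims 3-4 (button push, code entry, biometric key) would be an extra check inside `handle_credential` before the notice is sent.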
A wall module for use in a building control system includes a wall module controller and a first communications module operatively coupled to the wall module controller for establishing wired communication with a remote building controller, and for receiving instructions and/or data from the remote building controller and for sending instructions and/or data to the remote building controller via the established wired communication. The wall module includes a second communications module operatively coupled to the wall module controller for establishing short range wireless communication having a communication range of less than 60 feet with a portable handheld device, and for receiving instructions from the portable handheld device and for sending data to the portable handheld device via the established short range wireless communication.
a wall module controller; a first communications module operatively coupled to the wall module controller for establishing wired communication with a remote building controller, and for receiving instructions and/or data from the remote building controller and for sending instructions and/or data to the remote building controller via the established wired communication; and a second communications module operatively coupled to the wall module controller for establishing short range wireless communication having a communication range of less than 60 feet with a portable handheld device, and for receiving instructions from the portable handheld device and for sending data to the portable handheld device via the established short range wireless communication. 2. The wall module of claim 1, wherein the wall module controller is configured to direct one or more instructions received from the portable handheld device via the second communications module to the remote building controller via the first communications module. 3. The wall module of claim 1, wherein the wall module controller is configured to direct at least some data received from the remote building controller via the first communications module to the portable handheld device via the second communications module. 4. The wall module of claim 1, wherein the wall module controller is further configured to provide one or more control signals to one or more components of the building control system. 5. The wall module of claim 4, wherein the one or more components of the building control system comprises one or more of an HVAC component, a lighting component and a security system component. 6. The wall module of claim 1, wherein the portable handheld device is a smartphone. 7. The wall module of claim 1, wherein the portable handheld device is a tablet. 8. A wall module for use in a building control system, the wall module comprising:
a wall module controller; a first communications module operatively coupled to the wall module controller for establishing communication with a remote building controller and for receiving instructions and/or data from the remote building controller and for sending instructions and/or data to the remote building controller; a second communications module operatively coupled to the wall module controller for establishing communication with a portable handheld device and for receiving instructions from the portable handheld device and for sending data to the portable handheld device; and wherein the first communications module implements a different communication protocol than the second communications module. 9. The wall module of claim 8, wherein the first communications module is configured to establish wired communication with the remote building controller. 10. The wall module of claim 8, wherein the second communications module is configured to establish wireless communication with the portable handheld device. 11. The wall module of claim 10, wherein the second communications module is configured to establish short-range wireless communication having a range of less than 60 feet. 12. The wall module of claim 10, wherein the second communications module is configured to establish WiFi communication with the portable handheld device. 13. The wall module of claim 8, wherein the wall module controller is configured to direct one or more instructions received from the portable handheld device via the second communications module to the remote building controller via the first communications module. 14. The wall module of claim 8, wherein the wall module controller is configured to direct at least some data received from the remote building controller via the first communications module to the portable handheld device via the second communications module. 15. 
The wall module of claim 8, wherein the wall module controller is further configured to provide one or more control signals to one or more components of the building control system. 16. The wall module of claim 15, wherein the one or more components of the building control system comprises one or more of an HVAC component, a lighting component and a security system component. 17. A method of securely communicating building control information in a building control system including a building controller and a wall module both disposed within a building, the wall module remote from the building controller, the method comprising:
communicating building control information between the building controller and the wall module over a first communications network; communicating building control information between the wall module and a portable handheld device over a second communications network, the second communications network comprising a short range wireless communications protocol having a communication range that is less than 60 feet; wherein an individual using the portable handheld device can communicate directly with the wall module over the second communications network. 18. The method of claim 17, wherein the building control information communicated between the building controller and the wall module comprises one or more instructions received from the portable handheld device via the second communications network. 19. The method of claim 17, wherein the building control information communicated between the wall module and the portable handheld device comprises data received from the building controller via the first communications network. 20. The method of claim 17, further comprising providing one or more control signals from the wall module to one or more components of the building control system, wherein the one or more components of the building control system comprises one or more of an HVAC component, a lighting component and a security system component. | A wall module for use in a building control system includes a wall module controller and a first communications module operatively coupled to the wall module controller for establishing wired communication with a remote building controller, and for receiving instructions and/or data from the remote building controller and for sending instructions and/or data to the remote building controller via the established wired communication. 
The wall module includes a second communications module operatively coupled to the wall module controller for establishing short range wireless communication having a communication range of less than 60 feet with a portable handheld device, and for receiving instructions from the portable handheld device and for sending data to the portable handheld device via the established short range wireless communication. 1. A wall module for use in a building control system, the wall module comprising:
a wall module controller; a first communications module operatively coupled to the wall module controller for establishing wired communication with a remote building controller, and for receiving instructions and/or data from the remote building controller and for sending instructions and/or data to the remote building controller via the established wired communication; and a second communications module operatively coupled to the wall module controller for establishing short range wireless communication having a communication range of less than 60 feet with a portable handheld device, and for receiving instructions from the portable handheld device and for sending data to the portable handheld device via the established short range wireless communication. 2. The wall module of claim 1, wherein the wall module controller is configured to direct one or more instructions received from the portable handheld device via the second communications module to the remote building controller via the first communications module. 3. The wall module of claim 1, wherein the wall module controller is configured to direct at least some data received from the remote building controller via the first communications module to the portable handheld device via the second communications module. 4. The wall module of claim 1, wherein the wall module controller is further configured to provide one or more control signals to one or more components of the building control system. 5. The wall module of claim 4, wherein the one or more components of the building control system comprises one or more of an HVAC component, a lighting component and a security system component. 6. The wall module of claim 1, wherein the portable handheld device is a smartphone. 7. The wall module of claim 1, wherein the portable handheld device is a tablet. 8. A wall module for use in a building control system, the wall module comprising:
a wall module controller; a first communications module operatively coupled to the wall module controller for establishing communication with a remote building controller and for receiving instructions and/or data from the remote building controller and for sending instructions and/or data to the remote building controller; a second communications module operatively coupled to the wall module controller for establishing communication with a portable handheld device and for receiving instructions from the portable handheld device and for sending data to the portable handheld device; and wherein the first communications module implements a different communication protocol than the second communications module. 9. The wall module of claim 8, wherein the first communications module is configured to establish wired communication with the remote building controller. 10. The wall module of claim 8, wherein the second communications module is configured to establish wireless communication with the portable handheld device. 11. The wall module of claim 10, wherein the second communications module is configured to establish short-range wireless communication having a range of less than 60 feet. 12. The wall module of claim 10, wherein the second communications module is configured to establish WiFi communication with the portable handheld device. 13. The wall module of claim 8, wherein the wall module controller is configured to direct one or more instructions received from the portable handheld device via the second communications module to the remote building controller via the first communications module. 14. The wall module of claim 8, wherein the wall module controller is configured to direct at least some data received from the remote building controller via the first communications module to the portable handheld device via the second communications module. 15. 
The wall module of claim 8, wherein the wall module controller is further configured to provide one or more control signals to one or more components of the building control system. 16. The wall module of claim 15, wherein the one or more components of the building control system comprises one or more of an HVAC component, a lighting component and a security system component. 17. A method of securely communicating building control information in a building control system including a building controller and a wall module both disposed within a building, the wall module remote from the building controller, the method comprising:
communicating building control information between the building controller and the wall module over a first communications network; communicating building control information between the wall module and a portable handheld device over a second communications network, the second communications network comprising a short range wireless communications protocol having a communication range that is less than 60 feet; wherein an individual using the portable handheld device can communicate directly with the wall module over the second communications network. 18. The method of claim 17, wherein the building control information communicated between the building controller and the wall module comprises one or more instructions received from the portable handheld device via the second communications network. 19. The method of claim 17, wherein the building control information communicated between the wall module and the portable handheld device comprises data received from the building controller via the first communications network. 20. The method of claim 17, further comprising providing one or more control signals from the wall module to one or more components of the building control system, wherein the one or more components of the building control system comprises one or more of an HVAC component, a lighting component and a security system component. | 2,600 |
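The wall module record above describes a simple bridging role: instructions arriving over the short-range wireless link are forwarded to the building controller over the wired link, and controller data is forwarded back to the handheld device (claims 2-3 and 13-14). A minimal Python sketch of that routing follows; all class and method names are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the routing described in claims 2-3 and 13-14:
# the wall module controller bridges a wired link to the building
# controller and a short-range (<60 ft) wireless link to a handheld device.

class RecordingLink:
    """Stub communications module that records whatever is sent over it."""
    def __init__(self):
        self.sent = []

    def send(self, payload):
        self.sent.append(payload)


class WallModuleController:
    def __init__(self, wired_link, wireless_link):
        self.wired_link = wired_link        # first communications module (wired)
        self.wireless_link = wireless_link  # second communications module (wireless)

    def on_handheld_instruction(self, instruction):
        # Claims 2/13: forward instructions from the handheld device
        # to the remote building controller over the wired link.
        self.wired_link.send(instruction)

    def on_controller_data(self, data):
        # Claims 3/14: forward building-controller data to the handheld
        # device over the short-range wireless link.
        self.wireless_link.send(data)


wired, wireless = RecordingLink(), RecordingLink()
module = WallModuleController(wired, wireless)
module.on_handheld_instruction({"setpoint_c": 21.5})
module.on_controller_data({"room_temp_c": 22.1})
print(wired.sent)     # instruction routed toward the building controller
print(wireless.sent)  # data routed toward the handheld device
```

The stub links stand in for the two communications modules; in the claimed device each would implement a different protocol (wired vs. short-range wireless, per claim 8).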
10,509 | 10,509 | 14,764,733 | 2,649 | The present disclosure relates to an antenna arrangement ( 12 ) connectable to a transceiver ( 10 ) for simultaneously transmitting and receiving Radio Frequency, RF, signals. The antenna arrangement ( 12 ) comprises two or more sets of antenna elements ( 14, 16, 22 ). The sets of antenna elements have different antenna element spacing and comprise interface units ( 18, 20, 24 ) which are connected to a transceiver ( 10 ). The interface units ( 18, 20, 24 ) are configured for transmitting RF signals with a first frequency and for receiving RF signals with another frequency. | 1-8. (canceled) 9. An antenna arrangement connectable to a transceiver for simultaneously transmitting and receiving radio frequency (RF) signals, said antenna arrangement comprising at least two sets of antenna elements, wherein the first set of antenna elements has a first antenna element spacing and the second set of antenna elements has a second antenna element spacing and wherein an interface unit of the first set of antenna elements is connected to the transceiver for transmitting RF signals with a first frequency and for receiving RF signals with a second frequency and an interface unit of the second set of antenna elements is connected to the transceiver for transmitting RF signals with the second frequency and for receiving RF signals with the first frequency. 10. The antenna arrangement of claim 9, wherein each set of antenna elements is provided in a separate aperture. 11. The antenna arrangement of claim 9, wherein a filter with low Q-value is connected between the transceiver and each interface unit of the sets of antenna elements, respectively. 12. 
The antenna arrangement of claim 9, wherein the first set of antenna elements and the second set of antenna elements cover the bandwidth from 1800 MHz to 2600 MHz and the first antenna element spacing is configured for a frequency of 1800 MHz and the second antenna element spacing is configured for a frequency of 2600 MHz. 13. The antenna arrangement of claim 9, further comprising a third set of antenna elements, which has a third antenna element spacing and wherein an interface unit of the third set of antenna elements is connected to the transceiver for transmitting RF signals with a third frequency and for receiving RF signals with the first or second frequency. 14. The antenna arrangement of claim 13, wherein the third set of antenna elements covers a bandwidth from 1800 MHz to 2600 MHz and the third antenna element spacing is configured for a frequency of 2100 MHz. 15. The antenna arrangement of claim 13, further comprising a fourth set of antenna elements, which covers a bandwidth from 900 MHz to 2600 MHz and the fourth antenna element spacing is configured for a frequency of 900 MHz. 16. A radio base station comprising:
a transceiver; and an antenna arrangement connected to the transceiver and comprising at least two sets of antenna elements, wherein the first set of antenna elements has a first antenna element spacing and the second set of antenna elements has a second antenna element spacing and wherein an interface unit of the first set of antenna elements is connected to the transceiver for transmitting RF signals with a first frequency and for receiving RF signals with a second frequency and an interface unit of the second set of antenna elements is connected to the transceiver for transmitting RF signals with the second frequency and for receiving RF signals with the first frequency. 17. The radio base station of claim 16, wherein each set of antenna elements is provided in a separate aperture. 18. The radio base station of claim 16, wherein a filter with low Q-value is connected between the transceiver and each interface unit of the sets of antenna elements, respectively. 19. The radio base station of claim 16, wherein the first set of antenna elements and the second set of antenna elements cover the bandwidth from 1800 MHz to 2600 MHz and the first antenna element spacing is configured for a frequency of 1800 MHz and the second antenna element spacing is configured for a frequency of 2600 MHz. 20. The radio base station of claim 16, wherein the antenna arrangement comprises a third set of antenna elements, which has a third antenna element spacing, and wherein an interface unit of the third set of antenna elements is connected to the transceiver for transmitting RF signals with a third frequency and for receiving RF signals with the first or second frequency. 21. The radio base station of claim 19, wherein the third set of antenna elements covers a bandwidth from 1800 MHz to 2600 MHz and the third antenna element spacing is configured for a frequency of 2100 MHz. 22. 
The radio base station of claim 19, further comprising a fourth set of antenna elements, which covers a bandwidth from 900 MHz to 2600 MHz, and wherein the fourth antenna element spacing is configured for a frequency of 900 MHz. | The present disclosure relates to an antenna arrangement ( 12 ) connectable to a transceiver ( 10 ) for simultaneously transmitting and receiving Radio Frequency, RF, signals. The antenna arrangement ( 12 ) comprises two or more sets of antenna elements ( 14, 16, 22 ). The sets of antenna elements have different antenna element spacing and comprise interface units ( 18, 20, 24 ) which are connected to a transceiver ( 10 ). The interface units ( 18, 20, 24 ) are configured for transmitting RF signals with a first frequency and for receiving RF signals with another frequency. 1-8. (canceled) 9. An antenna arrangement connectable to a transceiver for simultaneously transmitting and receiving radio frequency (RF) signals, said antenna arrangement comprising at least two sets of antenna elements, wherein the first set of antenna elements has a first antenna element spacing and the second set of antenna elements has a second antenna element spacing and wherein an interface unit of the first set of antenna elements is connected to the transceiver for transmitting RF signals with a first frequency and for receiving RF signals with a second frequency and an interface unit of the second set of antenna elements is connected to the transceiver for transmitting RF signals with the second frequency and for receiving RF signals with the first frequency. 10. The antenna arrangement of claim 9, wherein each set of antenna elements is provided in a separate aperture. 11. The antenna arrangement of claim 9, wherein a filter with low Q-value is connected between the transceiver and each interface unit of the sets of antenna elements, respectively. 12.
The antenna arrangement of claim 9, wherein the first set of antenna elements and the second set of antenna elements cover the bandwidth from 1800 MHz to 2600 MHz and the first antenna element spacing is configured for a frequency of 1800 MHz and the second antenna element spacing is configured for a frequency of 2600 MHz. 13. The antenna arrangement of claim 9, further comprising a third set of antenna elements, which has a third antenna element spacing and wherein an interface unit of the third set of antenna elements is connected to the transceiver for transmitting RF signals with a third frequency and for receiving RF signals with the first or second frequency. 14. The antenna arrangement of claim 13, wherein the third set of antenna elements covers a bandwidth from 1800 MHz to 2600 MHz and the third antenna element spacing is configured for a frequency of 2100 MHz. 15. The antenna arrangement of claim 13, further comprising a fourth set of antenna elements, which covers a bandwidth from 900 MHz to 2600 MHz and the fourth antenna element spacing is configured for a frequency of 900 MHz. 16. A radio base station comprising:
a transceiver; and an antenna arrangement connected to the transceiver and comprising at least two sets of antenna elements, wherein the first set of antenna elements has a first antenna element spacing and the second set of antenna elements has a second antenna element spacing and wherein an interface unit of the first set of antenna elements is connected to the transceiver for transmitting RF signals with a first frequency and for receiving RF signals with a second frequency and an interface unit of the second set of antenna elements is connected to the transceiver for transmitting RF signals with the second frequency and for receiving RF signals with the first frequency. 17. The radio base station of claim 16, wherein each set of antenna elements is provided in a separate aperture. 18. The radio base station of claim 16, wherein a filter with low Q-value is connected between the transceiver and each interface unit of the sets of antenna elements, respectively. 19. The radio base station of claim 16, wherein the first set of antenna elements and the second set of antenna elements cover the bandwidth from 1800 MHz to 2600 MHz and the first antenna element spacing is configured for a frequency of 1800 MHz and the second antenna element spacing is configured for a frequency of 2600 MHz. 20. The radio base station of claim 16, wherein the antenna arrangement comprises a third set of antenna elements, which has a third antenna element spacing, and wherein an interface unit of the third set of antenna elements is connected to the transceiver for transmitting RF signals with a third frequency and for receiving RF signals with the first or second frequency. 21. The radio base station of claim 19, wherein the third set of antenna elements covers a bandwidth from 1800 MHz to 2600 MHz and the third antenna element spacing is configured for a frequency of 2100 MHz. 22. 
The radio base station of claim 19, further comprising a fourth set of antenna elements, which covers a bandwidth from 900 MHz to 2600 MHz, and wherein the fourth antenna element spacing is configured for a frequency of 900 MHz. | 2,600 |
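The antenna claims above tie each element spacing to a design frequency (900, 1800, 2100, 2600 MHz) without stating the spacing rule. A common array-design convention, assumed here purely for illustration, is half-wavelength spacing, d = c / (2f); the short computation below shows the spacings that assumption implies.

```python
# Illustrative only: the claims do not specify the spacing rule, so this
# assumes the common half-wavelength convention d = c / (2 * f).

C = 299_792_458  # speed of light, m/s

def half_wavelength_spacing_cm(freq_hz):
    """Element spacing of lambda/2 for a given design frequency, in cm."""
    return C / (2 * freq_hz) * 100

# Design frequencies named in the claims (claims 12, 14, 15, 19, 21, 22).
for f_mhz in (900, 1800, 2100, 2600):
    print(f"{f_mhz} MHz -> {half_wavelength_spacing_cm(f_mhz * 1e6):.1f} cm")
# -> roughly 16.7, 8.3, 7.1 and 5.8 cm respectively
```

Under this assumption, the sets spaced for 900 MHz would need elements roughly three times farther apart than those spaced for 2600 MHz, which is consistent with the claims keeping each set in a separate aperture (claims 10 and 17).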
10,510 | 10,510 | 15,047,556 | 2,616 | An electronic chip and a chip assembly are described. The electronic chip comprises one or more processing cores and at least one hardware interface coupled to at least one of the one or more processing cores. At least one of the one or more processing cores implements a game engine in hardware. | 1. An electronic chip comprising:
one or more processing cores; and at least one hardware interface coupled to at least one of the one or more processing cores, wherein at least one of the one or more processing cores implements a game engine. 2. The chip according to claim 1, wherein at least one of the processing cores implements a ray trace engine. 3. The chip according to claim 2, wherein the game engine is coupled to the ray trace engine, the ray trace engine being configured to augment functionality of the game engine. 4. The chip according to claim 1, wherein a plurality of processing cores implement a plurality of game engines and a plurality of ray trace engines, wherein each of the plurality of game engines is associated with one of the plurality of ray trace engines. 5. The chip according to claim 4, wherein the associated ray trace engine is a specialized ray trace engine for the respective game engine. 6. The chip according to claim 1, wherein at least one of the processing cores implements a central processing unit. 7. The chip according to claim 6, wherein a plurality of processing cores implement a plurality of central processing units, each central processing unit being configured to execute instructions of a different instruction set architecture. 8. The chip according to claim 1, wherein the game engine is configured to communicate data via the hardware interface with an external graphics processing unit. 9. The chip according to claim 1, wherein at least one of the processing cores implements a graphics processing unit. 10. The chip according to claim 9 further comprising a memory, wherein the game engine is configured to generate data for the graphics processing unit and to provide the data to the graphics processing unit via the memory. 11. The chip according to claim 9, wherein the graphics processing unit includes a video memory and the game engine is configured to provide data to the graphics processing unit via the video memory. 12. 
The chip according to claim 1, wherein the game engine is configured to perform one or more tasks thereby generating commands and/or datasets. 13. The chip according to claim 12, wherein the game engine and the ray trace engine are interoperably coupled to perform the one or more tasks. 14. The chip according to claim 12, wherein the one or more tasks include one or more of determining how objects cast shadows over other objects, determining how objects are reflected in other objects, and determining how light falling on one object illuminates other surrounding objects. 15. The chip according to claim 1, the chip being a system on chip (SoC). 16. A chip assembly comprising:
at least one chip, wherein the at least one chip comprises:
one or more processing cores; and
at least one hardware interface coupled to at least one of the one or more processing cores, wherein at least one of the one or more processing cores implements a game engine. 17. The chip assembly according to claim 16, further comprising at least one central processing unit, wherein the at least one central processing unit is connected to the at least one hardware interface of the chip. 18. The chip assembly according to claim 16, further comprising at least one graphics processing unit, wherein the at least one graphics processing unit is connected to the at least one hardware interface of the chip. 19. The chip assembly according to claim 16, further comprising a memory controller configured to exchange data with the game engine of the chip. 20. The chip assembly according to claim 16 further comprising a plurality of stacked integrated circuits, wherein an integrated circuit is stacked on top of another integrated circuit or on an interposer chip. 21. The chip assembly according to claim 16, wherein the chip assembly is included in a package. 22. The chip assembly according to claim 21, wherein the chip assembly is included in a System in Package (SiP) or a Package on Package. 23. A computing device including:
at least one chip, wherein the at least one chip comprises:
one or more processing cores; and
at least one hardware interface coupled to at least one of the one or more processing cores, wherein at least one of the one or more processing cores implements a game engine; and
a display configured to display data based on data provided by the at least one chip. 24. The computing device according to claim 23, wherein the computing device is a mobile device, a smartphone, or a virtual reality device. | An electronic chip and a chip assembly are described. The electronic chip comprises one or more processing cores and at least one hardware interface coupled to at least one of the one or more processing cores. At least one of the one or more processing cores implements a game engine in hardware. 1. An electronic chip comprising:
one or more processing cores; and at least one hardware interface coupled to at least one of the one or more processing cores, wherein at least one of the one or more processing cores implements a game engine. 2. The chip according to claim 1, wherein at least one of the processing cores implements a ray trace engine. 3. The chip according to claim 2, wherein the game engine is coupled to the ray trace engine, the ray trace engine being configured to augment functionality of the game engine. 4. The chip according to claim 1, wherein a plurality of processing cores implement a plurality of game engines and a plurality of ray trace engines, wherein each of the plurality of game engines is associated with one of the plurality of ray trace engines. 5. The chip according to claim 4, wherein the associated ray trace engine is a specialized ray trace engine for the respective game engine. 6. The chip according to claim 1, wherein at least one of the processing cores implements a central processing unit. 7. The chip according to claim 6, wherein a plurality of processing cores implement a plurality of central processing units, each central processing unit being configured to execute instructions of a different instruction set architecture. 8. The chip according to claim 1, wherein the game engine is configured to communicate data via the hardware interface with an external graphics processing unit. 9. The chip according to claim 1, wherein at least one of the processing cores implements a graphics processing unit. 10. The chip according to claim 9 further comprising a memory, wherein the game engine is configured to generate data for the graphics processing unit and to provide the data to the graphics processing unit via the memory. 11. The chip according to claim 9, wherein the graphics processing unit includes a video memory and the game engine is configured to provide data to the graphics processing unit via the video memory. 12. 
The chip according to claim 1, wherein the game engine is configured to perform one or more tasks thereby generating commands and/or datasets. 13. The chip according to claim 12, wherein the game engine and the ray trace engine are interoperably coupled to perform the one or more tasks. 14. The chip according to claim 12, wherein the one or more tasks include one or more of determining how objects cast shadows over other objects, determining how objects are reflected in other objects, and determining how light falling on one object illuminates other surrounding objects. 15. The chip according to claim 1, the chip being a system on chip (SoC). 16. A chip assembly comprising:
at least one chip, wherein the at least one chip comprises:
one or more processing cores; and
at least one hardware interface coupled to at least one of the one or more processing cores, wherein at least one of the one or more processing cores implements a game engine. 17. The chip assembly according to claim 16, further comprising at least one central processing unit, wherein the at least one central processing unit is connected to the at least one hardware interface of the chip. 18. The chip assembly according to claim 16, further comprising at least one graphics processing unit, wherein the at least one graphics processing unit is connected to the at least one hardware interface of the chip. 19. The chip assembly according to claim 16, further comprising a memory controller configured to exchange data with the game engine of the chip. 20. The chip assembly according to claim 16 further comprising a plurality of stacked integrated circuits, wherein an integrated circuit is stacked on top of another integrated circuit or on an interposer chip. 21. The chip assembly according to claim 16, wherein the chip assembly is included in a package. 22. The chip assembly according to claim 21, wherein the chip assembly is included in a System in Package (SiP) or a Package on Package. 23. A computing device including:
at least one chip, wherein the at least one chip comprises:
one or more processing cores; and
at least one hardware interface coupled to at least one of the one or more processing cores, wherein at least one of the one or more processing cores implements a game engine; and
a display configured to display data based on data provided by the at least one chip. 24. The computing device according to claim 23, wherein the computing device is a mobile device, a smartphone, or a virtual reality device. | 2,600 |
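The chip record above pairs each hardware game engine with an associated ray trace engine that augments it for tasks such as shadow casting and reflections (claims 3-5 and 12-14). A minimal Python sketch of that pairing follows; every name in it is hypothetical, and software classes here only stand in for the claimed hardware cores.

```python
# Illustrative pairing per claims 4-5 and 12-14: one ray trace engine is
# associated with each game engine and cooperates on rendering tasks.

class RayTraceEngine:
    def trace(self, task):
        # Stand-in for the hardware ray-tracing core; returns a labeled result.
        return f"rays traced for {task}"


class GameEngine:
    def __init__(self, ray_trace_engine):
        # Claim 4: each game engine is associated with one ray trace engine.
        self.rt = ray_trace_engine

    def run_task(self, task):
        # Claim 12: performing a task generates commands and/or datasets;
        # claim 13: the paired ray trace engine cooperates on the task.
        return {"commands": [task], "dataset": self.rt.trace(task)}


engine = GameEngine(RayTraceEngine())
# Claim 14 lists example tasks: shadow casting, reflections, illumination.
result = engine.run_task("shadow casting")
print(result["dataset"])  # -> rays traced for shadow casting
```

In the claimed chip this coupling is between on-die cores exchanging data through shared or video memory (claims 10-11), not between Python objects; the sketch only shows the division of labor.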
10,511 | 10,511 | 15,455,196 | 2,698 | An operationally dependent external camera system for use with a portable computer device. The system includes a portable computer device, a camera, and an operation cord. The camera may include a lens, image sensor, circuit board, microphone, and control unit. The operation cord is coupled between the camera and the portable computer device to continuously transmit both data and electrical power between the portable computer device and the camera thereby making the camera operationally dependent on the portable computer device. | 1. An operationally dependent external camera system including a portable computer device comprising:
a portable computer device including at least one of a smartphone, tablet or tablet phone having a total volume smaller than twenty cubic inches; a camera having an outer dimension smaller than three cubic inches, wherein the camera further includes a plurality of functional parameters that correspond to all variable functions of the camera including but not limited to resolution and recording status; an operation cord coupled between the camera and the portable computer device configured to continuously transmit both data and electrical power between the portable computer device and the camera; and wherein the data transmitted between the camera and portable computer device correspond to the camera functional parameters causing the camera to be operationally dependent on the portable computer device for all variable functions of the camera. 2. The system of claim 1, wherein the operation cord includes a controller with at least one user input device configured to affect at least one of the functional parameters of the camera. 3. The system of claim 2, wherein the controller includes a button user input device configured to affect a functional parameter of the camera corresponding to recording on/off status. 4. The system of claim 1, wherein the functional parameters include audio on/off, three dimensional position, volume, brightness, and image capture. 5. The system of claim 1, wherein the electrical power transmitted from the portable computer device to the camera via the operation cord provides all electrical power to the camera. 6. The system of claim 1, wherein the camera includes a releasable coupling member configured to selectively retain the camera on an external user location during operation. 7. The system of claim 1, wherein the camera includes a lens, image sensor, circuit board, microphone, and control unit. 8. The system of claim 7, wherein the operation cord is coupled to the camera at an end opposite to the lens. 9. 
The system of claim 8, wherein the camera includes a coupling recess disposed between the operation cord and the lens. 10. The system of claim 1, wherein the portable computer device includes an application configured to receive a plurality of user inputs corresponding to the functional parameters of the camera, and wherein the application converts the received user inputs into instructions transmitted to the camera effecting at least one of the variable functions of the camera. 11. The system of claim 1, wherein the portable computer device includes an application configured to receive data recorded by the camera via the operation cord. 12. An operationally dependent external camera system including a portable computer device comprising:
a portable computer device including at least one of a smartphone, tablet or tablet phone having a total volume smaller than twenty cubic inches; a camera having an outer dimension smaller than three cubic inches, wherein the camera further includes a plurality of functional parameters that correspond to all variable functions of the camera including but not limited to resolution and recording status, and wherein the camera includes a lens, image sensor, circuit board, microphone, and control unit; an operation cord coupled between the camera and the portable computer device configured to continuously transmit both data and electrical power between the portable computer device and the camera, and wherein the operation cord includes a controller with at least one user input device configured to affect at least one of the functional parameters of the camera, and wherein the operation cord is coupled to the camera at a location opposite the lens; and wherein the data transmitted between the camera and portable computer device correspond to the camera functional parameters causing the camera to be operationally dependent on the portable computer device for all variable functions of the camera. 13. A method of controlling an operationally dependent camera with a portable computer device comprising the acts of:
providing a portable computer device including at least one of a smartphone, tablet or tablet phone having a total volume smaller than twenty cubic inches; providing a camera having an outer dimension smaller than three cubic inches, wherein the camera further includes a plurality of functional parameters that correspond to all variable functions of the camera including but not limited to resolution and recording status, and wherein the camera includes a lens, image sensor, circuit board, microphone, and control unit; coupling an operation cord between the camera and the portable computer device, wherein the operation cord is coupled to the camera at a location opposite the lens, and wherein the portable computer device provides all electrical power to the camera via the operation cord; and controlling all of the variable functions of the camera with the portable computer device via the transmission of the data on the operation cord. 14. The method of claim 13, wherein the operation cord further includes a controller with at least one user input device configured to affect at least one of the functional parameters of the camera. 15. The method of claim 14, wherein the controller includes a button user input device configured to affect a functional parameter of the camera corresponding to recording on/off status. 16. The method of claim 13, wherein the functional parameters include audio on/off, three dimensional position, volume, brightness, and image capture. 17. The method of claim 13, wherein the act of controlling all of the variable functions of the camera includes receiving user input on the portable computer device and transmitting corresponding data to the camera via the operation cord. 18. 
The method of claim 13, wherein the portable computer device includes an application configured to receive a plurality of user inputs corresponding to the control parameters of the camera, and wherein the received user inputs are converted to instructions transmitted to the camera via the operation cord. 19. The method of claim 13, wherein the portable computer device includes an application configured to receive data recorded by the camera via the operation cord. 20. The method of claim 13, wherein the portable computer device is configured to transmit data recorded by the camera over a wireless transmission medium. | An operationally dependent external camera system for use with a portable computer device. The system includes a portable computer device, a camera, and an operation cord. The camera may include a lens, image sensor, circuit board, microphone, and control unit. The operation cord is coupled between the camera and the portable computer device to continuously transmit both data and electrical power between the portable computer device and the camera thereby making the camera operationally dependent on the portable computer device.1. An operationally dependent external camera system including a portable computer device comprising:
a portable computer device including at least one of a smartphone, tablet or tablet phone having a total volume smaller than twenty cubic inches; a camera having an outer dimension smaller than three cubic inches, wherein the camera further includes a plurality of functional parameters that correspond to all variable functions of the camera including but not limited to resolution and recording status; an operation cord coupled between the camera and the portable computer device configured to continuously transmit both data and electrical power between the portable computer device and the camera; and wherein the data transmitted between the camera and portable computer device correspond to the camera functional parameters causing the camera to be operationally dependent on the portable computer device for all variable functions of the camera. 2. The system of claim 1, wherein the operation cord includes a controller with at least one user input device configured to affect at least one of the functional parameters of the camera. 3. The system of claim 2, wherein the controller includes a button user input device configured to affect a functional parameter of the camera corresponding to recording on/off status. 4. The system of claim 1, wherein the functional parameters include audio on/off, three dimensional position, volume, brightness, and image capture. 5. The system of claim 1, wherein the electrical power transmitted from the portable computer device to the camera via the operation cord provides all electrical power to the camera. 6. The system of claim 1, wherein the camera includes a releasable coupling member configured to selectively retain the camera on an external user location during operation. 7. The system of claim 1, wherein the camera includes a lens, image sensor, circuit board, microphone, and control unit. 8. The system of claim 7, wherein the operation cord is coupled to the camera at an end opposite to the lens. 9. 
The system of claim 8, wherein the camera includes a coupling recess disposed between the operation cord and the lens. 10. The system of claim 1, wherein the portable computer device includes an application configured to receive a plurality of user inputs corresponding to the functional parameters of the camera, and wherein the application converts the received user inputs into instructions transmitted to the camera effecting at least one of the variable functions of the camera. 11. The system of claim 1, wherein the portable computer device includes an application configured to receive data recorded by the camera via the operation cord. 12. An operationally dependent external camera system including a portable computer device comprising:
a portable computer device including at least one of a smartphone, tablet or tablet phone having a total volume smaller than twenty cubic inches; a camera having an outer dimension smaller than three cubic inches, wherein the camera further includes a plurality of functional parameters that correspond to all variable functions of the camera including but not limited to resolution and recording status, and wherein the camera includes a lens, image sensor, circuit board, microphone, and control unit; an operation cord coupled between the camera and the portable computer device configured to continuously transmit both data and electrical power between the portable computer device and the camera, and wherein the operation cord includes a controller with at least one user input device configured to affect at least one of the functional parameters of the camera, and wherein the operation cord is coupled to the camera at a location opposite the lens; and wherein the data transmitted between the camera and portable computer device correspond to the camera functional parameters causing the camera to be operationally dependent on the portable computer device for all variable functions of the camera. 13. A method of controlling an operationally dependent camera with a portable computer device comprising the acts of:
providing a portable computer device including at least one of a smartphone, tablet or tablet phone having a total volume smaller than twenty cubic inches; providing a camera having an outer dimension smaller than three cubic inches, wherein the camera further includes a plurality of functional parameters that correspond to all variable functions of the camera including but not limited to resolution and recording status, and wherein the camera includes a lens, image sensor, circuit board, microphone, and control unit; coupling an operation cord between the camera and the portable computer device, wherein the operation cord is coupled to the camera at a location opposite the lens, and wherein the portable computer device provides all electrical power to the camera via the operation cord; and controlling all of the variable functions of the camera with the portable computer device via the transmission of the data on the operation cord. 14. The method of claim 13, wherein the operation cord further includes a controller with at least one user input device configured to affect at least one of the functional parameters of the camera. 15. The method of claim 14, wherein the controller includes a button user input device configured to affect a functional parameter of the camera corresponding to recording on/off status. 16. The method of claim 13, wherein the functional parameters include audio on/off, three dimensional position, volume, brightness, and image capture. 17. The method of claim 13, wherein the act of controlling all of the variable functions of the camera includes receiving user input on the portable computer device and transmitting corresponding data to the camera via the operation cord. 18. 
The method of claim 13, wherein the portable computer device includes an application configured to receive a plurality of user inputs corresponding to the control parameters of the camera, and wherein the received user inputs are converted to instructions transmitted to the camera via the operation cord. 19. The method of claim 13, wherein the portable computer device includes an application configured to receive data recorded by the camera via the operation cord. 20. The method of claim 13, wherein the portable computer device is configured to transmit data recorded by the camera over a wireless transmission medium. | 2,600 |
10,512 | 10,512 | 15,378,920 | 2,674 | Methods, apparatus, and computer readable media are described related to recording, organizing, and making audio files available for consumption by voice-activated products. In various implementations, in response to receiving an input from a first user indicating that the first user intends to record audio content, audio content may be captured and stored. Input may be received from the first user indicating at least one identifier for the audio content. The stored audio content may be associated with the at least one identifier. A voice input may be received from a subsequent user. In response to determining that the voice input has particular characteristics, speech recognition may be biased in respect of the voice input towards recognition of the at least one identifier. In response to recognizing, based on the biased speech recognition, presence of the at least one identifier in the voice input, the stored audio content may be played. | 1. A method comprising:
in response to receiving an input from a first user indicating that the first user intends to record audio content, causing capture and storage of audio content; receiving input from the first user indicating at least one identifier for the audio content; associating the stored audio content with the at least one identifier; receiving a voice input from a subsequent user; analyzing characteristics of the voice input; in response to determining that the voice input has particular characteristics, biasing speech recognition in respect of the voice input towards recognition of the at least one identifier; and in response to recognizing, based on the biased speech recognition, presence of the at least one identifier in the voice input, causing playback of the stored audio content. 2. The method of claim 1, further comprising:
in response to the biased speech recognition yielding a non-recognition of the voice input, causing provision of a selectable option to the subsequent user for enabling the subsequent user to cause playback of the stored content. 3. The method of claim 2, wherein the selectable option includes the at least one identifier. 4. The method of claim 2, wherein the selectable option is an audio prompt. 5. The method of claim 1, further comprising:
in response to receiving the input from the first user indicating that the first user intends to record audio content, providing a prompt to the first user instructing the first user to provide the input indicating the at least one identifier. 6. A computer-implemented method, comprising:
receiving, by a voice-activated product at one or more input devices, a first command from a user, wherein the first command notifies the voice-activated product that the user wishes to record an audible rendition of a narrative; receiving, by the voice-activated product at one or more of the input devices, bibliographic input from the user, wherein the bibliographic input is indicative of bibliographic information associated with the narrative; recording, by the voice-activated product via an audio input device, the audible rendition of the narrative spoken by the user; storing, in computer memory available to the voice-activated product, an audio file comprising the recorded audible rendition of the narrative spoken by the user, wherein the audio file is indexed in the computer memory based at least in part on the bibliographic information; and rendering, by the voice-activated product via an audio output device, the audio file in response to a second command received at one or more of the input devices from the same user or a different user, wherein the second command comprises an indication of the bibliographic information. 7. The computer-implemented method of claim 6, wherein the first command comprises speech received via the audio input device. 8. The computer-implemented method of claim 6, further comprising providing, by the voice-activated product at one or more output devices, a solicitation for the bibliographic information associated with the narrative. 9. The computer-implemented method of claim 6, wherein the bibliographic input comprises speech received via the audio input device. 10. The computer-implemented method of claim 6, wherein the second command comprises speech received via the audio input device. 11. The computer-implemented method of claim 6, wherein the narrative comprises a preexisting written work, and the method further comprises matching the bibliographic information to the preexisting written work in a database. 12. 
The computer-implemented method of claim 11, further comprising providing, by the voice-activated product via one or more of the output devices, additional information associated with the preexisting written work in the database. 13. The computer-implemented method of claim 12, wherein the additional information includes a visual rendition representing the preexisting written work. 14. The computer-implemented method of claim 11, wherein the audio file is further indexed in the computer memory based on additional information associated with the preexisting written work in the database. 15. The computer-implemented method of claim 6, wherein the computer memory stores a plurality of recorded audio files that are indexed by corresponding bibliographic information. 16. The computer-implemented method of claim 15, wherein the plurality of audio files are further indexed by identities of users that recorded them. 17. The computer-implemented method of claim 6, further comprising incorporating, by the voice-activated product into the audio file, one or more sound effects selected by the user. 18. The computer-implemented method of claim 6, further comprising performing voice analysis on the second command to determine that the second command was spoken by a different user than the user. 19. The computer-implemented method of claim 18, further comprising initiating, by the voice-activated product, an interactive dialog tailored towards the different user based on the voice analysis. 20. A voice-enabled device comprising:
one or more processors; one or more speakers operably coupled with the one or more processors; one or more microphones operably coupled with the one or more processors; and memory operably coupled with the one or more processors, wherein the memory stores instructions that, in response to execution of the instructions by the one or more processors, cause the one or more processors to: receive, at one or more of the microphones, a first command from a user, wherein the first command indicates that the user wishes to record an audible rendition of a narrative; receive, at one or more of the microphones, bibliographic input from the user, wherein the bibliographic input is indicative of bibliographic information associated with the narrative; record, via one or more of the microphones, the audible rendition of the narrative spoken by the user; store, in the memory or in computer memory of one or more remote computing devices, an audio file comprising the recorded audible rendition of the narrative spoken by the user, wherein the audio file is indexed based at least in part on the bibliographic information; and render, via one or more of the speakers, the audio file in response to a second command received at one or more of the microphones from the same user or a different user, wherein the second command comprises an indication of the bibliographic information. | Methods, apparatus, and computer readable media are described related to recording, organizing, and making audio files available for consumption by voice-activated products. In various implementations, in response to receiving an input from a first user indicating that the first user intends to record audio content, audio content may be captured and stored. Input may be received from the first user indicating at least one identifier for the audio content. The stored audio content may be associated with the at least one identifier. A voice input may be received from a subsequent user. 
In response to determining that the voice input has particular characteristics, speech recognition may be biased in respect of the voice input towards recognition of the at least one identifier. In response to recognizing, based on the biased speech recognition, presence of the at least one identifier in the voice input, the stored audio content may be played.1. A method comprising:
in response to receiving an input from a first user indicating that the first user intends to record audio content, causing capture and storage of audio content; receiving input from the first user indicating at least one identifier for the audio content; associating the stored audio content with the at least one identifier; receiving a voice input from a subsequent user; analyzing characteristics of the voice input; in response to determining that the voice input has particular characteristics, biasing speech recognition in respect of the voice input towards recognition of the at least one identifier; and in response to recognizing, based on the biased speech recognition, presence of the at least one identifier in the voice input, causing playback of the stored audio content. 2. The method of claim 1, further comprising:
in response to the biased speech recognition yielding a non-recognition of the voice input, causing provision of a selectable option to the subsequent user for enabling the subsequent user to cause playback of the stored content. 3. The method of claim 2, wherein the selectable option includes the at least one identifier. 4. The method of claim 2, wherein the selectable option is an audio prompt. 5. The method of claim 1, further comprising:
in response to receiving the input from the first user indicating that the first user intends to record audio content, providing a prompt to the first user instructing the first user to provide the input indicating the at least one identifier. 6. A computer-implemented method, comprising:
receiving, by a voice-activated product at one or more input devices, a first command from a user, wherein the first command notifies the voice-activated product that the user wishes to record an audible rendition of a narrative; receiving, by the voice-activated product at one or more of the input devices, bibliographic input from the user, wherein the bibliographic input is indicative of bibliographic information associated with the narrative; recording, by the voice-activated product via an audio input device, the audible rendition of the narrative spoken by the user; storing, in computer memory available to the voice-activated product, an audio file comprising the recorded audible rendition of the narrative spoken by the user, wherein the audio file is indexed in the computer memory based at least in part on the bibliographic information; and rendering, by the voice-activated product via an audio output device, the audio file in response to a second command received at one or more of the input devices from the same user or a different user, wherein the second command comprises an indication of the bibliographic information. 7. The computer-implemented method of claim 6, wherein the first command comprises speech received via the audio input device. 8. The computer-implemented method of claim 6, further comprising providing, by the voice-activated product at one or more output devices, a solicitation for the bibliographic information associated with the narrative. 9. The computer-implemented method of claim 6, wherein the bibliographic input comprises speech received via the audio input device. 10. The computer-implemented method of claim 6, wherein the second command comprises speech received via the audio input device. 11. The computer-implemented method of claim 6, wherein the narrative comprises a preexisting written work, and the method further comprises matching the bibliographic information to the preexisting written work in a database. 12. 
The computer-implemented method of claim 11, further comprising providing, by the voice-activated product via one or more of the output devices, additional information associated with the preexisting written work in the database. 13. The computer-implemented method of claim 12, wherein the additional information includes a visual rendition representing the preexisting written work. 14. The computer-implemented method of claim 11, wherein the audio file is further indexed in the computer memory based on additional information associated with the preexisting written work in the database. 15. The computer-implemented method of claim 6, wherein the computer memory stores a plurality of recorded audio files that are indexed by corresponding bibliographic information. 16. The computer-implemented method of claim 15, wherein the plurality of audio files are further indexed by identities of users that recorded them. 17. The computer-implemented method of claim 6, further comprising incorporating, by the voice-activated product into the audio file, one or more sound effects selected by the user. 18. The computer-implemented method of claim 6, further comprising performing voice analysis on the second command to determine that the second command was spoken by a different user than the user. 19. The computer-implemented method of claim 18, further comprising initiating, by the voice-activated product, an interactive dialog tailored towards the different user based on the voice analysis. 20. A voice-enabled device comprising:
one or more processors; one or more speakers operably coupled with the one or more processors; one or more microphones operably coupled with the one or more processors; and memory operably coupled with the one or more processors, wherein the memory stores instructions that, in response to execution of the instructions by the one or more processors, cause the one or more processors to: receive, at one or more of the microphones, a first command from a user, wherein the first command indicates that the user wishes to record an audible rendition of a narrative; receive, at one or more of the microphones, bibliographic input from the user, wherein the bibliographic input is indicative of bibliographic information associated with the narrative; record, via one or more of the microphones, the audible rendition of the narrative spoken by the user; store, in the memory or in computer memory of one or more remote computing devices, an audio file comprising the recorded audible rendition of the narrative spoken by the user, wherein the audio file is indexed based at least in part on the bibliographic information; and render, via one or more of the speakers, the audio file in response to a second command received at one or more of the microphones from the same user or a different user, wherein the second command comprises an indication of the bibliographic information. | 2,600 |
10,513 | 10,513 | 15,874,452 | 2,625 | Provided is an input unit and input method for an information terminal for easy input work and avoiding an operating error. Included are a support, an input unit including a laser device on the support, an information terminal including a sensor in a display portion, and a switch connected with or without a wire to at least one of the laser device and the information terminal. A desired region of the display portion is irradiated with laser light output from the input unit. Information is input to the region by operation of the switch in the state where the region is irradiated with laser light. | 1. An input unit for an information terminal comprising:
a support; a movable portion provided on the support; a laser device provided on the support with the movable portion provided therebetween; and a switch connected to the laser device, wherein the movable portion for adjusting the output direction of a laser light is provided so that the output direction of the laser light from the laser device coincides with a user's line of sight. 2. The input unit according to claim 1, wherein the support is glasses, a hat, a helmet, or a headgear. 3. The input unit according to claim 1, wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch. 4. The input unit according to claim 1,
wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 5. The input unit according to claim 1,
wherein the support is glasses, a hat, a helmet, or a headgear, and wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch. 6. The input unit according to claim 1,
wherein the support is glasses, a hat, a helmet, or a headgear, wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 7. The input unit according to claim 1,
wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch, wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 8. The input unit according to claim 1,
wherein the support is glasses, a hat, a helmet, or a headgear, wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch, wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 9. An input method for an information terminal comprising an input unit for performing input to the information terminal,
wherein the input unit includes a laser device and a switch connected to the laser device, wherein the information terminal includes a display portion, wherein the display portion includes a sensor, wherein a region in the display portion is irradiated with a first laser light output from the laser device, wherein the first laser light is switched from the first laser light to a second laser light, wherein the region is irradiated with the second laser light, wherein the sensor included in the region detects the second laser light, and wherein the first laser light is switched to the second laser light with the switch. 10. The input method according to claim 9, wherein the second laser light and the first laser light have different intensities. 11. The input method according to claim 9, wherein the second laser light has a higher intensity than the first laser light. 12. The input method according to claim 9, wherein each of the first laser light and the second laser light is a pulsed laser light. 13. The input method according to claim 12, wherein the second laser light has a shorter pulse period than the first laser light. 14. The input method according to claim 12, wherein the second laser light and the first laser light have different duty ratios. 15. The input method according to claim 9,
wherein the second laser light and the first laser light have different intensities, and wherein the second laser light has a higher intensity than the first laser light. 16. An input support system for an information terminal including an input unit for performing input to the information terminal, and artificial intelligence,
wherein the input unit includes a laser device and a switch connected to the laser device, wherein the information terminal includes a display portion, wherein the display portion includes a sensor, wherein a region in the display portion is irradiated with a first laser light output from the laser device, wherein the sensor detects the first laser light, wherein information is input to the information terminal with the switch, and wherein the artificial intelligence extracts and holds a movement pattern of the first laser light detected by the sensor. | Provided is an input unit and input method for an information terminal for easy input work and avoiding an operating error. Included are a support, an input unit including a laser device on the support, an information terminal including a sensor in a display portion, and a switch connected with or without a wire to at least one of the laser device and the information terminal. A desired region of the display portion is irradiated with laser light output from the input unit. Information is input to the region by operation of the switch in the state where the region is irradiated with laser light.1. An input unit for an information terminal comprising:
a support; a movable portion provided on the support; a laser device provided on the support with the movable portion provided therebetween; and a switch connected to the laser device, wherein the movable portion for adjusting the output direction of a laser light is provided so that the output direction of the laser light from the laser device coincides with a user's line of sight. 2. The input unit according to claim 1, wherein the support is glasses, a hat, a helmet, or a headgear. 3. The input unit according to claim 1, wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch. 4. The input unit according to claim 1,
wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 5. The input unit according to claim 1,
wherein the support is glasses, a hat, a helmet, or a headgear, and wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch. 6. The input unit according to claim 1,
wherein the support is glasses, a hat, a helmet, or a headgear, wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 7. The input unit according to claim 1,
wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch, wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 8. The input unit according to claim 1,
wherein the support is glasses, a hat, a helmet, or a headgear, wherein the switch is a breath switch, a push-button switch, a pedal switch, or a blink switch, wherein the laser device outputs a first laser light and a second laser light, and wherein the first laser light and the second laser light are switched by the switch. 9. An input method for an information terminal comprising an input unit for performing input to the information terminal,
wherein the input unit includes a laser device and a switch connected to the laser device, wherein the information terminal includes a display portion, wherein the display portion includes a sensor, wherein a region in the display portion is irradiated with a first laser light output from the laser device, wherein the first laser light is switched to a second laser light, wherein the region is irradiated with the second laser light, wherein the sensor included in the region detects the second laser light, and wherein the first laser light is switched to the second laser light with the switch. 10. The input method according to claim 9, wherein the second laser light and the first laser light have different intensities. 11. The input method according to claim 9, wherein the second laser light has a higher intensity than the first laser light. 12. The input method according to claim 9, wherein each of the first laser light and the second laser light is a pulsed laser light. 13. The input method according to claim 12, wherein the second laser light has a shorter pulse period than the first laser light. 14. The input method according to claim 12, wherein the second laser light and the first laser light have different duty ratios. 15. The input method according to claim 9,
wherein the second laser light and the first laser light have different intensities, and wherein the second laser light has a higher intensity than the first laser light. 16. An input support system for an information terminal including an input unit for performing input to the information terminal, and artificial intelligence,
wherein the input unit includes a laser device and a switch connected to the laser device, wherein the information terminal includes a display portion, wherein the display portion includes a sensor, wherein a region in the display portion is irradiated with a first laser light output from the laser device, wherein the sensor detects the first laser light, wherein information is input to the information terminal with the switch, and wherein the artificial intelligence extracts and holds a movement pattern of the first laser light detected by the sensor. | 2,600 |
10,514 | 10,514 | 14,515,527 | 2,646 | Beacons (e.g., mBeacons or meruBeacons) to advertise presence of nearby objects to stations in a wireless communication network from an access point are provided. Location of a station connected to the access point is detected. One or more physical objects having a location proximate to the station are identified and can be indicated to a user. To do so, in an embodiment, responsive to the proximity of locations, a beacon having a BSSID corresponding to each of the one or more physical objects is generated. The BSSID can uniquely identify the one or more physical objects. The beacon is transmitted to the station, which can request additional information concerning the one or more physical objects. For example, an Amazon listing for a nearby retail item can be automatically displayed on a smartphone. | 1. A computer-implemented method in an access point for advertising the presence of nearby physical objects through a wireless portion of a communication network, the method comprising the steps of:
detecting a location of a station connected to the access point, the access point providing connectivity to the communication network for the station; identifying, from a database associated with the access point, one or more physical objects having a location proximate to the station; responsive to the proximity of locations, at the access point, generating a beacon having a BSSID (Basic Service Set Identification) corresponding to each of the one or more physical objects, wherein the BSSID uniquely identifies each of the one or more physical objects; transmitting the beacon to the station; receiving, from the station, at least one of the one or more BSSIDs as unique identification for obtaining information concerning at least one of the one or more physical objects. 2. The method of claim 1, wherein the one or more physical objects have no associated computer hardware for communication with the access point or the station. 3. The method of claim 1, further comprising:
aggregating the one or more BSSIDs into a single beacon or probe response in compliance with at least one of the IEEE 802.11k, IEEE 802.11v and IEEE 802.11r protocols. 4. The method of claim 1, further comprising:
receiving a request for retrievable information for a BSSID corresponding to one of the one or more physical objects; looking-up the retrievable information according to the BSSID; and transmitting the retrievable information to the station. 5. The method of claim 1, wherein the retrievable information about the nearby physical object is displayed to a user of the station. 6. The method of claim 1, wherein the retrievable information is used to query an external data resource about the nearby physical object. 7. The method of claim 1, wherein detecting the location comprises:
receiving an RSSI measurement indicative of a distance between the station and the access point. 8. The method of claim 1, wherein the BSSID includes markers to indicate to the station that the BSSID concerns a nearby physical object. 9. A non-transitory computer readable medium storing source code that, when executed by a computer, performs a method in an access point for advertising the presence of nearby physical objects through a wireless communication network, the method comprising the steps of:
detecting a location of a station connected to the access point, the access point providing connectivity to the communication network for the station; identifying, from a database associated with the access point, one or more physical objects having a location proximate to the station; responsive to the proximity of locations, at the access point, generating a beacon having a BSSID (Basic Service Set Identification) corresponding to each of the one or more physical objects, wherein the BSSID uniquely identifies each of the one or more physical objects; transmitting the beacon to the station; receiving, from the station, at least one of the one or more BSSIDs as unique identification for obtaining information concerning at least one of the one or more physical objects. 10. The computer readable medium of claim 9, wherein in the method, the one or more physical objects have no associated computer hardware for communication with the access point or the station. 11. The computer readable medium of claim 9, the method further comprising:
aggregating the one or more BSSIDs into a single beacon or probe response in compliance with at least one of the IEEE 802.11k, IEEE 802.11v and IEEE 802.11r protocols. 12. The computer readable medium of claim 9, the method further comprising:
receiving a request for retrievable information for a BSSID corresponding to one of the one or more physical objects; looking-up the retrievable information according to the BSSID; and transmitting the retrievable information to the station. 13. The computer readable medium of claim 9, wherein in the method, the retrievable information about the nearby physical object is displayed to a user of the station. 14. The computer readable medium of claim 9, wherein in the method, the retrievable information is used to query an external data resource about the nearby physical object. 15. The computer readable medium of claim 9, wherein in the method, the detecting the location comprises:
receiving an RSSI measurement indicative of a distance between the station and the access point. 16. The computer readable medium of claim 9, wherein in the method, the BSSID includes markers to indicate to the station that the BSSID concerns a nearby physical object. 17. An access point for advertising the presence of nearby physical objects through a wireless communication network, the access point comprising:
a processor; and a memory, comprising:
a first module to detect a location of a station connected to the access point, the access point providing connectivity to the communication network for the station;
a second module to identify, from a database associated with the access point, one or more physical objects having a location proximate to the station;
a third module to, responsive to the proximity of locations, at the access point, generate a beacon having a BSSID (Basic Service Set Identification) corresponding to each of the one or more physical objects, wherein the BSSID uniquely identifies each of the one or more physical objects;
a fourth module to transmit the beacon to the station; and
a fifth module to receive, from the station, at least one of the one or more BSSIDs as unique identifiers for obtaining information concerning the one or more physical objects. | Beacons (e.g., mBeacons or meruBeacons) to advertise presence of nearby objects to stations in a wireless communication network from an access point are provided. Location of a station connected to the access point is detected. One or more physical objects having a location proximate to the station are identified and can be indicated to a user. To do so, in an embodiment, responsive to the proximity of locations, a beacon having a BSSID corresponding to each of the one or more physical objects is generated. The BSSID can uniquely identify the one or more physical objects. The beacon is transmitted to the station, which can request additional information concerning the one or more physical objects. For example, an Amazon listing for a nearby retail item can be automatically displayed on a smartphone.1. A computer-implemented method in an access point for advertising the presence of nearby physical objects through a wireless portion of a communication network, the method comprising the steps of:
detecting a location of a station connected to the access point, the access point providing connectivity to the communication network for the station; identifying, from a database associated with the access point, one or more physical objects having a location proximate to the station; responsive to the proximity of locations, at the access point, generating a beacon having a BSSID (Basic Service Set Identification) corresponding to each of the one or more physical objects, wherein the BSSID uniquely identifies each of the one or more physical objects; transmitting the beacon to the station; receiving, from the station, at least one of the one or more BSSIDs as unique identification for obtaining information concerning at least one of the one or more physical objects. 2. The method of claim 1, wherein the one or more physical objects have no associated computer hardware for communication with the access point or the station. 3. The method of claim 1, further comprising:
aggregating the one or more BSSIDs into a single beacon or probe response in compliance with at least one of the IEEE 802.11k, IEEE 802.11v and IEEE 802.11r protocols. 4. The method of claim 1, further comprising:
receiving a request for retrievable information for a BSSID corresponding to one of the one or more physical objects; looking-up the retrievable information according to the BSSID; and transmitting the retrievable information to the station. 5. The method of claim 1, wherein the retrievable information about the nearby physical object is displayed to a user of the station. 6. The method of claim 1, wherein the retrievable information is used to query an external data resource about the nearby physical object. 7. The method of claim 1, wherein detecting the location comprises:
receiving an RSSI measurement indicative of a distance between the station and the access point. 8. The method of claim 1, wherein the BSSID includes markers to indicate to the station that the BSSID concerns a nearby physical object. 9. A non-transitory computer readable medium storing source code that, when executed by a computer, performs a method in an access point for advertising the presence of nearby physical objects through a wireless communication network, the method comprising the steps of:
detecting a location of a station connected to the access point, the access point providing connectivity to the communication network for the station; identifying, from a database associated with the access point, one or more physical objects having a location proximate to the station; responsive to the proximity of locations, at the access point, generating a beacon having a BSSID (Basic Service Set Identification) corresponding to each of the one or more physical objects, wherein the BSSID uniquely identifies each of the one or more physical objects; transmitting the beacon to the station; receiving, from the station, at least one of the one or more BSSIDs as unique identification for obtaining information concerning at least one of the one or more physical objects. 10. The computer readable medium of claim 9, wherein in the method, the one or more physical objects have no associated computer hardware for communication with the access point or the station. 11. The computer readable medium of claim 9, the method further comprising:
aggregating the one or more BSSIDs into a single beacon or probe response in compliance with at least one of the IEEE 802.11k, IEEE 802.11v and IEEE 802.11r protocols. 12. The computer readable medium of claim 9, the method further comprising:
receiving a request for retrievable information for a BSSID corresponding to one of the one or more physical objects; looking-up the retrievable information according to the BSSID; and transmitting the retrievable information to the station. 13. The computer readable medium of claim 9, wherein in the method, the retrievable information about the nearby physical object is displayed to a user of the station. 14. The computer readable medium of claim 9, wherein in the method, the retrievable information is used to query an external data resource about the nearby physical object. 15. The computer readable medium of claim 9, wherein in the method, the detecting the location comprises:
receiving an RSSI measurement indicative of a distance between the station and the access point. 16. The computer readable medium of claim 9, wherein in the method, the BSSID includes markers to indicate to the station that the BSSID concerns a nearby physical object. 17. An access point for advertising the presence of nearby physical objects through a wireless communication network, the access point comprising:
a processor; and a memory, comprising:
a first module to detect a location of a station connected to the access point, the access point providing connectivity to the communication network for the station;
a second module to identify, from a database associated with the access point, one or more physical objects having a location proximate to the station;
a third module to, responsive to the proximity of locations, at the access point, generate a beacon having a BSSID (Basic Service Set Identification) corresponding to each of the one or more physical objects, wherein the BSSID uniquely identifies each of the one or more physical objects;
a fourth module to transmit the beacon to the station; and
a fifth module to receive, from the station, at least one of the one or more BSSIDs as unique identifiers for obtaining information concerning the one or more physical objects. | 2,600 |
10,515 | 10,515 | 14,990,169 | 2,689 | The present invention generally relates to healthcare call bells. Specifically, this invention relates to a call bell system and method for providing need specific service identifiers and requests based on one or more criteria identified by the system. | 1. A need specific call bell system, said call bell system comprising:
a call bell device having one or more need specific buttons and first communications means; and a remote computing device having a call processing module and a second communication means; wherein said first communications means is configured to send a call request in response to a user's interaction with one or more of said one or more need specific buttons; wherein said second communication means receives said call request and the call processing module processes said call request to create a notification for a particular responder to assist a patient based on the need indicated in the call request by the patient. 2. The call bell system of claim 1, wherein said call processing module comprises physical memory instructions that cause the call processing module to receive said call request from said call bell device, process said call request, generate a notification event based at least in part on said call request, and transmit said notification event to one or more responders. 3. The call bell system of claim 1, wherein said call bell device is an analog device. 4. The call bell system of claim 1, wherein said call bell device is a remote computing device. 5. The call bell system of claim 1, wherein said one or more need specific buttons are comprised of a separate call button for at least pain, hunger, bathroom, emergency and general assistance. 6. The call bell system of claim 1, wherein said call processing module is configured to receive one or more call requests simultaneously. 7. The call bell system of claim 1, wherein said physical memory instructions further cause said call processing module to compile a prioritized list of said one or more call requests. 8. The call bell system of claim 1, wherein one or more responders assist said user with said call request. 9. A method for providing a need specific call bell system, said method comprising:
receiving a need specific call request from a call bell device; comparing the need specific request to the qualifications of available responders; selecting a responder based on the need specific call request and the qualification of responders available; providing a notification on one or more remote computing devices for the responder to assist with the need specific call request. 10. The method of claim 9 further comprising generating a notification event, wherein said notification event is based at least in part on said call request and transmitting said notification event to one or more responders, wherein said notification event is displayed on a notification receiver. 11. The method of claim 9, further comprising the step of compiling a prioritized list of said one or more call requests; wherein said one or more call requests are sorted by urgency. 12. The method of claim 9, further comprising the step of assisting said user; wherein said one or more responders assist said user with said one or more call requests. 13. The method of claim 9, wherein said call request is a selection of a request type from a group of request types comprising pain, hunger, bathroom, emergency and general assistance. 14. The method of claim 9, further comprising the step of assigning said call request to a specific responder of said one or more responders based on one or more qualifications of said specific responder. 15. The method of claim 9, further comprising the step of assigning said call request to a specific responder of said one or more responders based on said request type. | The present invention generally relates to healthcare call bells. Specifically, this invention relates to a call bell system and method for providing need specific service identifiers and requests based on one or more criteria identified by the system.1. A need specific call bell system, said call bell system comprising:
a call bell device having one or more need specific buttons and first communications means; and a remote computing device having a call processing module and a second communication means; wherein said first communications means is configured to send a call request in response to a user's interaction with one or more of said one or more need specific buttons; wherein said second communication means receives said call request and the call processing module processes said call request to create a notification for a particular responder to assist a patient based on the need indicated in the call request by the patient. 2. The call bell system of claim 1, wherein said call processing module comprises physical memory instructions that cause the call processing module to receive said call request from said call bell device, process said call request, generate a notification event based at least in part on said call request, and transmit said notification event to one or more responders. 3. The call bell system of claim 1, wherein said call bell device is an analog device. 4. The call bell system of claim 1, wherein said call bell device is a remote computing device. 5. The call bell system of claim 1, wherein said one or more need specific buttons are comprised of a separate call button for at least pain, hunger, bathroom, emergency and general assistance. 6. The call bell system of claim 1, wherein said call processing module is configured to receive one or more call requests simultaneously. 7. The call bell system of claim 1, wherein said physical memory instructions further cause said call processing module to compile a prioritized list of said one or more call requests. 8. The call bell system of claim 1, wherein one or more responders assist said user with said call request. 9. A method for providing a need specific call bell system, said method comprising:
receiving a need specific call request from a call bell device; comparing the need specific request to the qualifications of available responders; selecting a responder based on the need specific call request and the qualification of responders available; providing a notification on one or more remote computing devices for the responder to assist with the need specific call request. 10. The method of claim 9 further comprising generating a notification event, wherein said notification event is based at least in part on said call request and transmitting said notification event to one or more responders, wherein said notification event is displayed on a notification receiver. 11. The method of claim 9, further comprising the step of compiling a prioritized list of said one or more call requests; wherein said one or more call requests are sorted by urgency. 12. The method of claim 9, further comprising the step of assisting said user; wherein said one or more responders assist said user with said one or more call requests. 13. The method of claim 9, wherein said call request is a selection of a request type from a group of request types comprising pain, hunger, bathroom, emergency and general assistance. 14. The method of claim 9, further comprising the step of assigning said call request to a specific responder of said one or more responders based on one or more qualifications of said specific responder. 15. The method of claim 9, further comprising the step of assigning said call request to a specific responder of said one or more responders based on said request type. | 2,600 |
10,516 | 10,516 | 15,507,586 | 2,625 | A method for operating an electronic device is provided. The method includes checking context information for controlling a display divided into a first display and a second display, and controlling at least one of the first display and the second display based on the context information. | 1. An electronic device comprising:
a display divided into a first display and a second display; and a processor configured to:
check context information indicating a status change of the electronic device, and
control at least one of the first display or the second display based on the context information. 2. The electronic device of claim 1, wherein the first display and the second display are configured as independent displays. 3. The electronic device of claim 1, wherein the first display and the second display are at least partial areas of a single display. 4. The electronic device of claim 1, wherein the processor is further configured to control to cut power of at least one of the first display or the second display based on the context information. 5. The electronic device of claim 1, wherein the processor is further configured to display by limiting a display attribute of at least one of the first display or the second display based on the context information. 6. The electronic device of claim 5, wherein the display attribute comprises at least one of a brightness level, the number of colors, or a screen refresh rate which can be displayed on the first display and the second display. 7. The electronic device of claim 1, wherein the processor is further configured to control to:
cut power of the first display during a preset time, and display by limiting a display attribute of the second display. 8. The electronic device of claim 1, wherein, when no movement occurs during a reference time, the processor is further configured to control to:
cut power of the first display, and display by limiting a display attribute of the second display. 9. The electronic device of claim 1, wherein, when no movement and input occur during a reference time, the processor is further configured to control to:
cut power of the first display, and display by limiting a display attribute of the second display. 10. The electronic device of claim 1, wherein, when detecting that a cover is closed, the processor is further configured to control to:
cut power of the first display, and display by limiting a display attribute of the second display. 11. A method for operating an electronic device, comprising:
checking context information for controlling a display divided into a first display and a second display; and controlling at least one of the first display or the second display based on the context information. 12. The method of claim 11, wherein the first display and the second display are configured as independent displays. 13. The method of claim 11, wherein the first display and the second display are at least partial areas of the display. 14. The method of claim 11, wherein the controlling of the at least one of the first display or the second display based on the context information comprises:
cutting power of at least one of the first display and the second display based on the context information. 15. The method of claim 11, wherein the controlling of the at least one of the first display and the second display based on the context information comprises:
limiting a display attribute of at least one of the first display or the second display based on the context information. 16. The method of claim 15, wherein the display attribute comprises at least one of a brightness level, the number of colors, or a screen refresh rate which can be displayed on the first display and the second display. 17. The method of claim 11, wherein the controlling of the at least one of the first display or the second display based on the context information comprises:
cutting power of the first display and displaying by limiting a display attribute of the second display when a preset time elapses, when no movement occurs during a reference time, when no movement and input occur during a reference time, or when a cover is closed. 18-20. (canceled) 21. The method of claim 11, further comprising:
detecting a preset gesture input for at least one display of the first display or the second display; and when detecting the gesture input, providing an interface for editing information to be displayed on the second display. 22. The method of claim 21, wherein the providing of the interface comprises:
providing at least one of creation, display order change, deletion, upload, or download of the information to be displayed on the second display, through the interface. 23. A non-transitory computer-readable recording medium storing instructions, which when executed by at least one processor cause the at least one processor to control for:
recording a program for checking context information for controlling a display divided into a first display and a second display; and controlling at least one of the first display or the second display based on the context information. | A method for operating an electronic device is provided. The method includes checking context information for controlling a display divided into a first display and a second display, and controlling at least one of the first display and the second display based on the context information.1. An electronic device comprising:
a display divided into a first display and a second display; and a processor configured to:
check context information indicating a status change of the electronic device, and
control at least one of the first display or the second display based on the context information. 2. The electronic device of claim 1, wherein the first display and the second display are configured as independent displays. 3. The electronic device of claim 1, wherein the first display and the second display are at least partial areas of a single display. 4. The electronic device of claim 1, wherein the processor is further configured to control to cut power of at least one of the first display or the second display based on the context information. 5. The electronic device of claim 1, wherein the processor is further configured to display by limiting a display attribute of at least one of the first display or the second display based on the context information. 6. The electronic device of claim 5, wherein the display attribute comprises at least one of a brightness level, the number of colors, or a screen refresh rate which can be displayed on the first display and the second display. 7. The electronic device of claim 1, wherein the processor is further configured to control to:
cut power of the first display during a preset time, and display by limiting a display attribute of the second display. 8. The electronic device of claim 1, wherein, when no movement occurs during a reference time, the processor is further configured to control to:
cut power of the first display, and display by limiting a display attribute of the second display. 9. The electronic device of claim 1, wherein, when no movement and input occur during a reference time, the processor is further configured to control to:
cut power of the first display, and display by limiting a display attribute of the second display. 10. The electronic device of claim 1, wherein, when detecting that a cover is closed, the processor is further configured to control to:
cut power of the first display, and display by limiting a display attribute of the second display. 11. A method for operating an electronic device, comprising:
checking context information for controlling a display divided into a first display and a second display; and controlling at least one of the first display or the second display based on the context information. 12. The method of claim 11, wherein the first display and the second display are configured as independent displays. 13. The method of claim 11, wherein the first display and the second display are at least partial areas of the display. 14. The method of claim 11, wherein the controlling of the at least one of the first display or the second display based on the context information comprises:
cutting power of at least one of the first display and the second display based on the context information. 15. The method of claim 11, wherein the controlling of the at least one of the first display and the second display based on the context information comprises:
limiting a display attribute of at least one of the first display or the second display based on the context information. 16. The method of claim 15, wherein the display attribute comprises at least one of a brightness level, the number of colors, or a screen refresh rate which can be displayed on the first display and the second display. 17. The method of claim 11, wherein the controlling of the at least one of the first display or the second display based on the context information comprises:
cutting power of the first display and displaying by limiting a display attribute of the second display based on if a preset time elapses, if no movement occurs during a reference time, if no movement and input occur during a reference time, or if a cover is closed. 18-20. (canceled) 21. The method of claim 11, further comprising:
detecting a preset gesture input for at least one display of the first display or the second display; and when detecting the gesture input, providing an interface for editing information to be displayed on the second display. 22. The method of claim 21, wherein the providing of the interface comprises:
providing at least one of creation, display order change, deletion, upload, or download of the information to be displayed on the second display, through the interface. 23. A non-transitory computer-readable recording medium storing instructions, which when executed by at least one processor cause the at least one processor to control for:
recording a program for checking context information for controlling a display divided into a first display and a second display; and controlling at least one of the first display or the second display based on the context information. | 2,600 |
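The claimed dual-display control (cutting power to the first display and limiting display attributes of the second display based on context information such as inactivity or a closed cover) can be illustrated with a minimal Python sketch. The class and attribute names (DisplayController, brightness, refresh_rate_hz) and the specific limited values are assumptions for illustration, not taken from the claims.

```python
class Display:
    def __init__(self, name):
        self.name = name
        self.powered = True
        self.brightness = 100      # display attribute: brightness level
        self.refresh_rate_hz = 60  # display attribute: screen refresh rate

class DisplayController:
    """Controls a first and a second display based on context information."""
    def __init__(self):
        self.first = Display("first")
        self.second = Display("second")

    def apply_context(self, cover_closed=False, idle_seconds=0, reference_time=30):
        # Per claims 8-10: on inactivity over a reference time or a closed
        # cover, cut power of the first display and keep the second display
        # running with a limited display attribute.
        if cover_closed or idle_seconds >= reference_time:
            self.first.powered = False
            self.second.brightness = 20     # limited attribute (assumed value)
            self.second.refresh_rate_hz = 10

ctrl = DisplayController()
ctrl.apply_context(cover_closed=True)
print(ctrl.first.powered, ctrl.second.brightness)  # False 20
```

The second display here could correspond to either an independent panel or a partial area of a single display (claims 2-3); the sketch does not distinguish the two.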
10,517 | 10,517 | 14,345,691 | 2,642 | The invention provides a solution, in a reception point of a communication system, of providing a user equipment with a virtual cell ID, where firstly a virtual cell ID corresponding to the reception point is received from a network control device, the virtual cell ID being used for indicating a cluster to which the reception point belongs; then the virtual cell ID is provided to at least one user equipment served by the reception point; and finally resource index information corresponding to each user equipment among the at least one user equipment is transmitted respectively to each user equipment. There is further provided a solution, in a user equipment of a communication system, of determining an orthogonal resource, where firstly a virtual cell ID of the reception point is obtained; then resource index information corresponding to the present user equipment is received from the reception point; next an orthogonal resource corresponding to the present user equipment is determined according to the virtual cell ID and the resource index information; and finally uplink control information is transmitted to the reception point over the determined orthogonal resource. A tradeoff between the number of orthogonal resources and interference intensity can be achieved by applying the invention. | 1. A method, in a network control device of a communication system based on coordinated multi-point transmission, of grouping a plurality of reception points, the method comprising:
grouping the plurality of reception points into several clusters, wherein reception points in each cluster share a virtual cell ID; and transmitting a virtual cell ID corresponding to each reception point among the plurality of reception points respectively to the each reception point. 2. A method, in a reception point of a communication system, of providing a user equipment with a virtual cell ID, the method comprising:
receiving a virtual cell ID corresponding to the reception point from a network control device, the virtual cell ID being used for indicating a cluster to which the reception point belongs; providing the virtual cell ID to at least one user equipment served by the reception point; and transmitting resource index information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment. 3. The method according to claim 2, wherein providing the virtual cell ID comprises: transmitting the virtual cell ID respectively to the at least one user equipment served by the reception point via an RRC signaling. 4. The method according to claim 2, wherein providing the virtual cell ID comprises: transmitting reference signal port information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment, wherein the user equipment determines a virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. 5. A method, in a user equipment of a communication system, of determining an orthogonal resource, the user equipment being served by a reception point, and the method comprising:
obtaining a virtual cell ID of the reception point; receiving resource index information corresponding to the present user equipment from the reception point; determining an orthogonal resource corresponding to the present user equipment according to the virtual cell ID and the resource index information; and transmitting uplink control information to the reception point over the determined orthogonal resource. 6. The method according to claim 5, wherein obtaining the virtual cell ID comprises: receiving from the reception point the virtual cell ID that is transmitted via an RRC signaling. 7. The method according to claim 5, wherein obtaining the virtual cell ID comprises:
receiving reference signal port information corresponding to the present user equipment from the reception point; and determining the virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. 8. An apparatus, in a network control device of a communication system based on coordinated multi-point transmission, for grouping a plurality of reception points, the apparatus comprising:
a grouping device configured to group the plurality of reception points into several clusters, wherein reception points in each cluster share a virtual cell ID; and a first transmitting device configured to transmit a virtual cell ID corresponding to each reception point among the plurality of reception points respectively to the each reception point. 9. An apparatus, in a reception point of a communication system, for providing a user equipment with a virtual cell ID, the apparatus comprising:
a first receiving device configured to receive a virtual cell ID corresponding to the reception point from a network control device, the virtual cell ID being used for indicating a cluster to which the reception point belongs; a providing device configured to provide the virtual cell ID to at least one user equipment served by the reception point; and a second transmitting device configured to transmit resource index information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment. 10. The apparatus according to claim 9, wherein the providing device is further configured to transmit the virtual cell ID respectively to the at least one user equipment served by the reception point via an RRC signaling. 11. The apparatus according to claim 9, wherein the providing device is further configured to transmit reference signal port information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment, wherein the user equipment determines a virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. 12. An apparatus, in a user equipment of a communication system, for determining an orthogonal resource, the user equipment being served by a reception point, and the apparatus comprising:
an obtaining device configured to obtain a virtual cell ID of the reception point; a second receiving device configured to receive resource index information corresponding to the present user equipment from the reception point; a determining device configured to determine an orthogonal resource corresponding to the present user equipment according to the virtual cell ID and the resource index information; and a third transmitting device configured to transmit uplink control information to the reception point over the determined orthogonal resource. 13. The apparatus according to claim 12, wherein the obtaining device is further configured to receive from the reception point the virtual cell ID that is transmitted via an RRC signaling. 14. The apparatus according to claim 12, wherein the obtaining device is further configured:
to receive reference signal port information corresponding to the present user equipment from the reception point; and to determine the virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. | The invention provides a solution, in a reception point of a communication system, of providing a user equipment with a virtual cell ID, where firstly a virtual cell ID corresponding to the reception point is received from a network control device, the virtual cell ID being used for indicating a cluster to which the reception point belongs; then the virtual cell ID is provided to at least one user equipment served by the reception point; and finally resource index information corresponding to each user equipment among the at least one user equipment is transmitted respectively to each user equipment. There is further provided a solution, in a user equipment of a communication system, of determining an orthogonal resource, where firstly a virtual cell ID of the reception point is obtained; then resource index information corresponding to the present user equipment is received from the reception point; next an orthogonal resource corresponding to the present user equipment is determined according to the virtual cell ID and the resource index information; and finally uplink control information is transmitted to the reception point over the determined orthogonal resource. A tradeoff between the number of orthogonal resources and interference intensity can be achieved by applying the invention.1. A method, in a network control device of a communication system based on coordinated multi-point transmission, of grouping a plurality of reception points, the method comprising:
grouping the plurality of reception points into several clusters, wherein reception points in each cluster share a virtual cell ID; and transmitting a virtual cell ID corresponding to each reception point among the plurality of reception points respectively to the each reception point. 2. A method, in a reception point of a communication system, of providing a user equipment with a virtual cell ID, the method comprising:
receiving a virtual cell ID corresponding to the reception point from a network control device, the virtual cell ID being used for indicating a cluster to which the reception point belongs; providing the virtual cell ID to at least one user equipment served by the reception point; and transmitting resource index information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment. 3. The method according to claim 2, wherein providing the virtual cell ID comprises: transmitting the virtual cell ID respectively to the at least one user equipment served by the reception point via an RRC signaling. 4. The method according to claim 2, wherein providing the virtual cell ID comprises: transmitting reference signal port information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment, wherein the user equipment determines a virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. 5. A method, in a user equipment of a communication system, of determining an orthogonal resource, the user equipment being served by a reception point, and the method comprising:
obtaining a virtual cell ID of the reception point; receiving resource index information corresponding to the present user equipment from the reception point; determining an orthogonal resource corresponding to the present user equipment according to the virtual cell ID and the resource index information; and transmitting uplink control information to the reception point over the determined orthogonal resource. 6. The method according to claim 5, wherein obtaining the virtual cell ID comprises: receiving from the reception point the virtual cell ID that is transmitted via an RRC signaling. 7. The method according to claim 5, wherein obtaining the virtual cell ID comprises:
receiving reference signal port information corresponding to the present user equipment from the reception point; and determining the virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. 8. An apparatus, in a network control device of a communication system based on coordinated multi-point transmission, for grouping a plurality of reception points, the apparatus comprising:
a grouping device configured to group the plurality of reception points into several clusters, wherein reception points in each cluster share a virtual cell ID; and a first transmitting device configured to transmit a virtual cell ID corresponding to each reception point among the plurality of reception points respectively to the each reception point. 9. An apparatus, in a reception point of a communication system, for providing a user equipment with a virtual cell ID, the apparatus comprising:
a first receiving device configured to receive a virtual cell ID corresponding to the reception point from a network control device, the virtual cell ID being used for indicating a cluster to which the reception point belongs; a providing device configured to provide the virtual cell ID to at least one user equipment served by the reception point; and a second transmitting device configured to transmit resource index information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment. 10. The apparatus according to claim 9, wherein the providing device is further configured to transmit the virtual cell ID respectively to the at least one user equipment served by the reception point via an RRC signaling. 11. The apparatus according to claim 9, wherein the providing device is further configured to transmit reference signal port information corresponding to each user equipment among the at least one user equipment respectively to the each user equipment, wherein the user equipment determines a virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. 12. An apparatus, in a user equipment of a communication system, for determining an orthogonal resource, the user equipment being served by a reception point, and the apparatus comprising:
an obtaining device configured to obtain a virtual cell ID of the reception point; a second receiving device configured to receive resource index information corresponding to the present user equipment from the reception point; a determining device configured to determine an orthogonal resource corresponding to the present user equipment according to the virtual cell ID and the resource index information; and a third transmitting device configured to transmit uplink control information to the reception point over the determined orthogonal resource. 13. The apparatus according to claim 12, wherein the obtaining device is further configured to receive from the reception point the virtual cell ID that is transmitted via an RRC signaling. 14. The apparatus according to claim 12, wherein the obtaining device is further configured:
to receive reference signal port information corresponding to the present user equipment from the reception point; and to determine the virtual cell ID corresponding to the present user equipment from a port-to-virtual cell ID mapping table according to the reference signal port information. | 2,600 |
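The two sides of this scheme (the network control device grouping reception points into clusters that share a virtual cell ID, and the user equipment combining the virtual cell ID with its resource index to pick an orthogonal uplink resource) can be sketched in a few lines of Python. The function names, the cluster-size grouping rule, and the linear resource-mapping formula are all illustrative assumptions; the claims do not specify a concrete mapping.

```python
def group_reception_points(points, cluster_size):
    """Network control device side: assign one virtual cell ID per cluster,
    so all reception points in a cluster share the same ID (claim 1/8)."""
    virtual_ids = {}
    for i, rp in enumerate(points):
        virtual_ids[rp] = i // cluster_size
    return virtual_ids

def orthogonal_resource(virtual_cell_id, resource_index, resources_per_cluster=8):
    """User equipment side: derive an orthogonal resource from the virtual
    cell ID and the resource index (claim 5). The linear mapping used here
    is an assumption for illustration only."""
    return virtual_cell_id * resources_per_cluster + resource_index

ids = group_reception_points(["RP0", "RP1", "RP2", "RP3"], cluster_size=2)
print(ids["RP2"])                          # 1 (RP2 and RP3 share an ID)
print(orthogonal_resource(ids["RP2"], 3))  # 11
```

Because UEs served by reception points in the same cluster draw from the same resource block, enlarging clusters trades orthogonal-resource count against inter-cluster interference, which is the tradeoff the abstract refers to.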
10,518 | 10,518 | 15,316,947 | 2,644 | Methods and devices for maintaining a device-operated function. A first device initially operates as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to certain operational parameters and gaining knowledge when performing the operation tasks. Meanwhile, a second device is acting as a passive standby device. When the first device cannot or should not operate as the active master anymore, e.g. due to low battery, the responsibility is transferred to the second device by transferring the operational parameters and the knowledge from the first device to the second device. Then, the second device operates as the active master with responsibility to execute the device-operated function according to the operational parameters and using the knowledge if required. | 1. A method performed by a first device for maintaining a device-operated function, the method comprising:
operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, and transferring the responsibility to a second device acting as a passive standby device, by transferring the one or more operational parameters and the knowledge to the second device, thereby enabling the second device to operate as the active master and to automatically perform the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 2. A method according to claim 1, wherein the first device instructs the second device to take over said responsibility when the first device detects at least one of: battery power is low, the first device is malfunctioning, and a pre-set timer has expired. 3. A method according to claim 1, wherein the first device sends the one or more operational parameters and the knowledge to the second device on a regular basis prior to the second device being instructed to take over the responsibility. 4. A method according to claim 1, wherein the first device saves the one or more operational parameters and the knowledge in a passive memory in the first device, to enable the second device to retrieve the one or more operational parameters and the knowledge from the passive memory in case the first device stops working. 5. A method according to claim 1, wherein the first device and the second device are comprised in a system of devices and the first device broadcasts a message to the system of devices, the message indicating that the second device operates as the active master. 6. A first device operable for maintaining a device-operated function, wherein the first device comprises means configured to:
operate as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, and transfer the responsibility to a second device acting as a passive standby device, by transferring the one or more operational parameters and the knowledge to the second device, thereby enabling the second device to operate as the active master and to automatically perform the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 7. A first device according to claim 6, wherein the first device comprises means configured to instruct the second device to take over said responsibility when the first device detects at least one of: battery power is low, the first device is malfunctioning, a pre-set timer has expired. 8. A first device according to claim 6, wherein the first device comprises means configured to send the one or more operational parameters and the knowledge to the second device on a regular basis prior to the second device being instructed to take over the responsibility. 9. A first device according to claim 6, wherein the first device comprises means configured to save the one or more operational parameters and the knowledge in a passive memory in the first device, to enable the second device to retrieve the one or more operational parameters and the knowledge from the passive memory in case the first device stops working. 10. A first device according to claim 6, wherein the first device is operable to be comprised in a system of devices also comprising the second device, and wherein the first device comprises means configured to broadcast a message to the system of devices, the message indicating that the second device operates as the active master. 11. 
A method performed by a second device for maintaining a device-operated function, the method comprising:
acting as a passive standby device when a first device is operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, deciding to take over the responsibility from the first device, obtaining the one or more operational parameters and the knowledge from the first device, and operating as the active master with responsibility to execute the device-operated function by automatically performing the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 12. A method according to claim 11, wherein the second device decides to take over the responsibility when receiving an instruction from the first device or from a server when the server detects at least one of: the first device is not reporting as expected, and a pre-set timer has expired. 13. A method according to claim 11, wherein the first device and the second device are comprised in a system of devices and the second device broadcasts a message to the system of devices, the message indicating that the second device operates as an active master. 14. A method according to claim 12, wherein the second device wakes up from a sleep mode at regular intervals to enable reception of the instruction to take over said responsibility. 15. A method according to claim 11, wherein the second device wakes up from a sleep mode at regular intervals and sends a poll to the first device to determine whether the first device continues to operate as an active master, and if the first device responds that it continues to operate as an active master, the second device returns to sleep mode again. 16. 
A method according to claim 15, wherein the second device decides to take over said responsibility when detecting that the first device does not respond to the poll which indicates that the first device is not operating. 17. A method according to claim 11, wherein the second device retrieves the one or more operational parameters and the knowledge from a passive memory in the first device. 18. A second device operable for maintaining a device-operated function, wherein the second device comprises means configured to:
act as a passive standby device when a first device is operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, decide to take over the responsibility from the first device, obtain the one or more operational parameters and the knowledge from the first device, and operate as the active master with responsibility to execute the device-operated function by automatically performing the one or more operation tasks according to the one or more operational parameters and by using the knowledge if required. 19. A second device according to claim 18, wherein the second device comprises means configured to decide to take over the responsibility when receiving an instruction from the first device or from a server when the server detects at least one of: the first device is not reporting as expected, a pre-set timer has expired. 20. A second device according to claim 18, wherein the second device is operable to be comprised in a system of devices also comprising the first device, and wherein the second device comprises means configured to broadcast a message to the system of devices, the message indicating that the second device operates as an active master. 21. A second device according to claim 19, wherein the second device comprises means configured to wake up from a sleep mode at regular intervals to enable reception of the instruction to take over said responsibility. 22. 
A second device according to claim 18, wherein the second device comprises means configured to wake up from a sleep mode at regular intervals and to send a poll to the first device to determine whether the first device continues to operate as an active master, and if the first device responds that it continues to operate as an active master, the second device comprises means configured to return to sleep mode again. 23. A second device according to claim 22, wherein the second device comprises means configured to decide to take over said responsibility when detecting that the first device does not respond to the poll which indicates that the first device is not operating. 24. A second device according to claim 18, wherein the second device comprises means configured to retrieve the one or more operational parameters and the knowledge from a passive memory in the first device. 25. A method in a system of devices comprising a first device and a second device, for maintaining a device-operated function, the method comprising:
the first device operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, the second device acting as a passive standby device, transferring the responsibility from the first device to the second device by transferring the one or more operational parameters and the knowledge from the first device to the second device, and the second device operating as the active master with responsibility to execute the device-operated function by automatically performing the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 26. A computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to claim 1. 27. A carrier containing the computer program of claim 26, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium. | Methods and devices for maintaining a device-operated function. A first device initially operates as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to certain operational parameters and gaining knowledge when performing the operation tasks. Meanwhile, a second device is acting as a passive standby device. When the first device cannot or should not operate as the active master anymore, e.g. due to low battery, the responsibility is transferred to the second device by transferring the operational parameters and the knowledge from the first device to the second device. 
Then, the second device operates as the active master with responsibility to execute the device-operated function according to the operational parameters and using the knowledge if required.1. A method performed by a first device for maintaining a device-operated function, the method comprising:
operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, and transferring the responsibility to a second device acting as a passive standby device, by transferring the one or more operational parameters and the knowledge to the second device, thereby enabling the second device to operate as the active master and to automatically perform the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 2. A method according to claim 1, wherein the first device instructs the second device to take over said responsibility when the first device detects at least one of: battery power is low, the first device is malfunctioning, and a pre-set timer has expired. 3. A method according to claim 1, wherein the first device sends the one or more operational parameters and the knowledge to the second device on a regular basis prior to the second device being instructed to take over the responsibility. 4. A method according to claim 1, wherein the first device saves the one or more operational parameters and the knowledge in a passive memory in the first device, to enable the second device to retrieve the one or more operational parameters and the knowledge from the passive memory in case the first device stops working. 5. A method according to claim 1, wherein the first device and the second device are comprised in a system of devices and the first device broadcasts a message to the system of devices, the message indicating that the second device operates as the active master. 6. A first device operable for maintaining a device-operated function, wherein the first device comprises means configured to:
operate as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, and transfer the responsibility to a second device acting as a passive standby device, by transferring the one or more operational parameters and the knowledge to the second device, thereby enabling the second device to operate as the active master and to automatically perform the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 7. A first device according to claim 6, wherein the first device comprises means configured to instruct the second device to take over said responsibility when the first device detects at least one of: battery power is low, the first device is malfunctioning, a pre-set timer has expired. 8. A first device according to claim 6, wherein the first device comprises means configured to send the one or more operational parameters and the knowledge to the second device on a regular basis prior to the second device being instructed to take over the responsibility. 9. A first device according to claim 6, wherein the first device comprises means configured to save the one or more operational parameters and the knowledge in a passive memory in the first device, to enable the second device to retrieve the one or more operational parameters and the knowledge from the passive memory in case the first device stops to work. 10. A first device according to claim 6, wherein the first device is operable to be comprised in a system of devices also comprising the second device, and wherein the first device comprises means configured to broadcast a message to the system of devices, the message indicating that the second device operates as the active master. 11. 
A method performed by a second device for maintaining a device-operated function, the method comprising:
acting as a passive standby device when a first device is operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, deciding to take over the responsibility from the first device, obtaining the one or more operational parameters and the knowledge from the first device, and operating as the active master with responsibility to execute the device-operated function by automatically performing the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 12. A method according to claim 11, wherein the second device decides to take over the responsibility when receiving an instruction from the first device or from a server when the server detects at least one of: the first device is not reporting as expected, and a pre-set timer has expired. 13. A method according to claim 11, wherein the first device and the second device are comprised in a system of devices and the second device broadcasts a message to the system of devices, the message indicating that the second device operates as an active master. 14. A method according to claim 12, wherein the second device wakes up from a sleep mode at regular intervals to enable reception of the instruction to take over said responsibility. 15. A method according to claim 11, wherein the second device wakes up from a sleep mode at regular intervals and sends a poll to the first device to determine whether the first device continues to operate as an active master, and if the first device responds that it continues to operate as an active master, the second device returns to sleep mode again. 16. 
A method according to claim 15, wherein the second device decides to take over said responsibility when detecting that the first device does not respond to the poll which indicates that the first device is not operating. 17. A method according to claim 11, wherein the second device retrieves the one or more operational parameters and the knowledge from a passive memory in the first device. 18. A second device operable for maintaining a device-operated function, wherein the second device comprises means configured to:
act as a passive standby device when a first device is operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, decide to take over the responsibility from the first device, obtain the one or more operational parameters and the knowledge from the first device, and operate as the active master with responsibility to execute the device-operated function by automatically performing the one or more operation tasks according to the one or more operational parameters and by using the knowledge if required. 19. A second device according to claim 18, wherein the second device comprises means configured to decide to take over the responsibility when receiving an instruction from the first device or from a server when the server detects at least one of: the first device is not reporting as expected, a pre-set timer has expired. 20. A second device according to claim 18, wherein the second device is operable to be comprised in a system of devices also comprising the first device, and wherein the second device comprises means configured to broadcast a message to the system of devices, the message indicating that the second device operates as an active master. 21. A second device according to claim 19, wherein the second device comprises means configured to wake up from a sleep mode at regular intervals to enable reception of the instruction to take over said responsibility. 22. 
A second device according to claim 18, wherein the second device comprises means configured to wake up from a sleep mode at regular intervals and to send a poll to the first device to determine whether the first device continues to operate as an active master, and if the first device responds that it continues to operate as an active master, the second device comprises means configured to return to sleep mode again. 23. A second device according to claim 22, wherein the second device comprises means configured to decide to take over said responsibility when detecting that the first device does not respond to the poll which indicates that the first device is not operating. 24. A second device according to claim 18, wherein the second device comprises means configured to retrieve the one or more operational parameters and the knowledge from a passive memory in the first device. 25. A method in a system of devices comprising a first device and a second device, for maintaining a device-operated function, the method comprising:
the first device operating as an active master with responsibility to execute the device-operated function by automatically performing one or more operation tasks according to one or more operational parameters configured in the first device and gaining knowledge when performing the one or more operation tasks, the second device acting as a passive standby device, transferring the responsibility from the first device to the second device by transferring the one or more operational parameters and the knowledge from the first device to the second device, and the second device operating as the active master with responsibility to execute the device-operated function by automatically performing the one or more operation tasks according to the one or more operational parameters and using the knowledge if required. 26. A computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to claim 1. 27. A carrier containing the computer program of claim 26, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium. | 2,600 |
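The master/standby handover recited in the claims above (operational parameters plus accumulated knowledge transferred from the active master to the standby, which then takes over the role) can be sketched as follows. This is a minimal illustration only; the `Device` class and its attribute names are assumptions, not terminology from the application:

```python
class Device:
    """Sketch of the active-master / passive-standby handover scheme.

    All names here are illustrative; the application does not prescribe an API.
    """

    def __init__(self, name, params=None):
        self.name = name
        self.active = False
        self.params = params or {}   # operational parameters configured in the device
        self.knowledge = {}          # knowledge gained while performing operation tasks

    def operate(self):
        # The active master performs operation tasks and accumulates knowledge.
        self.active = True
        self.knowledge["runs"] = self.knowledge.get("runs", 0) + 1

    def transfer_responsibility(self, standby):
        # Transfer the parameters and the knowledge, then hand over the
        # active-master role to the standby device.
        standby.params = dict(self.params)
        standby.knowledge = dict(self.knowledge)
        self.active = False
        standby.active = True


first = Device("first", params={"interval_s": 60})
second = Device("second")          # passive standby
first.operate()                    # first acts as active master
first.transfer_responsibility(second)  # second becomes the active master
```

After the transfer, `second` holds both the operational parameters and the knowledge, so it can continue the device-operated function without reconfiguration, as the independent claims require.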
10,519 | 10,519 | 16,029,681 | 2,674 | An image forming apparatus has a plurality of functions and permits setting an operating condition for each of the functions. The image forming apparatus includes: an operating portion used for operation of selecting each of the functions and setting the operating condition; an image processor that performs an image processing in a manner that depends on a function selected and an operating condition set at the operating portion; and a controller that controls the operating portion and the image processor. If one of the functions is selected at the operating portion and a setting is determined on a predetermined operating condition that is settable within the selected function, the controller prohibits the selected function from being changed to another function. | 1. An image forming apparatus comprising:
a user interface used to specify a transmission address or a saving address of image data as a destination; and a controller controlling the user interface, wherein the controller disables specification of the saving address by the user interface when the transmission address is specified as the destination. 2. The image forming apparatus according to claim 1, wherein
the controller enables specification of the transmission address to be canceled by the user interface after the transmission address is specified as the destination, and then enables the specification of the saving address in a case where the transmission address is canceled by the user interface. 3. The image forming apparatus according to claim 1, wherein, after the transmission address is specified as the destination,
the controller enables the transmission address to be selectively changed, added, or deleted while disabling the specification of the saving address by the user interface. 4. The image forming apparatus according to claim 1, wherein
the controller allows the user interface to display a list of a plurality of the saving addresses in a state with one of the plurality of the saving addresses selectable when the controller enables the specification of the saving address by the user interface. 5. The image forming apparatus according to claim 2, wherein
the controller allows the user interface to display a list of a plurality of the saving addresses in a state with one of the plurality of the saving addresses selectable when the controller enables the specification of the saving address by the user interface. 6. The image forming apparatus according to claim 1, wherein, when the destination is specified,
the controller enables an operating condition associated with the destination to be set and disables an operating condition not associated with the destination to be set. | 2,600 |
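The mutual-exclusion rule of claims 1 and 2 (specifying a transmission address disables the saving address; cancelling it re-enables the saving address) can be sketched as a small state holder. Class and method names are illustrative assumptions:

```python
class DestinationSelector:
    """Sketch of the destination logic in claims 1-2 (hypothetical names)."""

    def __init__(self):
        self.transmission_address = None
        self.saving_enabled = True

    def set_transmission_address(self, addr):
        # Claim 1: specifying a transmission address as the destination
        # disables specification of a saving address.
        self.transmission_address = addr
        self.saving_enabled = False

    def cancel_transmission_address(self):
        # Claim 2: cancelling the transmission address re-enables
        # specification of the saving address.
        self.transmission_address = None
        self.saving_enabled = True


sel = DestinationSelector()
sel.set_transmission_address("user@example.com")  # saving now disabled
sel.cancel_transmission_address()                 # saving enabled again
```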
10,520 | 10,520 | 15,672,972 | 2,689 | A computer device associated with a vehicle pairs during an initial/setup pairing process with a UE device according to a wireless protocol. The devices exchange and store cryptographic information during pairing. Later, either device may detect/discover that the other is currently in its presence. Presence discovery may trigger the vehicle device to generate and broadcast an operational request message based on the cryptographic information, such as a public key of the UE, stored in the vehicle device. The UE receives the request, and transmits in response a vehicle operation permission message that it generates based on user input and cryptographic information, such as a public key of the vehicle device, stored by the UE during initial/setup pairing. The operational request message may be transmitted as an audio signal and received by a microphone of the UE. The vehicle device receives the permission message and generates an operation instruction based thereon. | 1. A method, comprising:
receiving a vehicle operational request message signal generated by a computer device of a vehicle; processing the vehicle operational request message signal; determining from the processing of the vehicle operational request message signal that the vehicle operational request message signal includes a request to operate the vehicle; extracting cryptographic input information from the vehicle operational request message signal; generating a vehicle operation permission message based on the cryptographic input information; and transmitting the vehicle operation permission message as a vehicle operation permission signal. 2. The method of claim 1 wherein the computer device of the vehicle generates the vehicle operational request message signal based on detection of a trigger event. 3. The method of claim 2 wherein the trigger event is a proximity signal generated by a user device. 4. The method of claim 1 wherein the steps are performed by a wireless user device. 5. The method of claim 4 wherein the wireless user device is a smart phone. 6. The method of claim 1 wherein the steps are performed by a wireless user device, wherein the vehicle operational request signal is an audio signal, and wherein the audio signal is received from the vehicle via a microphone of the user device. 7. The method of claim 6 wherein the audio signal includes tone components within an ultrasonic range. 8. The method of claim 1 wherein the cryptographic input information is based at least in part on secret information shared between a user device and the computer device of the vehicle during a wireless protocol pairing operation that occurred before the steps of claim 1 are performed. 9. The method of claim 1 wherein the wireless protocol is Bluetooth Low Energy. 10. The method of claim 1 wherein the vehicle operational request message signal includes a quick response code image. 11. 
The method of claim 1 wherein the cryptographic information includes cryptographic information associated with one or more vehicle operational devices. 12. The method of claim 2 wherein the trigger event is the manipulation of a vehicle operational component. 13. The method of claim 1 further comprising receiving a signal that includes an activate microphone message that includes instructions to activate a microphone for receiving the vehicle operational signal. 14. A method, comprising:
generating a vehicle operational request message with a computer device of a vehicle in response to a trigger event; transmitting the vehicle operational request message as a vehicle operational request message signal; receiving a vehicle operation permission message that was generated based at least in part on vehicle cryptographic information shared during a pairing between the computer device of the vehicle with a user equipment device and a one-time value transmitted in the vehicle operational request message, determining that the received vehicle operation permission message was generated based at least in part on vehicle cryptographic information shared during the pairing of the vehicle computer device with the user equipment device; and generating an instruction to perform a vehicle operation included in the received vehicle operation permission message. 15. The method of claim 14 wherein the vehicle operational request message signal is at least partially transmitted as an audio signal, and wherein the audio signal is transmitted from the vehicle via a speaker. 16. The method of claim 15 wherein the audio signal includes tone components at least partially within an ultrasonic range. 17. The method of claim 14 wherein the vehicle operational request message is at least partially displayed as a quick response code image. 18. The method of claim 14 wherein the vehicle operational request message signal is transmitted at least partially as an infrared signal. 19. A method, comprising:
receiving a vehicle operation permission message directly from a user equipment device, determining that the received vehicle operation permission message was generated based at least in part on vehicle cryptographic information shared during a pairing of a vehicle computer device with the user equipment device; determining that the received vehicle operation permission message is the same as a previously received vehicle operation permission message; determining that the received vehicle operation permission message that is the same as the previously received vehicle operation permission message was received within an absence period; and generating an instruction to perform a vehicle operation included in the received vehicle operation permission message. 20. The method of claim 19 wherein the absence period is configurable by a user interface of a user equipment device, which was paired with the vehicle computer device before the previously received vehicle operation permission message was received. 21. The method of claim 19 wherein the vehicle cryptographic information includes vehicle cryptographic information transmitted from one or more vehicle operational devices during the pairing of the vehicle computer device with the user equipment device. 22. The method of claim 21 wherein one of the one or more vehicle operational devices is an electronic key fob. | 2,600 |
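The request/permission exchange in these claims (a one-time value in the vehicle's request, a permission message generated from cryptographic information shared at pairing, then verification by the vehicle) can be sketched as below. The claims speak of public keys exchanged at pairing; to keep this sketch self-contained it substitutes an HMAC over a pairing-derived shared secret, which is an assumption, not the application's method:

```python
import hashlib
import hmac
import os

# Secret assumed to be shared during the earlier pairing step (illustrative
# stand-in for the pairing-derived cryptographic information in the claims).
pairing_secret = os.urandom(32)


def make_request():
    # Vehicle broadcasts an operational request containing a one-time value
    # (the nonce), as in claim 14.
    return {"op": "unlock", "nonce": os.urandom(16)}


def make_permission(request, secret):
    # User equipment authorizes the request by binding the one-time value
    # to the requested operation.
    msg = request["op"].encode() + request["nonce"]
    return {"op": request["op"],
            "tag": hmac.new(secret, msg, hashlib.sha256).digest()}


def verify_permission(request, permission, secret):
    # Vehicle checks that the permission was generated from the shared
    # cryptographic information and this request's one-time value.
    msg = request["op"].encode() + request["nonce"]
    expected = hmac.new(secret, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, permission["tag"])


req = make_request()
perm = make_permission(req, pairing_secret)
ok = verify_permission(req, perm, pairing_secret)
```

Because the nonce is fresh per request, a permission message replayed against a later request fails verification, which complements the replay/absence-period check of claims 19-20.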
10,521 | 10,521 | 15,611,272 | 2,621 | The invention relates to a method for operating an interactive visibility screen on a transparent pane of a pane device, in particular in a motor vehicle. The visibility screen is generated by means of a display unit of the pane device on the pane by pixel-wise fade-in of opaque image points, wherein the image points form a coherent visibility screen area. In the method, an operational action by the user is first detected by means of a detection device, which comprises a selection of a setting range and a movement relative to the pane. Subsequently, an expansion of the visibility screen area at the setting range is set as a function of the detected movement by means of a control device. | 1-10. (canceled) 11. A method for operating an interactive visibility screen on a transparent pane of a pane device in a motor vehicle, wherein the interactive visibility screen is generated on the pane by means of a display unit of the pane device by pixel-wise fade-in of opaque image points, wherein the opaque image points form a coherent visibility screen area, the method comprising:
detecting an operational action by a user by means of a detection device, wherein the operational action comprises a selection of a setting range and a movement relative to the pane; and setting an expansion of the coherent visibility screen area at the setting range as a function of the movement relative to the pane by means of a control device. 12. The method according to claim 11, wherein during the setting of the expansion of the coherent visibility screen area at the setting range at least one predetermined area of the coherent visibility screen area is opened as a function of the movement, and thereby the opaque image points are faded out in the at least one predetermined area. 13. The method according to claim 12, wherein the at least one predetermined area of the coherent visibility screen area is automatically closed after a predetermined period of time after detecting the operational action, or remains open based on the detection device detecting a confirmation operation by the user. 14. The method according to claim 12, further comprising:
detecting a viewing direction of the user; and moving the at least one predetermined area of the coherent visibility screen area within the coherent visibility screen area as a function of the viewing direction of the user. 15. The method according to claim 11, wherein:
the expansion of the coherent visibility screen area is set only after identification of the user, and the operational action by the user is performed on an area of the pane accessible from outside the motor vehicle. 16. The method according to claim 11, wherein:
at least one first operating gesture is detected as the movement, and at least two fingers of the user are moved relative to one another from a starting position in the first operating gesture. 17. The method according to claim 11, wherein:
at least one second operating gesture is detected as the movement, and the user performs a wiping motion or a painting motion on the pane or free in the air with one hand or at least one finger in the second operating gesture. 18. A pane device for a motor vehicle, the pane device comprising:
a transparent pane; a display unit configured to generate a visibility screen on the transparent pane by pixel-wise fade-in of opaque image points, wherein the image points form a coherent visibility screen area; a detection device configured to detect an operational action of a user, wherein the operational action comprises a selection of a setting range and a movement relative to the pane; and a control device configured to set an expansion of the coherent visibility screen area at the setting range as a function of the detected movement. 19. A motor vehicle with a pane device, the motor vehicle comprising:
a transparent pane; a display unit configured to generate a visibility screen on the transparent pane by pixel-wise fade-in of opaque image points, wherein the image points form a coherent visibility screen area; a detection device configured to detect an operational action of a user, wherein the operational action comprises a selection of a setting range and a movement relative to the pane; and a control device configured to set an expansion of the coherent visibility screen area at the setting range as a function of the detected movement. 20. The motor vehicle of claim 19, further comprising:
an additional detection device directed toward a surrounding of the motor vehicle, wherein the control device is configured to assign a point in the surroundings of the motor vehicle fixed by the user's eyes through at least one predetermined area opened in the coherent visibility screen area to an image point detected by the additional detection device, and wherein the control device is configured to shift the at least one predetermined area within the coherent visibility screen area such that the point fixed by the user's eyes continues to be fixed by the user's eyes through the at least one predetermined area during the movement of the motor vehicle. | The invention relates to a method for operating an interactive visibility screen on a transparent pane of a pane device, in particular in a motor vehicle. The visibility screen is generated by means of a display unit of the pane device on the pane by pixel-wise fade-in of opaque image points, wherein the image points form a coherent visibility screen area. In the method, an operational action by the user is first detected by means of a detection device, which comprises a selection of a setting range and a movement relative to the pane. Subsequently, an expansion of the visibility screen area at the setting range is set as a function of the detected movement by means of a control device.1-10. (canceled) 11. A method for operating an interactive visibility screen on a transparent pane of a pane device in a motor vehicle, wherein the interactive visibility screen is generated on the pane by means of a display unit of the pane device by pixel-wise fade-in of opaque image points, wherein the opaque image points form a coherent visibility screen area, the method comprising:
detecting an operational action by a user by means of a detection device, wherein the operational action comprises a selection of a setting range and a movement relative to the pane; and setting an expansion of the coherent visibility screen area at the setting range as a function of the movement relative to the pane by means of a control device. 12. The method according to claim 11, wherein during the setting of the expansion of the coherent visibility screen area at the setting range at least one predetermined area of the coherent visibility screen area is opened as a function of the movement, and thereby the opaque image points are faded out in the at least one predetermined area. 13. The method according to claim 12, wherein the at least one predetermined area of the coherent visibility screen area is automatically closed after a predetermined period of time after detecting the operational action, or remains open based on the detection device detecting a confirmation operation by the user. 14. The method according to claim 12, further comprising:
detecting a viewing direction of the user; and moving the at least one predetermined area of the coherent visibility screen area within the coherent visibility screen area as a function of the viewing direction of the user. 15. The method according to claim 11, wherein:
the expansion of the coherent visibility screen area is set only after identification of the user, and the operational action by the user is on an area of the pane accessible from outside the motor vehicle. 16. The method according to claim 11, wherein:
at least one first operating gesture is detected as the movement, and at least two fingers of the user are moved relative to one another from a starting position in the first operating gesture. 17. The method according to claim 11, wherein:
at least one second operating gesture is detected as the movement, and the user performs a wiping motion or a painting motion on the pane or freely in the air with one hand or at least one finger in the second operating gesture. 18. A pane device for a motor vehicle, the pane device comprising:
a transparent pane; a display unit configured to generate a visibility screen on the transparent pane by pixel-wise fade-in of opaque image points, wherein the image points form a coherent visibility screen area; a detection device configured to detect an operational action of a user, wherein the operational action comprises a selection of a setting range and a movement relative to the pane; and a control device configured to set an expansion of the coherent visibility screen area at the setting range as a function of the detected movement. 19. A motor vehicle with a pane device, the motor vehicle comprising:
a transparent pane; a display unit configured to generate a visibility screen on the transparent pane by pixel-wise fade-in of opaque image points, wherein the image points form a coherent visibility screen area; a detection device configured to detect an operational action of a user, wherein the operational action comprises a selection of a setting range and a movement relative to the pane; and a control device configured to set an expansion of the coherent visibility screen area at the setting range as a function of the detected movement. 20. The motor vehicle of claim 19, further comprising:
an additional detection device directed toward a surrounding of the motor vehicle, wherein the control device is configured to assign a point in the surroundings of the motor vehicle fixed by the user's eyes through at least one predetermined area opened in the coherent visibility screen area to an image point detected by the additional detection device, and wherein the control device is configured to shift the at least one predetermined area within the coherent visibility screen area such that the point fixed by the user's eyes continues to be fixed by the user's eyes through the at least one predetermined area during the movement of the motor vehicle. | 2,600 |
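The control-device behavior recited in claims 11 and 16 (a two-finger "pinch" gesture setting the expansion of the visibility-screen area) can be sketched as follows. This is an illustrative assumption only: the patent specifies no implementation, and the function name, the normalized-size representation, and the proportional scaling rule are all hypothetical.

```python
def set_expansion(area_size: float, start_gap: float, end_gap: float,
                  min_size: float = 0.0, max_size: float = 1.0) -> float:
    """Scale the opaque visibility-screen area in proportion to how far the
    user's two fingers moved apart (or together) from their starting gap,
    clamped to the displayable range. Sizes are normalized to [0, 1]."""
    if start_gap <= 0:
        raise ValueError("start_gap must be positive")
    scaled = area_size * (end_gap / start_gap)  # spread fingers -> grow area
    return max(min_size, min(max_size, scaled))
```

For example, doubling the finger gap doubles the screened area until it saturates at the full pane; pinching inward shrinks it toward fully transparent.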
10,522 | 10,522 | 15,158,333 | 2,654 | Embodiments disclosed herein relate to systems and methods for performing fitting of an auditory prosthesis using tactile responses. A tactile feedback device determines a physical manipulation in response to a test stimulus. A type of adjustment can be determined based upon the type of the physical manipulation and the type of the test signal. A scaling of the adjustment can be determined based on the degree of the physical manipulation. | 1. A method comprising:
identifying a test signal; in response to identifying the test signal, determining a physical manipulation of the device; and determining an adjustment to at least one parameter of an auditory prosthesis based upon the physical manipulation. 2. The method of claim 1, wherein the test signal is an audible tone. 3. The method of claim 2, wherein the audible tone is generated during a fitting process for an auditory prosthesis. 4. The method of claim 1, wherein determining the adjustment further comprises:
determining a type of physical manipulation, wherein a type of the adjustment is based at least upon the type of physical manipulation; and determining a degree of the physical manipulation, wherein the adjustment is scaled based upon the degree of physical manipulation. 5. The method of claim 4, wherein the physical manipulation comprises a physical displacement. 6. The method of claim 5, wherein the physical displacement comprises one of:
tilting the device; shaking the device; rotating the device; and moving the device relative to an external object. 7. The method of claim 6, wherein when the physical displacement is a forward tilt, the type of adjustment comprises an increase in loudness and wherein the size of the increase is based upon the degree of the forward tilt relative to an initial position of the device. 8. The method of claim 6, wherein when the physical displacement is a backward tilt, the type of adjustment comprises a decrease in loudness. 9. A device comprising:
at least one processor; and memory encoding computer executable instructions that, when executed by the at least one processor, perform a method comprising: identifying a test signal; in response to identifying the test signal, determining a physical manipulation of the device; determining an adjustment to at least one parameter of an auditory prosthesis based upon the physical manipulation; and sending the adjustment to a fitting application.
an accelerometer; a gyroscope; a magnetometer; a pressure sensor; a microphone; and a camera. 12. The device of claim 9, wherein the test signal is an audible tone. 13. The device of claim 12, wherein the audible tone is generated during a fitting process for an auditory prosthesis. 14. The device of claim 9, wherein determining the adjustment further comprises:
determining a type of physical manipulation, wherein a type of the adjustment is based at least upon the type of physical manipulation; and determining a degree of the physical manipulation, wherein the adjustment is scaled based upon the degree of physical manipulation. 15. The device of claim 14, wherein the physical manipulation comprises a physical displacement. 16. The device of claim 15, wherein the physical displacement comprises one of:
tilting the device; shaking the device; rotating the device; and moving the device relative to an external object. 17. The device of claim 16, wherein when the physical displacement is a forward tilt, the type of adjustment comprises an increase in loudness and wherein a magnitude of the increase is based upon the degree of the forward tilt relative to an initial position of the device. 18. The device of claim 16, wherein when the physical displacement is a backward tilt, the type of adjustment comprises a decrease in loudness. 19. The device of claim 14, wherein the physical manipulation comprises a tactile response. 20. The device of claim 19, wherein when the tactile response is a press, the adjustment comprises an increase in loudness and wherein a magnitude of the increase is based upon a strength of the press. 21. A method comprising:
generating a test signal; in response to generating the test signal, receiving data defining a physical manipulation of a remote device; determining an adjustment to at least one parameter of an auditory prosthesis, wherein determining the adjustment comprises:
determining a type of the physical manipulation of the remote device;
determining a type for the adjustment, wherein the type for the adjustment is based at least upon the type of physical manipulation;
determining a degree of the physical manipulation; and
determining a scale for the adjustment, wherein the scale is based upon the degree of physical manipulation; and
applying the adjustment to the at least one parameter. 22. The method of claim 21, wherein the test signal is an audible tone. 23. The method of claim 22, wherein the audible tone is generated during a fitting process for the auditory prosthesis. 24. A method comprising:
placing a device in a locked state; detecting a physical manipulation of the device; and adjusting at least one parameter of an auditory prosthesis based on the physical manipulation. 25. The method of claim 24, wherein the physical manipulation comprises at least one of a physical displacement and a tactile response. 26. The method of claim 25, wherein the device comprises at least one detection component, wherein the at least one detection component comprises:
an accelerometer; a gyroscope; a magnetometer; a pressure sensor; a microphone; and a camera. 27. The method of claim 26, wherein the physical manipulation is detected using the at least one detection component. 29. The method of claim 24, wherein adjusting the at least one parameter comprises sending an instruction to perform the adjustment to the auditory prosthesis. 30. The method of claim 29, wherein the instruction to perform the adjustment is sent without removing the device from the locked state.
identifying a test signal; in response to identifying the test signal, determining a physical manipulation of the device; and determining an adjustment to at least one parameter of an auditory prosthesis based upon the physical manipulation. 2. The method of claim 1, wherein the test signal is an audible tone. 3. The method of claim 2, wherein the audible tone is generated during a fitting process for an auditory prosthesis. 4. The method of claim 1, wherein determining the adjustment further comprises:
determining a type of physical manipulation, wherein a type of the adjustment is based at least upon the type of physical manipulation; and determining a degree of the physical manipulation, wherein the adjustment is scaled based upon the degree of physical manipulation. 5. The method of claim 4, wherein the physical manipulation comprises a physical displacement. 6. The method of claim 5, wherein the physical displacement comprises one of:
tilting the device; shaking the device; rotating the device; and moving the device relative to an external object. 7. The method of claim 6, wherein when the physical displacement is a forward tilt, the type of adjustment comprises an increase in loudness and wherein the size of the increase is based upon the degree of the forward tilt relative to an initial position of the device. 8. The method of claim 6, wherein when the physical displacement is a backward tilt, the type of adjustment comprises a decrease in loudness. 9. A device comprising:
at least one processor; and memory encoding computer executable instructions that, when executed by the at least one processor, perform a method comprising: identifying a test signal; in response to identifying the test signal, determining a physical manipulation of the device; determining an adjustment to at least one parameter of an auditory prosthesis based upon the physical manipulation; and sending the adjustment to a fitting application.
an accelerometer; a gyroscope; a magnetometer; a pressure sensor; a microphone; and a camera. 12. The device of claim 9, wherein the test signal is an audible tone. 13. The device of claim 12, wherein the audible tone is generated during a fitting process for an auditory prosthesis. 14. The device of claim 9, wherein determining the adjustment further comprises:
determining a type of physical manipulation, wherein a type of the adjustment is based at least upon the type of physical manipulation; and determining a degree of the physical manipulation, wherein the adjustment is scaled based upon the degree of physical manipulation. 15. The device of claim 14, wherein the physical manipulation comprises a physical displacement. 16. The device of claim 15, wherein the physical displacement comprises one of:
tilting the device; shaking the device; rotating the device; and moving the device relative to an external object. 17. The device of claim 16, wherein when the physical displacement is a forward tilt, the type of adjustment comprises an increase in loudness and wherein a magnitude of the increase is based upon the degree of the forward tilt relative to an initial position of the device. 18. The device of claim 16, wherein when the physical displacement is a backward tilt, the type of adjustment comprises a decrease in loudness. 19. The device of claim 14, wherein the physical manipulation comprises a tactile response. 20. The device of claim 19, wherein when the tactile response is a press, the adjustment comprises an increase in loudness and wherein a magnitude of the increase is based upon a strength of the press. 21. A method comprising:
generating a test signal; in response to generating the test signal, receiving data defining a physical manipulation of a remote device; determining an adjustment to at least one parameter of an auditory prosthesis, wherein determining the adjustment comprises:
determining a type of the physical manipulation of the remote device;
determining a type for the adjustment, wherein the type for the adjustment is based at least upon the type of physical manipulation;
determining a degree of the physical manipulation; and
determining a scale for the adjustment, wherein the scale is based upon the degree of physical manipulation; and
applying the adjustment to the at least one parameter. 22. The method of claim 21, wherein the test signal is an audible tone. 23. The method of claim 22, wherein the audible tone is generated during a fitting process for the auditory prosthesis. 24. A method comprising:
placing a device in a locked state; detecting a physical manipulation of the device; and adjusting at least one parameter of an auditory prosthesis based on the physical manipulation. 25. The method of claim 24, wherein the physical manipulation comprises at least one of a physical displacement and a tactile response. 26. The method of claim 25, wherein the device comprises at least one detection component, wherein the at least one detection component comprises:
an accelerometer; a gyroscope; a magnetometer; a pressure sensor; a microphone; and a camera. 27. The method of claim 26, wherein the physical manipulation is detected using the at least one detection component. 29. The method of claim 24, wherein adjusting the at least one parameter comprises sending an instruction to perform the adjustment to the auditory prosthesis. 30. The method of claim 29, wherein the instruction to perform the adjustment is sent without removing the device from the locked state.
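The adjustment logic of claims 4 through 8 (the type of physical manipulation selects the type of adjustment, and the degree of the manipulation scales its magnitude) can be sketched as below. The function name, the decibel unit, and the 1 dB-per-10-degrees scaling factor are illustrative assumptions; the patent claims only the relationship, not specific values.

```python
def determine_adjustment(manipulation: str, degrees: float) -> tuple[str, float]:
    """Map a detected physical displacement to a scaled loudness adjustment.

    A forward tilt increases loudness; a backward tilt decreases it. The
    magnitude grows with the tilt angle relative to the initial position
    (assumed scaling: 1 dB per 10 degrees of tilt).
    """
    if manipulation == "forward_tilt":
        return ("loudness", +abs(degrees) / 10.0)   # scaled increase
    if manipulation == "backward_tilt":
        return ("loudness", -abs(degrees) / 10.0)   # scaled decrease
    raise ValueError(f"unhandled manipulation: {manipulation}")
```

A 30-degree forward tilt thus yields a +3 dB loudness adjustment under this assumed scaling, and the same tilt backward yields −3 dB.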
10,523 | 10,523 | 15,786,359 | 2,612 | A system for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user includes a first display and a second display. The user views a real-world environment through the first display and the second display. The system includes a wearable visualization device that includes the first display and a fixed visualization device that includes the second display. The first display is configured to display a first layer of virtual features and the second display is configured to display a second layer of virtual features. The system includes a processor configured to generate the first layer of virtual features and the second layer of virtual features. The processor is configured to operatively communicate with the wearable visualization device and the fixed visualization device to coordinate presentation of the first layer of virtual features and the second layer of virtual features. | 1. A system for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user, wherein the user views a real-world environment through a first display and a second display, the system comprising:
a wearable visualization device comprising the first display, wherein the first display is configured to display a first layer of virtual features; a fixed visualization device comprising the second display, wherein the second display is configured to display a second layer of virtual features; and a processor configured to generate the first layer of virtual features and the second layer of virtual features, and wherein the processor is configured to operatively communicate with the wearable visualization device and the fixed visualization device to coordinate presentation of the first layer of virtual features and the second layer of virtual features. 2. The system of claim 1, wherein the first display is a transparent or semi-transparent display and is configured to enable the user, when wearing the wearable visualization device, to view the second display through the first display. 3. The system of claim 1, wherein the second display comprises a transparent light emitting diode display or a transparent organic light emitting diode display. 4. The system of claim 1, wherein the second display is coupled to a passenger ride vehicle. 5. The system of claim 1, wherein the processor is configured to coordinate the presentation of the first layer of virtual features and the second layer of virtual features with an element associated with an attraction at an amusement park. 6. The system of claim 1, wherein the first layer of virtual features comprises a virtual image of an object within a cabin of a passenger ride vehicle, and the second layer of virtual features comprises a virtual image of a feature on a window of the passenger ride vehicle. 7. The system of claim 1, wherein the first layer of virtual features comprises a virtual image of an object external to a cabin of a passenger ride vehicle. 8. 
The system of claim 1, comprising one or more cameras or sensors configured to monitor the real-world environment to facilitate coordination of the presentation of the first layer of virtual features and the second layer of virtual features. 9. A system for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user, the system comprising:
a passenger ride vehicle configured to traverse a path during a ride in an amusement park; a fixed visualization device comprising a transparent display coupled to the passenger ride vehicle, wherein the fixed visualization device is configured to overlay virtual features onto a real-world environment that is visible through the transparent display; and a processor configured to generate the virtual features and to coordinate presentation of the virtual features with ride effects of the ride. 10. The system of claim 9, comprising a wearable visualization device comprising another transparent display that is configured to display additional virtual features, wherein the wearable visualization device is configured to be worn by the user within the passenger ride vehicle during the ride. 11. The system of claim 10, wherein the processor is configured to coordinate presentation of the additional virtual features with the presentation of the virtual features and the ride effects. 12. The system of claim 9, wherein the transparent display comprises a transparent light emitting diode display or a transparent organic light emitting diode display. 13. The system of claim 9, wherein the processor is configured to coordinate the presentation of the virtual features with the ride effects by instructing the fixed visualization device to overlay the virtual features at a predetermined time during a ride cycle of the ride. 14. The system of claim 9, wherein the virtual features comprise cracks, condensation, charring, rain drops, snow, or a combination thereof. 15. The system of claim 9, wherein the transparent display is configured to be opaque when energized, thereby providing an illusion that a cabin of the passenger ride vehicle is enclosed by solid walls. 16. A method for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user, the method comprising:
generating, using a processor, a first layer of virtual features and a second layer of virtual features; displaying at a first display time, in response to instructions from the processor, the first layer of virtual features on a first display, wherein the first display is disposed within a wearable visualization device; and displaying at a second display time, in response to instructions from the processor, the second layer of virtual features on a second display, wherein the second display is disposed within a fixed visualization device that is physically separate from the wearable visualization device. 17. The method of claim 16, wherein the second display comprises a transparent display coupled to a passenger ride vehicle. 18. The method of claim 16, wherein the first display time and the second display time result in a coordinated display of the first layer of virtual features and the second layer of virtual features. 19. The method of claim 16, wherein the first display time and the second display time result in a coordinated display of the first layer of virtual features and the second layer of virtual features with a ride effect of a ride in an amusement park. 20. The method of claim 16, comprising receiving, at the processor, signals indicative of a real-world environment from one or more cameras or sensors, wherein the processor utilizes the received signals to determine the first display time and the second display time to facilitate coordination of presentation of the first layer of virtual features and the second layer of virtual features with elements in the real-world environment. | A system for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user includes a first display and a second display. The user views a real-world environment through the first display and the second display. 
The system includes a wearable visualization device that includes the first display and a fixed visualization device that includes the second display. The first display is configured to display a first layer of virtual features and the second display is configured to display a second layer of virtual features. The system includes a processor configured to generate the first layer of virtual features and the second layer of virtual features. The processor is configured to operatively communicate with the wearable visualization device and the fixed visualization device to coordinate presentation of the first layer of virtual features and the second layer of virtual features.1. A system for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user, wherein the user views a real-world environment through a first display and a second display, the system comprising:
a wearable visualization device comprising the first display, wherein the first display is configured to display a first layer of virtual features; a fixed visualization device comprising the second display, wherein the second display is configured to display a second layer of virtual features; and a processor configured to generate the first layer of virtual features and the second layer of virtual features, and wherein the processor is configured to operatively communicate with the wearable visualization device and the fixed visualization device to coordinate presentation of the first layer of virtual features and the second layer of virtual features. 2. The system of claim 1, wherein the first display is a transparent or semi-transparent display and is configured to enable the user, when wearing the wearable visualization device, to view the second display through the first display. 3. The system of claim 1, wherein the second display comprises a transparent light emitting diode display or a transparent organic light emitting diode display. 4. The system of claim 1, wherein the second display is coupled to a passenger ride vehicle. 5. The system of claim 1, wherein the processor is configured to coordinate the presentation of the first layer of virtual features and the second layer of virtual features with an element associated with an attraction at an amusement park. 6. The system of claim 1, wherein the first layer of virtual features comprises a virtual image of an object within a cabin of a passenger ride vehicle, and the second layer of virtual features comprises a virtual image of a feature on a window of the passenger ride vehicle. 7. The system of claim 1, wherein the first layer of virtual features comprises a virtual image of an object external to a cabin of a passenger ride vehicle. 8. 
The system of claim 1, comprising one or more cameras or sensors configured to monitor the real-world environment to facilitate coordination of the presentation of the first layer of virtual features and the second layer of virtual features. 9. A system for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user, the system comprising:
a passenger ride vehicle configured to traverse a path during a ride in an amusement park; a fixed visualization device comprising a transparent display coupled to the passenger ride vehicle, wherein the fixed visualization device is configured to overlay virtual features onto a real-world environment that is visible through the transparent display; and a processor configured to generate the virtual features and to coordinate presentation of the virtual features with ride effects of the ride. 10. The system of claim 9, comprising a wearable visualization device comprising another transparent display that is configured to display additional virtual features, wherein the wearable visualization device is configured to be worn by the user within the passenger ride vehicle during the ride. 11. The system of claim 10, wherein the processor is configured to coordinate presentation of the additional virtual features with the presentation of the virtual features and the ride effects. 12. The system of claim 9, wherein the transparent display comprises a transparent light emitting diode display or a transparent organic light emitting diode display. 13. The system of claim 9, wherein the processor is configured to coordinate the presentation of the virtual features with the ride effects by instructing the fixed visualization device to overlay the virtual features at a predetermined time during a ride cycle of the ride. 14. The system of claim 9, wherein the virtual features comprise cracks, condensation, charring, rain drops, snow, or a combination thereof. 15. The system of claim 9, wherein the transparent display is configured to be opaque when energized, thereby providing an illusion that a cabin of the passenger ride vehicle is enclosed by solid walls. 16. A method for providing an augmented reality, a virtual reality, and/or a mixed reality experience to a user, the method comprising:
generating, using a processor, a first layer of virtual features and a second layer of virtual features; displaying at a first display time, in response to instructions from the processor, the first layer of virtual features on a first display, wherein the first display is disposed within a wearable visualization device; and displaying at a second display time, in response to instructions from the processor, the second layer of virtual features on a second display, wherein the second display is disposed within a fixed visualization device that is physically separate from the wearable visualization device. 17. The method of claim 16, wherein the second display comprises a transparent display coupled to a passenger ride vehicle. 18. The method of claim 16, wherein the first display time and the second display time result in a coordinated display of the first layer of virtual features and the second layer of virtual features. 19. The method of claim 16, wherein the first display time and the second display time result in a coordinated display of the first layer of virtual features and the second layer of virtual features with a ride effect of a ride in an amusement park. 20. The method of claim 16, comprising receiving, at the processor, signals indicative of a real-world environment from one or more cameras or sensors, wherein the processor utilizes the received signals to determine the first display time and the second display time to facilitate coordination of presentation of the first layer of virtual features and the second layer of virtual features with elements in the real-world environment. | 2,600 |
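The coordination recited in claims 16 through 19 (choosing a first display time for the wearable device and a second display time for the fixed display so that both virtual-feature layers appear together with a ride effect) can be sketched as a latency-compensated scheduler. This is a hypothetical sketch: the per-device latency figures and all names are assumptions for illustration, not values from the patent.

```python
def schedule_display_times(effect_time: float,
                           wearable_latency: float = 0.020,
                           fixed_latency: float = 0.005) -> dict[str, float]:
    """Issue each layer's draw command early enough (by that device's
    assumed render latency, in seconds) that both layers become visible
    simultaneously at the ride-effect time."""
    return {
        "first_display_time": effect_time - wearable_latency,   # wearable device
        "second_display_time": effect_time - fixed_latency,     # fixed display
    }
```

With the assumed latencies, the slower wearable display is commanded earlier than the fixed display, so both layers land on the same ride-cycle instant.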
10,524 | 10,524 | 15,744,611 | 2,685 | A safety automation system for an occupiable structure includes a computing management system including a computer processor, and a computer readable storage medium configured to run embedded software and cloud server software. A fire detection device of the automation system is adapted to detect a fire condition and output an associated fire condition detected signal to the computing management system. A condition deterrence device is configured to accept a wireless command signal from the computing management system associated with the fire condition detected signal and for actuating an appliance to at least reduce risk presented by the condition. | 1. A safety automation system for an occupiable structure comprising:
a computing management system including a computer processor, and a computer readable storage medium configured to run embedded software and cloud server software; a detection device adapted to detect a hazard condition and output an associated hazard condition detected signal to the computing management system; and a condition deterrence device configured to accept an electrical command signal from the computing management system associated with the hazard condition detected signal and to control an appliance to at least reduce risk presented by the hazard condition. 2. The safety automation system set forth in claim 1, wherein the computing management system is configured to send a wireless notification signal of the hazard condition to a mobile user interface device. 3. The safety automation system set forth in claim 2, wherein the hazard condition is a fire condition and the wireless notification signal includes the location of portable fire extinguishers within the occupiable structure. 4. The safety automation system set forth in claim 1, wherein the computing management system is at least in-part a portion of a cloud computing system. 5. The safety automation system set forth in claim 1, wherein the appliance is an ingress/egress point of the occupiable structure, and the condition deterrence device includes a lock configured to at least lock or unlock the ingress/egress point as directed by the computing management system. 6. The safety automation system set forth in claim 1, wherein the appliance is a garage door and the condition deterrence device includes a motor configured to at least open or close the garage door as directed by the computing management system. 7. The safety automation system set forth in claim 1, wherein the appliance is a pet door and the condition deterrence device includes an actuator configured to open or close the pet door as directed by the computing management system. 8.
The safety automation system set forth in claim 1, wherein the appliance is a curtain and the condition deterrence device includes a motor configured to at least open or close the curtain as directed by the computing management system. 9. The safety automation system set forth in claim 1, wherein the appliance is a gas main and the condition deterrence device includes an actuated valve configured to shut-off the gas main as directed by the computing management system. 10. The safety automation system set forth in claim 1, wherein the appliance is a gas fireplace and the condition deterrence device includes an actuated valve configured to isolate the gas fireplace as directed by the computing management system. 11. The safety automation system set forth in claim 1, wherein the appliance is an electric load center and the condition deterrence device includes a circuit breaker for at least in-part isolating the load center as directed by the computing management system. 12. The safety automation system set forth in claim 1, wherein the appliance is a solar panel and the condition deterrence device includes a circuit breaker for electrically isolating the solar panel as directed by the computing management system. 13. The safety automation system set forth in claim 1, wherein the appliance is an electrostatic filter and the condition deterrence device is configured to energize the filter as directed by the computing management system. 14. The safety automation system set forth in claim 1, wherein the appliance is a humidifier and the condition deterrence device is configured to actuate the humidifier as directed by the computing management system. 15. 
The safety automation system set forth in claim 1, wherein the hazard condition is a fire condition, and the appliance is a HVAC system including a controller, a blower and a filter, and the condition deterrence device is configured to communicate with the controller as directed by the computing management system for removing smoke from, or redirecting smoke in, an air stream. 16. The safety automation system set forth in claim 1, wherein the appliance is a plurality of intelligent air vents constructed and arranged to close for deterring gas diffusion throughout the occupiable structure. 17. The safety automation system set forth in claim 1, wherein the command signal is a wireless command signal. 18. A safety automation system for an occupiable structure comprising:
a computing management system including a computer processor, and a computer readable storage medium configured to run embedded software; a first detection device adapted to detect a hazard condition and to output an associated hazard condition detected signal to the computing management system; and a second detection device adapted to detect the presence of an occupant in the occupiable structure and the occupant's proximity to the first detection device, and to output an occupied signal to the computing management system, and wherein the computing management system adjusts the sensitivity of the first detection device based on occupancy and proximity of the occupant to the first detection device. 19. A safety automation system for an occupiable structure comprising:
a detection device disposed in the occupiable structure; and a computing management system in wireless communication with the detection device, the computing management system including a computer processor, and a computer readable storage medium configured to run embedded software, and wherein the computing management system is configured to adjust sensitivity of the detection device based on a time of day. 20. A safety automation system for an occupiable structure comprising:
a detection device disposed in the occupiable structure; a Global Positioning System transmitter device; and a computing management system in wireless communication with the detection device, the computing management system including a computer processor and a computer readable storage medium configured to run embedded software, and wherein the computing management system is configured to determine a location of the Global Positioning System transmitter device and adjust a sensitivity of the detection device based on the location of the Global Positioning System transmitter device. | A safety automation system for an occupiable structure includes a computing management system including a computer processor, and a computer readable storage medium configured to run embedded software and cloud server software. A fire detection device of the automation system is adapted to detect a fire condition and output an associated fire condition detected signal to the computing management system. A condition deterrence device is configured to accept a wireless command signal from the computing management system associated with the fire condition detected signal and for actuating an appliance to at least reduce risk presented by the condition.1. A safety automation system for an occupiable structure comprising:
a computing management system including a computer processor, and a computer readable storage medium configured to run embedded software and cloud server software; a detection device adapted to detect a hazard condition and output an associated hazard condition detected signal to the computing management system; and a condition deterrence device configured to accept an electrical command signal from the computing management system associated with the hazard condition detected signal and to control an appliance to at least reduce risk presented by the hazard condition. 2. The safety automation system set forth in claim 1, wherein the computing management system is configured to send a wireless notification signal of the hazard condition to a mobile user interface device. 3. The safety automation system set forth in claim 2, wherein the hazard condition is a fire condition and the wireless notification signal includes the location of portable fire extinguishers within the occupiable structure. 4. The safety automation system set forth in claim 1, wherein the computing management system is at least in-part a portion of a cloud computing system. 5. The safety automation system set forth in claim 1, wherein the appliance is an ingress/egress point of the occupiable structure, and the condition deterrence device includes a lock configured to at least lock or unlock the ingress/egress point as directed by the computing management system. 6. The safety automation system set forth in claim 1, wherein the appliance is a garage door and the condition deterrence device includes a motor configured to at least open or close the garage door as directed by the computing management system. 7. The safety automation system set forth in claim 1, wherein the appliance is a pet door and the condition deterrence device includes an actuator configured to open or close the pet door as directed by the computing management system. 8. 
The safety automation system set forth in claim 1, wherein the appliance is a curtain and the condition deterrence device includes a motor configured to at least open or close the curtain as directed by the computing management system. 9. The safety automation system set forth in claim 1, wherein the appliance is a gas main and the condition deterrence device includes an actuated valve configured to shut-off the gas main as directed by the computing management system. 10. The safety automation system set forth in claim 1, wherein the appliance is a gas fireplace and the condition deterrence device includes an actuated valve configured to isolate the gas fireplace as directed by the computing management system. 11. The safety automation system set forth in claim 1, wherein the appliance is an electric load center and the condition deterrence device includes a circuit breaker for at least in-part isolating the load center as directed by the computing management system. 12. The safety automation system set forth in claim 1, wherein the appliance is a solar panel and the condition deterrence device includes a circuit breaker for electrically isolating the solar panel as directed by the computing management system. 13. The safety automation system set forth in claim 1, wherein the appliance is an electrostatic filter and the condition deterrence device is configured to energize the filter as directed by the computing management system. 14. The safety automation system set forth in claim 1, wherein the appliance is a humidifier and the condition deterrence device is configured to actuate the humidifier as directed by the computing management system. 15. 
The safety automation system set forth in claim 1, wherein the hazard condition is a fire condition, and the appliance is a HVAC system including a controller, a blower and a filter, and the condition deterrence device is configured to communicate with the controller as directed by the computing management system for removing smoke from, or redirecting smoke in, an air stream. 16. The safety automation system set forth in claim 1, wherein the appliance is a plurality of intelligent air vents constructed and arranged to close for deterring gas diffusion throughout the occupiable structure. 17. The safety automation system set forth in claim 1, wherein the command signal is a wireless command signal. 18. A safety automation system for an occupiable structure comprising:
a computing management system including a computer processor, and a computer readable storage medium configured to run embedded software; a first detection device adapted to detect a hazard condition and to output an associated hazard condition detected signal to the computing management system; and a second detection device adapted to detect the presence of an occupant in the occupiable structure and the occupant's proximity to the first detection device, and to output an occupied signal to the computing management system, and wherein the computing management system adjusts the sensitivity of the first detection device based on occupancy and proximity of the occupant to the first detection device. 19. A safety automation system for an occupiable structure comprising:
a detection device disposed in the occupiable structure; and a computing management system in wireless communication with the detection device, the computing management system including a computer processor, and a computer readable storage medium configured to run embedded software, and wherein the computing management system is configured to adjust sensitivity of the detection device based on a time of day. 20. A safety automation system for an occupiable structure comprising:
a detection device disposed in the occupiable structure; a Global Positioning System transmitter device; and a computing management system in wireless communication with the detection device, the computing management system including a computer processor and a computer readable storage medium configured to run embedded software, and wherein the computing management system is configured to determine a location of the Global Positioning System transmitter device and adjust a sensitivity of the detection device based on the location of the Global Positioning System transmitter device. | 2,600
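The adaptive-sensitivity behavior recited in claims 18-20 of the row above (occupant proximity, time of day, and a GPS transmitter's location each adjusting a detector's sensitivity) can be sketched as follows. This is an illustrative reading only; the function name, thresholds, and multipliers are invented for the example and do not appear in the application.

```python
from typing import Optional

def adjust_sensitivity(base: float,
                       occupant_distance_m: Optional[float],
                       hour_of_day: int,
                       gps_inside_structure: bool) -> float:
    """Return the adjusted sensitivity for a detection device."""
    sensitivity = base
    # Claim 18: lower sensitivity when an occupant is near the detector
    # (e.g. cooking close to a smoke detector causes nuisance alarms).
    if occupant_distance_m is not None and occupant_distance_m < 3.0:
        sensitivity *= 0.5
    # Claim 19: raise sensitivity at night, when occupants may be asleep.
    if hour_of_day >= 22 or hour_of_day < 6:
        sensitivity *= 1.5
    # Claim 20: lower sensitivity when the GPS transmitter (e.g. a phone)
    # places the occupant away from the structure.
    if not gps_inside_structure:
        sensitivity *= 0.8
    return sensitivity

print(adjust_sensitivity(1.0, 2.0, 23, True))   # occupant nearby at night
```

Each claim contributes one independent adjustment, so the three signals compose multiplicatively in this sketch; the application itself does not specify how the adjustments combine.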
10,525 | 10,525 | 15,770,989 | 2,633 | Efficient techniques to signal codebook subset restriction bit maps are provided. In some embodiments, a method of operation of a node of a cellular communications network includes determining a codebook restriction for a wireless device. The codebook restriction reduces a full codebook of the wireless device to a reduced codebook. The method also includes providing the codebook restriction to the wireless device with an indication of one or more ranks to which the codebook restriction applies. In some embodiments, this enables reduced signaling overhead from upper layers, improving the throughput of data traffic channels. This may also enable reduced RRC signaling message failures and also reduced latency. | 1. A method of operation of a node of a cellular communications network comprising:
determining a codebook restriction for a wireless device, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for a plurality of ranks; and providing the codebook restriction to the wireless device with an indication of one or more ranks of the plurality of ranks to which the codebook restriction applies. 2. The method of claim 1 wherein providing the codebook restriction to the wireless device comprises providing the codebook restriction to the wireless device by providing, to the wireless device, an initial Radio Resource Control, RRC, configuration, or
comprises providing the codebook restriction to the wireless device by providing, to the wireless device, a Radio Resource Control, RRC, re-configuration, or
comprises providing the codebook restriction to the wireless device by using physical layer signaling, or
comprises providing, to the wireless device, a field called Applied Ranks comprising the indication of the one or more ranks of the plurality of ranks to which the codebook restriction applies. 3-5. (canceled) 6. The method of claim 1 wherein the codebook restriction comprises codebook restrictions for a subset of all possible ranks, the subset having fewer than all possible ranks, and providing the codebook restriction to the wireless device comprises providing, to the wireless device, an indication of the subset of all possible ranks to which the codebook restrictions apply. 7. The method of claim 1 wherein the codebook restriction is for a two-dimensional antenna system. 8. The method of claim 7 wherein providing the codebook restriction comprises providing the codebook restriction for a first direction in the two-dimensional antenna system. 9. The method of claim 8 wherein the first direction is a vertical or a horizontal direction. 10. (canceled) 11. The method of claim 7 wherein providing the codebook restriction comprises providing the codebook restriction for a first direction and a second direction in the two-dimensional antenna system. 12. The method of claim 11 wherein the first direction is a horizontal direction and the second direction is a vertical direction. 13-15. (canceled) 16. A node comprising:
circuitry comprising one or more processors and a memory containing instructions whereby the node is configured to:
determine a codebook restriction for a wireless device, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for a plurality of ranks; and
provide the codebook restriction to the wireless device with an indication of one or more ranks of the plurality of ranks to which the codebook restriction applies. 17-30. (canceled) 31. A method of operation of a wireless device of a cellular communications network comprising:
receiving a codebook restriction with an indication of one or more ranks of a plurality of ranks to which the codebook restriction applies, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for the plurality of ranks; and transmitting channel feedback to a node of the cellular communications network based on the codebook restriction. 32. The method of claim 31 wherein receiving the codebook restriction comprises receiving the codebook restriction by receiving an initial Radio Resource Control, RRC, configuration, or
comprises receiving the codebook restriction by receiving physical layer signaling, or
comprises receiving the codebook restriction by receiving an RRC re-configuration, or
comprises receiving a field called Applied Ranks comprising the indication of the one or more ranks of the plurality of ranks to which the codebook restriction applies. 33-35. (canceled) 36. The method of claim 31 wherein the codebook restriction comprises codebook restrictions for a subset of all possible ranks, the subset having fewer than all possible ranks, and receiving the codebook restriction comprises receiving an indication of the subset of all possible ranks to which the codebook restrictions apply. 37. The method of claim 31 wherein the codebook restriction is for a two-dimensional antenna system. 38. The method of claim 37 wherein receiving the codebook restriction comprises receiving the codebook restriction for a first direction in the two-dimensional antenna system. 39. The method of claim 38 wherein the first direction is a vertical or a horizontal direction. 40. (canceled) 41. The method of claim 37 wherein receiving the codebook restriction comprises receiving the codebook restriction for a first direction and a second direction in the two-dimensional antenna system. 42. The method of claim 41 wherein the first direction is a horizontal direction and the second direction is a vertical direction. 43-45. (canceled) 46. A User Equipment, UE, comprising:
circuitry comprising one or more processors and a memory containing instructions whereby the UE is configured to:
receive a codebook restriction with an indication of one or more ranks of a plurality of ranks to which the codebook restriction applies, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for the plurality of ranks; and
transmit channel feedback to a node of a cellular communications network based on the codebook restriction. 47-60. (canceled) 61. A User Equipment, UE, adapted to:
receive a codebook restriction with an indication of one or more ranks of a plurality of ranks to which the codebook restriction applies, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for the plurality of ranks; and transmit channel feedback to a node of a cellular communications network based on the codebook restriction. 62-65. (canceled) 66. A node adapted to:
determine a codebook restriction for a wireless device, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for a plurality of ranks; and provide the codebook restriction to the wireless device with an indication of one or more ranks of the plurality of ranks to which the codebook restriction applies. 67-70. (canceled) | Efficient techniques to signal codebook subset restriction bit maps are provided. In some embodiments, a method of operation of a node of a cellular communications network includes determining a codebook restriction for a wireless device. The codebook restriction reduces a full codebook of the wireless device to a reduced codebook. The method also includes providing the codebook restriction to the wireless device with an indication of one or more ranks to which the codebook restriction applies. In some embodiments, this enables reduced signaling overhead from upper layers, improving the throughput of data traffic channels. This may also enable reduced RRC signaling message failures and also reduced latency.1. A method of operation of a node of a cellular communications network comprising:
determining a codebook restriction for a wireless device, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for a plurality of ranks; and providing the codebook restriction to the wireless device with an indication of one or more ranks of the plurality of ranks to which the codebook restriction applies. 2. The method of claim 1 wherein providing the codebook restriction to the wireless device comprises providing the codebook restriction to the wireless device by providing, to the wireless device, an initial Radio Resource Control, RRC, configuration, or
comprises providing the codebook restriction to the wireless device by providing, to the wireless device, a Radio Resource Control, RRC, re-configuration, or
comprises providing the codebook restriction to the wireless device by using physical layer signaling, or
comprises providing, to the wireless device, a field called Applied Ranks comprising the indication of the one or more ranks of the plurality of ranks to which the codebook restriction applies. 3-5. (canceled) 6. The method of claim 1 wherein the codebook restriction comprises codebook restrictions for a subset of all possible ranks, the subset having fewer than all possible ranks, and providing the codebook restriction to the wireless device comprises providing, to the wireless device, an indication of the subset of all possible ranks to which the codebook restrictions apply. 7. The method of claim 1 wherein the codebook restriction is for a two-dimensional antenna system. 8. The method of claim 7 wherein providing the codebook restriction comprises providing the codebook restriction for a first direction in the two-dimensional antenna system. 9. The method of claim 8 wherein the first direction is a vertical or a horizontal direction. 10. (canceled) 11. The method of claim 7 wherein providing the codebook restriction comprises providing the codebook restriction for a first direction and a second direction in the two-dimensional antenna system. 12. The method of claim 11 wherein the first direction is a horizontal direction and the second direction is a vertical direction. 13-15. (canceled) 16. A node comprising:
circuitry comprising one or more processors and a memory containing instructions whereby the node is configured to:
determine a codebook restriction for a wireless device, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for a plurality of ranks; and
provide the codebook restriction to the wireless device with an indication of one or more ranks of the plurality of ranks to which the codebook restriction applies. 17-30. (canceled) 31. A method of operation of a wireless device of a cellular communications network comprising:
receiving a codebook restriction with an indication of one or more ranks of a plurality of ranks to which the codebook restriction applies, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for the plurality of ranks; and transmitting channel feedback to a node of the cellular communications network based on the codebook restriction. 32. The method of claim 31 wherein receiving the codebook restriction comprises receiving the codebook restriction by receiving an initial Radio Resource Control, RRC, configuration, or
comprises receiving the codebook restriction by receiving physical layer signaling, or
comprises receiving the codebook restriction by receiving an RRC re-configuration, or
comprises receiving a field called Applied Ranks comprising the indication of the one or more ranks of the plurality of ranks to which the codebook restriction applies. 33-35. (canceled) 36. The method of claim 31 wherein the codebook restriction comprises codebook restrictions for a subset of all possible ranks, the subset having fewer than all possible ranks, and receiving the codebook restriction comprises receiving an indication of the subset of all possible ranks to which the codebook restrictions apply. 37. The method of claim 31 wherein the codebook restriction is for a two-dimensional antenna system. 38. The method of claim 37 wherein receiving the codebook restriction comprises receiving the codebook restriction for a first direction in the two-dimensional antenna system. 39. The method of claim 38 wherein the first direction is a vertical or a horizontal direction. 40. (canceled) 41. The method of claim 37 wherein receiving the codebook restriction comprises receiving the codebook restriction for a first direction and a second direction in the two-dimensional antenna system. 42. The method of claim 41 wherein the first direction is a horizontal direction and the second direction is a vertical direction. 43-45. (canceled) 46. A User Equipment, UE, comprising:
circuitry comprising one or more processors and a memory containing instructions whereby the UE is configured to:
receive a codebook restriction with an indication of one or more ranks of a plurality of ranks to which the codebook restriction applies, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for the plurality of ranks; and
transmit channel feedback to a node of a cellular communications network based on the codebook restriction. 47-60. (canceled) 61. A User Equipment, UE, adapted to:
receive a codebook restriction with an indication of one or more ranks of a plurality of ranks to which the codebook restriction applies, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for the plurality of ranks; and transmit channel feedback to a node of a cellular communications network based on the codebook restriction. 62-65. (canceled) 66. A node adapted to:
determine a codebook restriction for a wireless device, the codebook restriction being a restriction that reduces a full codebook of the wireless device to a reduced codebook, the full codebook comprising precoding matrices for a plurality of ranks; and provide the codebook restriction to the wireless device with an indication of one or more ranks of the plurality of ranks to which the codebook restriction applies. 67-70. (canceled) | 2,600 |
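The rank-scoped restriction of claim 1 in the row above, a restriction signaled together with the ranks it applies to, so that other ranks keep the full codebook, can be sketched as follows. The dictionary layout, function name, and index encoding are assumptions made for illustration, not the application's actual signaling format.

```python
def apply_restriction(full_codebook, allowed_precoders, applied_ranks):
    """full_codebook: {rank: [precoder indices]}
    allowed_precoders: indices permitted by the restriction bitmap
    applied_ranks: ranks to which the restriction applies"""
    reduced = {}
    for rank, precoders in full_codebook.items():
        if rank in applied_ranks:
            # Restricted rank: keep only the precoders the bitmap allows.
            reduced[rank] = [p for p in precoders if p in allowed_precoders]
        else:
            # Rank not signaled in Applied Ranks: codebook is unrestricted.
            reduced[rank] = list(precoders)
    return reduced

full = {1: [0, 1, 2, 3], 2: [0, 1, 2]}
print(apply_restriction(full, allowed_precoders={0, 2}, applied_ranks={1}))
```

Because the restriction is scoped to the signaled ranks only, a single bitmap need not cover every rank's precoders, which is the signaling-overhead reduction the abstract describes.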
10,526 | 10,526 | 15,460,083 | 2,637 | The disclosed embodiments provide an optically switched network system. This system includes a passive optical switch with N inputs and N outputs, which can communicate different wavelengths from each of the N inputs to each of the N outputs. It also includes N end-nodes, and N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs. The optically switched network is organized into a virtual data plane and a virtual control plane, which both communicate through the same underlying physical network. The virtual data plane provides any-to-all parallel connectivity for data transmissions among the N end-nodes. The virtual control plane is organized as a ring that serially connects the N end-nodes, wherein the ring communicates arbitration information among distributed-arbitration logic at each of the N end-nodes. | 1. An optically switched network, comprising:
a passive optical switch with N inputs and N outputs, wherein the passive optical switch can communicate different wavelengths from each of the N inputs to each of the N outputs; N end-nodes; and N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs of the passive optical switch; wherein the optically switched network is organized into two overlay networks over a same underlying physical network: one for a virtual data plane and one for a virtual control plane, wherein both the virtual data plane and the virtual control plane communicate through the same underlying physical network; wherein the virtual data plane is organized in a star topology that provides any-to-all parallel connectivity for data transmissions among the N end-nodes; and wherein the virtual control plane is organized as a ring that serially connects the N end-nodes, wherein the ring is used to communicate arbitration information among distributed-arbitration logic located at each of the N end-nodes. 2. The optically switched network of claim 1, wherein the virtual control plane uses one or more control wavelengths λc to communicate the arbitration information between consecutive end-nodes in the ring. 3. The optically switched network of claim 1, wherein the virtual data plane uses one or more data wavelengths λi, which are different from one or more control wavelengths λc, to provide any-to-all parallel connectivity for data transmissions among the N end-nodes. 4. The optically switched network of claim 1,
wherein each of the N end-nodes can transmit on the virtual control plane simultaneously with transmitting on the virtual data plane; and wherein each of the N end-nodes can receive on the virtual control plane simultaneously with receiving on the virtual data plane. 5. The optically switched network of claim 1, wherein the distributed-arbitration logic at each of the N end-nodes decides independently when and where to transmit data. 6. The optically switched network of claim 1, wherein each of the N end-nodes maintains packet-queuing data structures for storing packets to be transmitted across the optically switched network. 7. The optically switched network of claim 1, wherein the virtual control plane uses a token to communicate the arbitration information between consecutive end-nodes on the ring. 8. The optically switched network of claim 1, wherein each of the N end-nodes includes a tunable laser to facilitate transmissions from the end-node. 9. The optically switched network of claim 1, wherein the passive optical switch comprises a wavelength-division multiplexing (WDM) switch, which provides any-to-all parallel connectivity for multiple wavelengths among the N end-nodes. 10. The optically switched network of claim 1, wherein the passive optical switch is implemented using one or more silicon-photonic chips. 11. An enterprise computer system, comprising:
a set of servers; a set of storage devices; and an optically switched network that facilitates communications among the set of servers and the set of storage devices, wherein the set of servers and the set of storage devices comprise end-nodes in the optically switched network, wherein the optically switched network includes:
a passive optical switch with N inputs and N outputs, wherein the passive optical switch can communicate different wavelengths from each of the N inputs to each of the N outputs;
N end-nodes; and
N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs of the passive optical switch;
wherein the optically switched network is organized into two overlay networks over a same underlying physical network: one for a virtual data plane and one for a virtual control plane, wherein both the virtual data plane and the virtual control plane communicate through the same underlying physical network;
wherein the virtual data plane is organized in a star topology that provides any-to-all parallel connectivity for data transmissions among the N end-nodes; and
wherein the virtual control plane is organized as a ring that serially connects the N end-nodes, wherein the ring is used to communicate arbitration information among distributed-arbitration logic located at each of the N end-nodes. 12. The enterprise computer system of claim 11, wherein the virtual control plane uses one or more control wavelengths λc to communicate the arbitration information between consecutive end-nodes in the ring. 13. The enterprise computer system of claim 11, wherein the virtual data plane uses one or more data wavelengths λi, which are different from one or more control wavelengths λc, to provide any-to-all parallel connectivity for data transmissions among the N end-nodes. 14. The enterprise computer system of claim 11,
wherein each of the N end-nodes can transmit on the virtual control plane simultaneously with transmitting on the virtual data plane; and wherein each of the N end-nodes can receive on the virtual control plane simultaneously with receiving on the virtual data plane. 15. The enterprise computer system of claim 11, wherein the distributed-arbitration logic at each of the N end-nodes decides independently when and where to transmit data. 16. The enterprise computer system of claim 11, wherein each of the N end-nodes maintains packet-queuing data structures for storing packets to be transmitted across the optically switched network. 17. The enterprise computer system of claim 11, wherein the virtual control plane uses a token to communicate the arbitration information between consecutive end-nodes on the ring. 18. The enterprise computer system of claim 11, wherein each of the N end-nodes includes a tunable laser to facilitate transmissions from the end-node. 19. The enterprise computer system of claim 11, wherein the passive optical switch comprises a wavelength-division multiplexing (WDM) switch, which provides any-to-all parallel connectivity for multiple wavelengths among the N end-nodes. 20. A method for facilitating communications through an optically switched network, comprising:
operating the optically switched network, wherein the optically switched network comprises:
a passive optical switch with N inputs and N outputs, wherein the passive optical switch can communicate different wavelengths from each of the N inputs to each of the N outputs;
N end-nodes; and
N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs of the passive optical switch;
wherein the optically switched network is organized into two overlay networks over a same underlying physical network: one for a virtual data plane and one for a virtual control plane, wherein both the virtual data plane and the virtual control plane communicate through the same underlying physical network; and
while the optically switched network is operating:
using the virtual control plane to communicate arbitration information among distributed-arbitration logic located at each of the N end-nodes, wherein the virtual control plane is organized as a ring that serially connects the N end-nodes; and
using the distributed-arbitration logic to coordinate data transmissions through the virtual data plane, wherein the virtual data plane is organized in a star topology that provides any-to-all parallel connectivity for data transmissions among the N end-nodes. | The disclosed embodiments provide an optically switched network system. This system includes a passive optical switch with N inputs and N outputs, which can communicate different wavelengths from each of the N inputs to each of the N outputs. It also includes N end-nodes, and N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs. The optically switched network is organized into a virtual data plane and a virtual control plane, which both communicate through the same underlying physical network. The virtual data plane provides any-to-all parallel connectivity for data transmissions among the N end-nodes. The virtual control plane is organized as a ring that serially connects the N end-nodes, wherein the ring communicates arbitration information among distributed-arbitration logic at each of the N end-nodes.1. An optically switched network, comprising:
a passive optical switch with N inputs and N outputs, wherein the passive optical switch can communicate different wavelengths from each of the N inputs to each of the N outputs; N end-nodes; and N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs of the passive optical switch; wherein the optically switched network is organized into two overlay networks over a same underlying physical network: one for a virtual data plane and one for a virtual control plane, wherein both the virtual data plane and the virtual control plane communicate through the same underlying physical network; wherein the virtual data plane is organized in a star topology that provides any-to-all parallel connectivity for data transmissions among the N end-nodes; and wherein the virtual control plane is organized as a ring that serially connects the N end-nodes, wherein the ring is used to communicate arbitration information among distributed-arbitration logic located at each of the N end-nodes. 2. The optically switched network of claim 1, wherein the virtual control plane uses one or more control wavelengths λc to communicate the arbitration information between consecutive end-nodes in the ring. 3. The optically switched network of claim 1, wherein the virtual data plane uses one or more data wavelengths λi, which are different from one or more control wavelengths λc, to provide any-to-all parallel connectivity for data transmissions among the N end-nodes. 4. The optically switched network of claim 1,
wherein each of the N end-nodes can transmit on the virtual control plane simultaneously with transmitting on the virtual data plane; and wherein each of the N end-nodes can receive on the virtual control plane simultaneously with receiving on the virtual data plane. 5. The optically switched network of claim 1, wherein the distributed-arbitration logic at each of the N end-nodes decides independently when and where to transmit data. 6. The optically switched network of claim 1, wherein each of the N end-nodes maintains packet-queuing data structures for storing packets to be transmitted across the optically switched network. 7. The optically switched network of claim 1, wherein the virtual control plane uses a token to communicate the arbitration information between consecutive end-nodes on the ring. 8. The optically switched network of claim 1, wherein each of the N end-nodes includes a tunable laser to facilitate transmissions from the end-node. 9. The optically switched network of claim 1, wherein the passive optical switch comprises a wavelength-division multiplexing (WDM) switch, which provides any-to-all parallel connectivity for multiple wavelengths among the N end-nodes. 10. The optically switched network of claim 1, wherein the passive optical switch is implemented using one or more silicon-photonic chips. 11. An enterprise computer system, comprising:
a set of servers; a set of storage devices; and an optically switched network that facilitates communications among the set of servers and the set of storage devices, wherein the set of servers and the set of storage devices comprise end-nodes in the optically switched network, wherein the optically switched network includes:
a passive optical switch with N inputs and N outputs, wherein the passive optical switch can communicate different wavelengths from each of the N inputs to each of the N outputs;
N end-nodes; and
N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs of the passive optical switch;
wherein the optically switched network is organized into two overlay networks over a same underlying physical network: one for a virtual data plane and one for a virtual control plane, wherein both the virtual data plane and the virtual control plane communicate through the same underlying physical network;
wherein the virtual data plane is organized in a star topology that provides any-to-all parallel connectivity for data transmissions among the N end-nodes; and
wherein the virtual control plane is organized as a ring that serially connects the N end-nodes, wherein the ring is used to communicate arbitration information among distributed-arbitration logic located at each of the N end-nodes. 12. The enterprise computer system of claim 11, wherein the virtual control plane uses one or more control wavelengths λc to communicate the arbitration information between consecutive end-nodes in the ring. 13. The enterprise computer system of claim 11, wherein the virtual data plane uses one or more data wavelengths λi, which are different from one or more control wavelengths λc, to provide any-to-all parallel connectivity for data transmissions among the N end-nodes. 14. The enterprise computer system of claim 11,
wherein each of the N end-nodes can transmit on the virtual control plane simultaneously with transmitting on the virtual data plane; and wherein each of the N end-nodes can receive on the virtual control plane simultaneously with receiving on the virtual data plane. 15. The enterprise computer system of claim 11, wherein the distributed-arbitration logic at each of the N end-nodes decides independently when and where to transmit data. 16. The enterprise computer system of claim 11, wherein each of the N end-nodes maintains packet-queuing data structures for storing packets to be transmitted across the optically switched network. 17. The enterprise computer system of claim 11, wherein the virtual control plane uses a token to communicate the arbitration information between consecutive end-nodes on the ring. 18. The enterprise computer system of claim 11, wherein each of the N end-nodes includes a tunable laser to facilitate transmissions from the end-node. 19. The enterprise computer system of claim 11, wherein the passive optical switch comprises a wavelength-division multiplexing (WDM) switch, which provides any-to-all parallel connectivity for multiple wavelengths among the N end-nodes. 20. A method for facilitating communications through an optically switched network, comprising:
operating the optically switched network, wherein the optically switched network comprises:
a passive optical switch with N inputs and N outputs, wherein the passive optical switch can communicate different wavelengths from each of the N inputs to each of the N outputs;
N end-nodes; and
N pairs of optical fibers, wherein each pair connects one of the N end-nodes to one of the N inputs and one of the N outputs of the passive optical switch;
wherein the optically switched network is organized into two overlay networks over a same underlying physical network: one for a virtual data plane and one for a virtual control plane, wherein both the virtual data plane and the virtual control plane communicate through the same underlying physical network; and
while the optically switched network is operating:
using the virtual control plane to communicate arbitration information among distributed-arbitration logic located at each of the N end-nodes, wherein the virtual control plane is organized as a ring that serially connects the N end-nodes; and
using the distributed-arbitration logic to coordinate data transmissions through the virtual data plane, wherein the virtual data plane is organized in a star topology that provides any-to-all parallel connectivity for data transmissions among the N end-nodes. | 2,600 |
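The record above (application 15,676,106's predecessor row) describes a control ring that serially passes arbitration information among distributed-arbitration logic at each end-node, with each node deciding independently when and where to transmit on the star-topology data plane. A minimal sketch of that token-ring arbitration pattern follows; the class names, scheduling policy, and FIFO queuing are illustrative assumptions, not the patent's actual implementation.

```python
# Sketch of token-ring arbitration over a virtual control plane. All names
# and the first-come-first-served policy are assumptions for illustration.

from collections import deque

class EndNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.queue = deque()  # packet-queuing data structure (cf. claim 16)

    def enqueue(self, dest, payload):
        self.queue.append((dest, payload))

    def on_token(self):
        """On receiving the token, decide independently when and where to
        transmit (cf. claim 15). Returns (src, dest, payload) or None."""
        if self.queue:
            dest, payload = self.queue.popleft()
            return (self.node_id, dest, payload)
        return None

def run_ring(nodes, rounds):
    """Pass the token serially around the ring; collect data-plane sends."""
    sent = []
    for _ in range(rounds):
        for node in nodes:  # ring order: 0 -> 1 -> ... -> N-1 -> 0
            tx = node.on_token()
            if tx is not None:
                sent.append(tx)
    return sent

nodes = [EndNode(i) for i in range(4)]
nodes[0].enqueue(2, "a")
nodes[3].enqueue(1, "b")
sends = run_ring(nodes, rounds=1)
print(sends)  # [(0, 2, 'a'), (3, 1, 'b')]
```

Because only the token holder schedules a transmission, no central arbiter is needed, which matches the claims' point that arbitration logic is distributed across the N end-nodes.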
10,527 | 10,527 | 15,676,106 | 2,657 | One embodiment provides a method, including: receiving, at an information handling device, user input comprising a potential wake word; determining, using a processor, whether the potential wake word is associated with a stored wake word; and responsive to determining that the potential wake word is associated with the stored wake word, activating, based on the potential wake word, a digital assistant associated with the information handling device. Other aspects are described and claimed. | 1. A method, comprising:
receiving, at an information handling device, user input comprising a potential wake word; determining, using a processor, whether the potential wake word is associated with a stored wake word, wherein the potential wake word comprises a wake word sharing a degree of phonetic similarity and having identifiable differences to the stored wake word; and responsive to determining that the potential wake word is associated with the stored wake word, activating, based on the potential wake word, a digital assistant associated with the information handling device. 2. The method of claim 1, wherein the potential wake word is a phonetic variation of the stored wake word. 3. The method of claim 1, wherein the determining comprises comparing the potential wake word against a list of known variants associated with the stored wake word. 4. The method of claim 3, wherein the activating comprises activating the digital assistant responsive to matching the potential wake word with at least one known variant from the list of known variants. 5. The method of claim 1, wherein the receiving comprises receiving the potential wake word multiple times. 6. The method of claim 5, wherein the determining comprises determining that the potential wake word is associated with the stored wake word based upon the receiving the potential wake word multiple times in a predetermined time period. 7. The method of claim 5, further comprising storing the potential wake word in a list of known variants associated with the stored wake word responsive to receiving the potential wake word multiple times. 8. The method of claim 1, wherein the determining comprises identifying, using location data, a location of a user providing the user input and associating the potential wake word with the stored wake word based upon the identified location. 9. The method of claim 1, wherein the determining comprises identifying whether additional user input, provided after the potential wake word, comprises a user command. 
10. The method of claim 9, responsive to identifying that a user command is provided after the potential wake word, associating the potential wake word with a stored wake word. 11. An information handling device, comprising:
a processor; a memory device that stores instructions executable by the processor to: receive user input comprising a potential wake word; determine whether the potential wake word is associated with a stored wake word, wherein the potential wake word comprises a wake word sharing a degree of phonetic similarity and having identifiable differences to the stored wake word; and responsive to determining that the potential wake word is associated with the stored wake word, activate, based on the potential wake word, a digital assistant associated with the information handling device. 12. The information handling device of claim 11, wherein the potential wake word is a phonetic variation of the stored wake word. 13. The information handling device of claim 11, wherein the instructions executable by the processor to determine comprise instructions executable by the processor to compare the potential wake word against a list of known variants associated with the stored wake word. 14. The information handling device of claim 13, wherein the instructions executable by the processor to activate comprise instructions executable by the processor to activate the digital assistant responsive to matching the potential wake word with at least one known variant from the list of known variants. 15. The information handling device of claim 11, wherein the instructions executable by the processor to receive comprise instructions executable by the processor to receive the potential wake word multiple times. 16. The information handling device of claim 15, wherein the instructions executable by the processor to determine comprise instructions executable by the processor to determine that the potential wake word is associated with the stored wake word based upon the instructions executable by the processor to receive the potential wake word multiple times in a predetermined time period. 17. 
The information handling device of claim 15, wherein the instructions are further executable by the processor to store the potential wake word in a list of known variants associated with the stored wake word responsive to receiving the potential wake word multiple times. 18. The information handling device of claim 11, wherein the instructions executable by the processor to determine comprise instructions executable by the processor to identify whether additional user input, provided after the potential wake word, comprises a user command. 19. The information handling device of claim 18, wherein the instructions are further executable by the processor to associate, responsive to identifying that a user command is provided after the potential wake word, the potential wake word with a stored wake word. 20. A product, comprising:
a storage device that stores code, the code being executable by a processor and comprising: code that receives user input comprising a potential wake word; code that determines whether the potential wake word is associated with a stored wake word, wherein the potential wake word comprises a wake word sharing a degree of phonetic similarity and having identifiable differences to the stored wake word; and code that activates, based on the potential wake word and responsive to determining that the potential wake word is associated with the stored wake word, a digital assistant associated with the information handling device. | One embodiment provides a method, including: receiving, at an information handling device, user input comprising a potential wake word; determining, using a processor, whether the potential wake word is associated with a stored wake word; and responsive to determining that the potential wake word is associated with the stored wake word, activating, based on the potential wake word, a digital assistant associated with the information handling device. Other aspects are described and claimed.1. A method, comprising:
receiving, at an information handling device, user input comprising a potential wake word; determining, using a processor, whether the potential wake word is associated with a stored wake word, wherein the potential wake word comprises a wake word sharing a degree of phonetic similarity and having identifiable differences to the stored wake word; and responsive to determining that the potential wake word is associated with the stored wake word, activating, based on the potential wake word, a digital assistant associated with the information handling device. 2. The method of claim 1, wherein the potential wake word is a phonetic variation of the stored wake word. 3. The method of claim 1, wherein the determining comprises comparing the potential wake word against a list of known variants associated with the stored wake word. 4. The method of claim 3, wherein the activating comprises activating the digital assistant responsive to matching the potential wake word with at least one known variant from the list of known variants. 5. The method of claim 1, wherein the receiving comprises receiving the potential wake word multiple times. 6. The method of claim 5, wherein the determining comprises determining that the potential wake word is associated with the stored wake word based upon the receiving the potential wake word multiple times in a predetermined time period. 7. The method of claim 5, further comprising storing the potential wake word in a list of known variants associated with the stored wake word responsive to receiving the potential wake word multiple times. 8. The method of claim 1, wherein the determining comprises identifying, using location data, a location of a user providing the user input and associating the potential wake word with the stored wake word based upon the identified location. 9. The method of claim 1, wherein the determining comprises identifying whether additional user input, provided after the potential wake word, comprises a user command. 
10. The method of claim 9, responsive to identifying that a user command is provided after the potential wake word, associating the potential wake word with a stored wake word. 11. An information handling device, comprising:
a processor; a memory device that stores instructions executable by the processor to: receive user input comprising a potential wake word; determine whether the potential wake word is associated with a stored wake word, wherein the potential wake word comprises a wake word sharing a degree of phonetic similarity and having identifiable differences to the stored wake word; and responsive to determining that the potential wake word is associated with the stored wake word, activate, based on the potential wake word, a digital assistant associated with the information handling device. 12. The information handling device of claim 11, wherein the potential wake word is a phonetic variation of the stored wake word. 13. The information handling device of claim 11, wherein the instructions executable by the processor to determine comprise instructions executable by the processor to compare the potential wake word against a list of known variants associated with the stored wake word. 14. The information handling device of claim 13, wherein the instructions executable by the processor to activate comprise instructions executable by the processor to activate the digital assistant responsive to matching the potential wake word with at least one known variant from the list of known variants. 15. The information handling device of claim 11, wherein the instructions executable by the processor to receive comprise instructions executable by the processor to receive the potential wake word multiple times. 16. The information handling device of claim 15, wherein the instructions executable by the processor to determine comprise instructions executable by the processor to determine that the potential wake word is associated with the stored wake word based upon the instructions executable by the processor to receive the potential wake word multiple times in a predetermined time period. 17. 
The information handling device of claim 15, wherein the instructions are further executable by the processor to store the potential wake word in a list of known variants associated with the stored wake word responsive to receiving the potential wake word multiple times. 18. The information handling device of claim 11, wherein the instructions executable by the processor to determine comprise instructions executable by the processor to identify whether additional user input, provided after the potential wake word, comprises a user command. 19. The information handling device of claim 18, wherein the instructions are further executable by the processor to associate, responsive to identifying that a user command is provided after the potential wake word, the potential wake word with a stored wake word. 20. A product, comprising:
a storage device that stores code, the code being executable by a processor and comprising: code that receives user input comprising a potential wake word; code that determines whether the potential wake word is associated with a stored wake word, wherein the potential wake word comprises a wake word sharing a degree of phonetic similarity and having identifiable differences to the stored wake word; and code that activates, based on the potential wake word and responsive to determining that the potential wake word is associated with the stored wake word, a digital assistant associated with the information handling device. | 2,600 |
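Application 15,676,106's claims above describe matching a potential wake word that shares phonetic similarity with a stored wake word, checking a list of known variants (claims 3-4), and promoting a variant heard multiple times (claims 5-7). The sketch below illustrates that flow; the `difflib`-based similarity measure, the thresholds, and the example words are assumptions, not the patent's method.

```python
# Illustrative sketch of wake-word variant matching. The similarity metric,
# thresholds, and vocabulary are assumed for demonstration only.

from difflib import SequenceMatcher
from collections import Counter

STORED = "alexa"
KNOWN_VARIANTS = {"aleksa", "alexia"}   # cf. claim 3: list of known variants
SIMILARITY_THRESHOLD = 0.75
REPEAT_THRESHOLD = 2                    # cf. claims 5-7: heard multiple times

_heard = Counter()

def is_wake_word(potential):
    """Return True when the potential wake word should activate the assistant."""
    word = potential.lower()
    if word == STORED or word in KNOWN_VARIANTS:
        return True
    # Phonetically similar, but with identifiable differences (cf. claim 1).
    similar = SequenceMatcher(None, word, STORED).ratio() >= SIMILARITY_THRESHOLD
    if similar:
        _heard[word] += 1
        if _heard[word] >= REPEAT_THRESHOLD:
            KNOWN_VARIANTS.add(word)    # store as a known variant (claim 7)
        return True
    return False

print(is_wake_word("Alexia"))   # True: already a known variant
print(is_wake_word("banana"))   # False: not similar enough to the stored word
```

A production detector would compare phoneme sequences from a speech recognizer rather than character strings, but the control flow (exact match, variant list, similarity fallback, promotion on repetition) mirrors the claim structure.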
10,528 | 10,528 | 15,749,585 | 2,685 | Methods and systems are presented in this disclosure for performing multi-frequency communications during wellbore operations. Communication of data related to a state of a wellbore (e.g., characteristics and/or locations of one or more fluids flowing along a casing in the wellbore during a cementing operation) can be performed simultaneously or sequentially involving a plurality of nodes located along the casing in the wellbore, wherein each of the nodes is configured to use a different frequency for communication. In this way, a higher information throughput and more reliable communication can be achieved during wellbore operations. | 1. A method for performing multi-frequency communications in wellbore operations, the method comprising:
performing data communication involving a plurality of nodes located along a casing in a wellbore, and by using multiple frequencies for the data communication. 2. The method of claim 1, wherein the data communication involving the plurality of nodes is performed simultaneously. 3. The method of claim 1, further comprising initiating one or more operations related to the wellbore based on the communicated data. 4. The method of claim 1, further comprising:
configuring a first node of the plurality of nodes to use a first resonant frequency for the data communication; and configuring a second node of the plurality of nodes to use a second resonant frequency for the data communication, the first resonant frequency is lower than the second resonant frequency. 5. The method of claim 4, wherein:
a first propagation range for the data communication associated with the first node is longer than a second propagation range for the data communication associated with the second node; or a first bandwidth for the data communication associated with the first node is smaller than a second bandwidth for the data communication associated with the second node. 6. (canceled) 7. The method of claim 4, wherein:
configuring the first node comprises wrapping first turns of coil around the casing and configuring the second node comprises wrapping second turns of coil around the casing, the first turns of coil comprises more turns of coil around the casing than the second turns of coil; or the method further comprises configuring the first node and the second node as toroidally wound coils. 8. (canceled) 9. The method of claim 7, further comprising:
physically separating a first core material of the first node from a second core material of the second node; or configuring the first node and the second node to share a common core material. 10. (canceled) 11. The method of claim 1, wherein performing the data communication involving the plurality of nodes comprises:
performing the data communication by simultaneously transmitting, from a set of adjacent nodes of the plurality of nodes, signals having non-overlapping frequency bandwidths. 12. The method of claim 1, further comprising:
obtaining, from the plurality of nodes, information about one or more fluids flowing through an annulus region between the casing and a reservoir formation of the wellbore. 13. A system for performing multi-frequency communications in wellbore operations, the system comprising:
a plurality of nodes located along a casing in a wellbore configured to perform data communication using multiple frequencies. 14. The system of claim 13, wherein the plurality of nodes is configured to simultaneously perform the data communication. 15. The system of claim 13, further comprising:
at least one processor configured to process the data communicated by the plurality of nodes, wherein the at least one processor is further configured to initiate one or more operations of the wellbore based on the processed data. 16. The system of claim 13, wherein:
a first node of the plurality of nodes is configured to use a first resonant frequency for the data communication; a second node of the plurality of nodes is configured to use a second resonant frequency for the data communication; and the first resonant frequency is lower than the second resonant frequency. 17. The system of claim 16, wherein a first propagation range for the data communication associated with the first node is longer than a second propagation range for the data communication associated with the second node. 18. The system of claim 16, wherein a first bandwidth for the data communication associated with the first node is smaller than a second bandwidth for the data communication associated with the second node. 19. The system of claim 16, wherein:
the first node is configured by wrapping first turns of coil around the casing, the second node is configured by wrapping second turns of coil around the casing, and the first turns of coil comprises more turns of coil around the casing than the second turns of coil; or the first node and the second node are configured as toroidally wound coils. 20. (canceled) 21. The system of claim 19, wherein:
a first core material of the first node is physically separated from a second core material of the second node; or the first node and the second node are configured to share a common core material. 22. (canceled) 23. The system of claim 13, wherein a set of adjacent nodes of the plurality of nodes is configured to perform the data communication by simultaneously transmitting signals having non-overlapping frequency bandwidths. 24. The system of claim 13, wherein the at least one processor is further configured to:
obtain, from the plurality of nodes, information about one or more fluids flowing through an annulus region between the casing and a reservoir formation of the wellbore; and initiate the one or more operations related to cementing of the wellbore based on the obtained information. 25. The system of claim 24, wherein the one or more fluids are pumped into the annulus region using a pump. | Methods and systems are presented in this disclosure for performing multi-frequency communications during wellbore operations. Communication of data related to a state of a wellbore (e.g., characteristics and/or locations of one or more fluids flowing along a casing in the wellbore during a cementing operation) can be performed simultaneously or sequentially involving a plurality of nodes located along the casing in the wellbore, wherein each of the nodes is configured to use a different frequency for communication. In this way, a higher information throughput and more reliable communication can be achieved during wellbore operations.1. A method for performing multi-frequency communications in wellbore operations, the method comprising:
performing data communication involving a plurality of nodes located along a casing in a wellbore, and by using multiple frequencies for the data communication. 2. The method of claim 1, wherein the data communication involving the plurality of nodes is performed simultaneously. 3. The method of claim 1, further comprising initiating one or more operations related to the wellbore based on the communicated data. 4. The method of claim 1, further comprising:
configuring a first node of the plurality of nodes to use a first resonant frequency for the data communication; and configuring a second node of the plurality of nodes to use a second resonant frequency for the data communication, the first resonant frequency is lower than the second resonant frequency. 5. The method of claim 4, wherein:
a first propagation range for the data communication associated with the first node is longer than a second propagation range for the data communication associated with the second node; or a first bandwidth for the data communication associated with the first node is smaller than a second bandwidth for the data communication associated with the second node. 6. (canceled) 7. The method of claim 4, wherein:
configuring the first node comprises wrapping first turns of coil around the casing and configuring the second node comprises wrapping second turns of coil around the casing, the first turns of coil comprises more turns of coil around the casing than the second turns of coil; or the method further comprises configuring the first node and the second node as toroidally wound coils. 8. (canceled) 9. The method of claim 7, further comprising:
physically separating a first core material of the first node from a second core material of the second node; or configuring the first node and the second node to share a common core material. 10. (canceled) 11. The method of claim 1, wherein performing the data communication involving the plurality of nodes comprises:
performing the data communication by simultaneously transmitting, from a set of adjacent nodes of the plurality of nodes, signals having non-overlapping frequency bandwidths. 12. The method of claim 1, further comprising:
obtaining, from the plurality of nodes, information about one or more fluids flowing through an annulus region between the casing and a reservoir formation of the wellbore. 13. A system for performing multi-frequency communications in wellbore operations, the system comprising:
a plurality of nodes located along a casing in a wellbore configured to perform data communication using multiple frequencies. 14. The system of claim 13, wherein the plurality of nodes is configured to simultaneously perform the data communication. 15. The system of claim 13, further comprising:
at least one processor configured to process the data communicated by the plurality of nodes, wherein the at least one processor is further configured to initiate one or more operations of the wellbore based on the processed data. 16. The system of claim 13, wherein:
a first node of the plurality of nodes is configured to use a first resonant frequency for the data communication; a second node of the plurality of nodes is configured to use a second resonant frequency for the data communication; and the first resonant frequency is lower than the second resonant frequency. 17. The system of claim 16, wherein a first propagation range for the data communication associated with the first node is longer than a second propagation range for the data communication associated with the second node. 18. The system of claim 16, wherein a first bandwidth for the data communication associated with the first node is smaller than a second bandwidth for the data communication associated with the second node. 19. The system of claim 16, wherein:
the first node is configured by wrapping first turns of coil around the casing, the second node is configured by wrapping second turns of coil around the casing, and the first turns of coil comprises more turns of coil around the casing than the second turns of coil; or the first node and the second node are configured as toroidally wound coils. 20. (canceled) 21. The system of claim 19, wherein:
a first core material of the first node is physically separated from a second core material of the second node; or the first node and the second node are configured to share a common core material. 22. (canceled) 23. The system of claim 13, wherein a set of adjacent nodes of the plurality of nodes is configured to perform the data communication by simultaneously transmitting signals having non-overlapping frequency bandwidths. 24. The system of claim 13, wherein the at least one processor is further configured to:
obtain, from the plurality of nodes, information about one or more fluids flowing through an annulus region between the casing and a reservoir formation of the wellbore; and initiate the one or more operations related to cementing of the wellbore based on the obtained information. 25. The system of claim 24, wherein the one or more fluids are pumped into the annulus region using a pump. | 2,600 |
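Claims 11 and 23 recite adjacent nodes transmitting simultaneously on non-overlapping frequency bandwidths, while claims 16-18 tie lower resonant frequencies to longer propagation range and narrower bandwidth. A minimal sketch of one way to allocate such bands along the casing follows; the band edges, node count, and round-robin scheme are illustrative assumptions, not taken from the claims.

```python
# Hedged sketch: allocate non-overlapping frequency bands to nodes ordered
# along a casing so adjacent nodes can transmit simultaneously (claims 11/23).

def allocate_bands(num_nodes, bands):
    """Assign a band to each node in casing order, round-robin, so that
    no two adjacent nodes share a band (requires len(bands) >= 2)."""
    if len(bands) < 2:
        raise ValueError("need at least two bands to separate adjacent nodes")
    return [bands[i % len(bands)] for i in range(num_nodes)]

def bands_overlap(a, b):
    """True if two (low_hz, high_hz) bands overlap."""
    return a[0] < b[1] and b[0] < a[1]

# Illustrative low-frequency bands (Hz); per claims 16-18, a lower band
# would propagate farther through the formation but carry less bandwidth.
bands = [(100, 200), (300, 400), (500, 600)]
assignment = allocate_bands(7, bands)

# Adjacent nodes never occupy overlapping bands, so simultaneous
# transmission from a set of adjacent nodes does not self-interfere.
assert all(not bands_overlap(assignment[i], assignment[i + 1])
           for i in range(len(assignment) - 1))
```

This only models the band bookkeeping; the physical coupling (turns of coil, toroidal windings, shared or separate core materials in claims 19 and 21) is outside the sketch.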
10,529 | 10,529 | 15,618,607 | 2,657 | Active speaker detection can include receiving speaker detection signals from a plurality of devices participating in an electronic meeting. Each speaker detection signal specifies a score indicating whether an active speaker is detected by a respective device of the plurality of devices that generates the speaker detection signal. Active speaker detection further can include determining, using a processor, a device of the plurality of devices that detects an active speaker based upon the speaker detection signals. | 1. A method for active speaker detection, comprising:
receiving speaker detection signals from a plurality of devices participating in an electronic meeting, wherein each speaker detection signal specifies a score indicating whether an active speaker is detected by a respective device of the plurality of devices that generates the speaker detection signal; and determining, using a processor, a device of the plurality of devices that detects an active speaker based upon the speaker detection signals. 2. The method of claim 1, wherein, in response to the determining, the method further comprises:
providing video received from the determined device to the plurality of devices during the electronic meeting. 3. The method of claim 1, wherein the speaker detection signals are received separately from audio data or video data received from the plurality of devices. 4. The method of claim 1, wherein at least two of the devices including the determined device are co-located. 5. The method of claim 1, wherein the determined device does not send audio data. 6. The method of claim 1, wherein a microphone of the determined device is muted. 7. The method of claim 1, wherein the score is a Boolean flag. 8. The method of claim 1, wherein the speaker detection signals specify only scores. 9. A system for active speaker detection, comprising:
a memory configured to store program code; and a processor coupled to the memory, wherein the processor, in response to executing the program code, is configured to initiate operations including: receiving speaker detection signals from a plurality of devices participating in an electronic meeting, wherein each speaker detection signal specifies a score indicating whether an active speaker is detected by a respective device of the plurality of devices that generates the speaker detection signal; and determining a device of the plurality of devices that detects an active speaker based upon the speaker detection signals. 10. The system of claim 9, wherein, in response to the determining, the processor is configured to initiate executable operations comprising:
providing video received from the determined device to the plurality of devices during the electronic meeting. 11. The system of claim 9, wherein the speaker detection signals are received separately from audio data or video data received from the plurality of devices. 12. The system of claim 9, wherein at least two of the devices including the determined device are co-located. 13. The system of claim 9, wherein the determined device does not send audio data. 14. The system of claim 9, wherein the determined device is muted. 15. The system of claim 9, wherein the score is a Boolean flag. 16. The system of claim 9, wherein the speaker detection signals specify only scores. 17. A computer program product for active speaker detection, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to initiate operations comprising:
receiving speaker detection signals from a plurality of devices participating in an electronic meeting, wherein each speaker detection signal specifies a score indicating whether an active speaker is detected by a respective device of the plurality of devices that generates the speaker detection signal; and determining a device of the plurality of devices that detects an active speaker based upon the speaker detection signals. 18. The computer program product of claim 17, wherein, in response to the determining, the processor is configured to initiate executable operations comprising:
providing video received from the determined device to the plurality of devices during the electronic meeting. 19. The computer program product of claim 17, wherein the speaker detection signals are received separately from audio data or video data received from the plurality of devices. 20. The computer program product of claim 17, wherein the determined device does not send audio data or has a microphone that is muted. | 2,600 |
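The determining step of claim 1 reduces to selecting, from per-device signals that carry only a score (which claims 7 and 15 allow to be a Boolean flag), the device currently detecting an active speaker. A minimal sketch follows; the signal shape and the tie-breaking rule are assumptions for illustration.

```python
# Hedged sketch of the selection step in claim 1: pick the device whose
# speaker detection signal reports an active speaker.

def select_active_speaker(signals):
    """signals: list of (device_id, score) tuples, where score may be a
    bool or a numeric confidence. Returns the device_id with the highest
    truthy score, or None if no device reports an active speaker."""
    candidates = [(score, device_id) for device_id, score in signals if score]
    if not candidates:
        return None
    # Highest score wins; equal scores fall back to device_id ordering
    # so the choice is deterministic.
    return max(candidates)[1]

signals = [("laptop", 0.2), ("phone", 0.9), ("room-cam", 0.0)]
assert select_active_speaker(signals) == "phone"
assert select_active_speaker([("a", False), ("b", False)]) is None
```

Because the signals specify only scores (claims 8 and 16), this selection can run even when the determined device sends no audio data or is muted, as claims 5, 6, 13, and 14 contemplate.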
10,530 | 10,530 | 15,290,757 | 2,684 | A sensor assembly with a sensing element sends a sensor signal from the sensing element to an attached process transmitter over sensor connection wires. The sensor assembly has memory circuitry for storing information related to the sensor assembly and interface circuitry that provides for digital communication of the stored information with the attached process transmitter. This digital communication is sent over the sensor connection wires.
a sensor assembly including:
sensor wires;
a sensor element conductively connected to the sensor wires so as to transmit, without amplification or signal processing, an analog sensor signal indicative of a process parameter sensed by the sensor element;
memory circuitry having configuration data related to the sensor assembly; and
interface circuitry electrically connected to the memory circuitry and reactively coupled to the sensor wires so as to transmit, over the sensor wires, a digital communication signal indicative of the configuration data; and
a process transmitter including:
sensor measurement circuitry conductively connected to the sensor wires so as to receive, over the sensor wires, the analog sensor signal, the sensor measurement circuitry configured to convert the received analog sensor signal to a digital sensor signal;
sensor communication circuitry reactively coupled to the sensor wires so as to receive, via the sensor wires, the digital communication signal indicative of the configuration data;
a microprocessor electrically connected to the sensor measurement circuitry so as to receive the digital sensor signal and electrically connected to the sensor communication circuitry so as to receive the digital communication signal indicative of the configuration data, the microprocessor configured to calculate, based on the received digital sensor signal and the configuration data, a calibrated measurement value indicative of the process parameter; and
communication circuitry electrically connected to the microprocessor and configured to transmit an output signal representative of the calibrated measurement value. 2. The process instrument of claim 1, wherein the interface circuitry of the sensor assembly is inductively coupled to the sensor wires. 3. The process instrument of claim 1, wherein the interface circuitry of the sensor assembly is capacitively coupled to the sensor wires. 4. The process instrument of claim 1, wherein the interface circuitry of the sensor assembly comprises:
a modulator conductively connected to the memory circuitry and reactively coupled to the sensor wires and configured to receive, from the memory circuitry, the configuration data related to the sensor assembly and to transmit, over the sensor wires, the digital communication indicative of the configuration data. 5. The process instrument of claim 1, wherein the digital communication signal is a first digital communication signal, the interface circuitry of the sensor assembly further comprising:
a demodulator conductively connected to the memory circuitry and reactively coupled to the sensor wires so as to receive, over the sensor wires, a second digital communication signal containing instructions. 6. The process instrument of claim 1, wherein the microprocessor of the process transmitter is a first microprocessor, the memory circuitry of the sensor assembly further comprising:
nonvolatile memory; a second microprocessor in electrical communication with both the nonvolatile memory and the interface circuitry, the second microprocessor configured to execute the instructions contained in the second digital communication signal, and to store and/or retrieve, in response to the received instructions, information to and/or from the nonvolatile memory, respectively. 7. The process instrument of claim 1, wherein the interface circuitry of the sensor assembly comprises:
a rectifier/power buffer that provides, using the digital communication, power to the interface circuitry and/or memory circuitry. 8. The process instrument of claim 1, wherein the memory circuitry and the interface circuitry of the sensor assembly comprise an RFID chip. 9. The process instrument of claim 1, wherein the sensor element of the sensor assembly comprises a temperature sensing element. 10. The process instrument of claim 1, wherein the sensor element of the sensor assembly comprises a thermocouple. 11. The process instrument of claim 1, wherein the sensor element of the sensor assembly comprises a Resistive Temperature Device (RTD). 12. The process instrument of claim 1, wherein the sensor assembly further comprises:
a bypass capacitor electrically connected across the sensing element. 13. The process instrument of claim 1, wherein the interface circuitry of the sensor assembly is reactively coupled to the sensor wires via a transformer having a first winding formed by the sensor wires. 14. The process instrument of claim 13, wherein the first winding formed by the sensor wires is in series with the sensor element. 15. The process instrument of claim 13, wherein the first winding formed by the sensor wires is in parallel with the sensor element. 16. A sensor assembly for use with a process transmitter, the sensor assembly comprising:
sensor wires; a sensor element conductively connected to the sensor wires so as to transmit, without amplification or signal processing, an analog sensor signal indicative of a process parameter sensed by the sensor element; memory circuitry having configuration data related to the sensor assembly; and interface circuitry electrically connected to the memory circuitry and reactively coupled to the sensor wires so as to transmit, over the sensor wires, a digital communication signal indicative of the configuration data. 17. The sensor assembly of claim 16, wherein the interface circuitry is inductively coupled to the sensor wires. 18. The sensor assembly of claim 16, wherein the interface circuitry is capacitively coupled to the sensor wires. 19. The sensor assembly of claim 16, wherein the interface circuitry comprises:
a modulator conductively connected to the memory circuitry and reactively coupled to the sensor wires and configured to receive, from the memory circuitry, the configuration data related to the sensor assembly and to transmit, over the sensor wires, the digital communication indicative of the configuration data. 20. The sensor assembly of claim 16, wherein the digital communication signal is a first digital communication signal, the interface circuitry further comprising:
a demodulator conductively connected to the memory circuitry and reactively coupled to the sensor wires so as to receive, over the sensor wires, a second digital communication signal containing instructions. 21. The sensor assembly of claim 16, wherein the microprocessor of the process transmitter is a first microprocessor, the memory circuitry further comprising:
nonvolatile memory; a second microprocessor in electrical communication with both the nonvolatile memory and the interface circuitry, the second microprocessor configured to execute the instructions contained in the second digital communication signal, and to store and/or retrieve, in response to the received instructions, information to and/or from the nonvolatile memory, respectively. 22. The sensor assembly of claim 16, wherein the interface circuitry comprises:
a rectifier/power buffer that provides, using the digital communication, power to the interface circuitry and/or memory circuitry. 23. The sensor assembly of claim 16, wherein the memory circuitry and the interface circuitry comprise an RFID chip. 24. The sensor assembly of claim 16, wherein the sensor element comprises a temperature sensing element. 25. The sensor assembly of claim 16, wherein the sensor element comprises a thermocouple. 26. The sensor assembly of claim 16, wherein the sensor element comprises a Resistive Temperature Device (RTD). 27. The sensor assembly of claim 16, further comprising:
a bypass capacitor electrically connected across the sensing element. 28. The sensor assembly of claim 16, wherein the interface circuitry is reactively coupled to the sensor wires via a transformer having a first winding formed by the sensor wires. 29. The sensor assembly of claim 28, wherein the first winding formed by the sensor wires is in series with the sensor element. 30. The sensor assembly of claim 28, wherein the first winding formed by the sensor wires is in parallel with the sensor element. 31. A sensor assembly for use with a process transmitter, the sensor assembly comprising:
sensor wires; a sensor element conductively connected to the sensor wires so as to transmit, without amplification or signal processing, an analog sensor signal indicative of a process parameter sensed by the sensor element; and an RFID chip having configuration data related to the sensor assembly, the RFID chip reactively coupled to the sensor wires so as to transmit, over the sensor wires, a digital communication signal indicative of the configuration data. 32. The sensor assembly of claim 31, wherein the sensor element comprises a temperature sensing element. 33. The sensor assembly of claim 31, wherein the sensor element comprises a thermocouple. 34. The sensor assembly of claim 31, wherein the sensor element comprises a Resistive Temperature Device (RTD). 35. The sensor assembly of claim 31, further comprising:
a bypass capacitor electrically connected across the sensing element. 36. A process transmitter for use with a sensor assembly having sensor wires, the process transmitter comprising:
sensor measurement circuitry configured to conductively connect to the sensor wires so as to receive, over the sensor wires, the analog sensor signal, the sensor measurement circuitry further configured to convert the received analog sensor signal to a digital sensor signal; an RFID chip configured to reactively couple to the sensor wires so as to receive, via the sensor wires, the digital communication signal indicative of configuration data of the sensor assembly, a microprocessor electrically connected to the sensor measurement circuitry so as to receive the digital sensor signal and electrically connected to the RFID chip so as to receive the digital communication signal indicative of the configuration data, the microprocessor configured to calculate, based on the received digital sensor signal and the configuration data, a calibrated measurement value indicative of a process parameter; and a communication port electrically connected to the microprocessor and configured to transmit an output signal representative of the calibrated measurement value. | 2,600 |
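Claims 1 and 36 recite a microprocessor that calculates a calibrated measurement value from the digitized sensor signal and configuration data read from the sensor assembly's memory (for example over the RFID link of claims 8 and 31). A minimal sketch of that calculation follows; the linear calibration model and the coefficient names are assumptions for illustration, not taken from the claims.

```python
# Hedged sketch of the calibration step in claims 1 and 36: convert a raw
# digitized reading into engineering units using per-sensor configuration
# data stored with the assembly.

def calibrate(raw_counts, config):
    """Apply per-sensor calibration read from the sensor assembly.

    config: dict with 'gain' (units per ADC count) and 'offset' (units),
    as might be held in the assembly's nonvolatile memory (claims 6/21).
    """
    return config["gain"] * raw_counts + config["offset"]

# Illustrative configuration for an RTD channel: ADC counts -> degrees C.
config = {"gain": 0.01, "offset": -40.0}
assert calibrate(5000, config) == 10.0
```

Storing the coefficients in the assembly rather than the transmitter is what lets a replacement sensor carry its own calibration, which is the practical point of the reactive coupling over the existing sensor wires.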
10,531 | 10,531 | 12,358,596 | 2,652 | A device for obtaining, storing and displaying information from a remote server has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information. | 1. A device for obtaining, storing and displaying personalized information from a first remote information server via the Internet over a Wireless Local Area Network (WLAN), the device comprising:
an antenna for transmitting and receiving digital data over the air; a WLAN transceiver coupled to said antenna for bi-directional packet-based digital data communication over the air, via said antenna, with another WLAN transceiver of the same type; a first memory coupled to said WLAN transceiver for storing digital data received by said WLAN transceiver; a display for visually presenting information, said display being coupled to said first memory for displaying information stored in the first memory; and a single enclosure housing said antenna, said WLAN transceiver, said first memory and said display, said single enclosure having dimensions and an appearance of a conventional flat, wall-mountable framed picture, wherein said device is addressable in the WLAN, and said device is operative for communicating over the WLAN with the first remote information server via the Internet for receiving information therefrom, and for storing and displaying the received information. 2. The device according to claim 1, wherein said device is operative for automatically and periodically communicating with said first remote information server at all times when said device is in operation. 3. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the WLAN. 4. The device according to claim 1, wherein said device is operative to send the digital address and a request for information, and to obtain and display the received information from the first remote information server in response to the sent request for information. 5. The device according to claim 1, wherein said device is configured for wall mounting in a residential building, and the first remote information server is located outside the residential building. 6. The device according to claim 1, wherein said WLAN transceiver is operative to communicate substantially according to IEEE802.11 standard. 7. 
The device according to claim 1, wherein said display is alphanumeric. 8. The device according to claim 1, wherein said display is based on Field Emission Display (FED) or Cathode Ray Tube (CRT) technologies. 9. The device according to claim 1, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 10. The device according to claim 1, wherein said display is an analog video display. 11. The device according to claim 10, wherein said display is coupled to said first memory via a composite video interface, and the composite video interface is one of a PAL and an NTSC interface. 12. The device according to claim 1, wherein said display is a digital display. 13. The device according to claim 12, wherein said display is an HDTV display. 14. The device according to claim 1, wherein said first memory is non-volatile. 15. The device according to claim 1, wherein said first memory is based on one of a Flash memory, a DRAM memory and a RAM memory. 16. The device according to claim 1, further comprising: an AC power plug for connecting to an AC power outlet; and a power supply connected to said AC power plug to be powered by power supplied from the AC power outlet, said power supply comprising an AC to DC converter for DC powering at least part of said device. 17. The device according to claim 1, further comprising a battery, wherein said device is operative to be at least in part powered from said battery, and wherein said battery is a primary or rechargeable battery. 18. The device according to claim 1, for use with wiring connected for concurrently carrying a power signal and an information signal over the same wires, said device further comprising a connector for connecting to the wiring, and said device being further operative to be at least in part powered from the power signal. 19. The device according to claim 18, wherein the wiring is a telephone wire pair and the information signal is an analog telephone signal. 20. 
The device according to claim 1, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said antenna and said display. 21. The device according to claim 20, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 22. The device according to claim 21, wherein the user control of operation of said device comprises at least one of:
turning said device on and off; resetting said device to default values; changing the contrast of said display; changing the brightness of said display; changing the zoom of images presented on said display; selecting a language; and selecting the information to be presented on said display. 23. The device according to claim 20, wherein the firmware includes at least part of a web client for communication with, and accessing information stored in, the remote information server. 24. The device according to claim 23, wherein said web client includes at least part of a graphical web browser. 25. The device according to claim 23, wherein said graphical web browser is based on Windows Internet Explorer. 26. The device according to claim 1, wherein the first remote information server is organized as a web-site including web pages as part of the World Wide Web (WWW), and is further identified by said device using the web site Uniform Resource Locator (URL). 27. The device according to claim 1, wherein said device is operative for communicating with a second remote information server via the Internet for receiving information therefrom, and for storing and displaying the received information from the second remote information server. 28. The device according to claim 27, wherein said device is adapted to communicate with the first and second remote information servers for obtaining selected and distinct information from each remote information server. 29. The device according to claim 27, wherein said device communicates with the first and second remote servers one at a time. 30. The device according to claim 1, wherein said device is operative for communicating with only a single remote information server external to the building. 31. The device according to claim 1, wherein communication with the first remote information server is based on Internet protocol suite. 32. 
The device according to claim 31, wherein communication with the first remote information server is based on TCP/IP. 33. The device according to claim 1, wherein said device is operative to initiate a communication with the first remote information server after a set period following a prior communication session. 34. The device according to claim 33, wherein the set period is at least one of: set by the user; set previously in said device; and set by the first remote information server in a previous communication session. 35. The device according to claim 1, wherein said device is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 36. The device according to claim 35, wherein the pre-set time of day is at least one of: set by the user; set previously in said device; and set by the first remote information server in a previous communication session. 37. The device according to claim 1, wherein the first remote information server is a dedicated remote information server, and said device is further operative to initiate a communication with the dedicated remote information server based on a user request. 38. The device according to claim 1, wherein the information received from said first remote information server is publicly available for free. 39. The device according to claim 38, wherein the information received from the first remote information server is also available in other mediums. 40. The device according to claim 39, wherein the first remote information server is also associated with one of: a newspaper; a radio station; and a television station. 41. The device according to claim 1, wherein the information received from the first remote information server and displayed relates to a future event, a planned activity or a forecast of a situation. 42. 
The device according to claim 41, wherein the information received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 43. The device according to claim 1, wherein the information received from the first remote information server and displayed relates to a past event, a past activity, or a former situation. 44. The device according to claim 43, wherein the information from the first remote information server includes at least one of: sports event results; a stock quote; lottery results; and a currency exchange rate. 45. The device according to claim 1, wherein the information received from the first remote information server and displayed relates to an ongoing or recent event, a current or recent activity, or an existing situation. 46. The device according to claim 45, wherein the information received from the first remote information server includes at least one of: public news; and traffic related information. 47. The device according to claim 1, wherein said device is operative to delay a communication with the first remote information server for a set period, in the case that such communication cannot be properly executed in due time. 48. The device according to claim 47, wherein the set period is at least one of: set by the user; set previously in said device; and set by the first remote information server in a previous communication session. 49. The device according to claim 1, wherein said device is operative to communicate with an alternative second distinct remote information server via the Internet, in the case the communication with the first remote information server cannot be properly executed in due time or after a set delay. 50. The device according to claim 1, wherein the communication with the first remote information server is based on spread spectrum modulation. 51. 
The device according to claim 50, wherein the spread spectrum modulation is a DSSS (Direct Sequence Spread Spectrum) modulation. 52. The device according to claim 1, wherein the communication with the first remote information server uses a license-free radio frequency band. 53. The device according to claim 52, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 54. The device according to claim 1, wherein said device is dedicated only for obtaining, storing and displaying information from the first remote information server. 55. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the WLAN, and wherein said second memory is operative for storing a digital address for uniquely identifying said device in a Local Area Network (LAN) or on the Internet. 56. The device according to claim 55, wherein the digital address is either a MAC address or an IP address. 57. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the WLAN, and wherein said second memory is operative for storing personalized information. 58. The device according to claim 57, wherein the personalized information is a user name or a password. 59. The device according to claim 57, wherein the personalized information is set by the user. 60. The device according to claim 57, wherein the personalized information is set by the remote information server. 61. The device according to claim 57, wherein the personalized information is associated with the physical geographical location of said device. 62. 
The device according to claim 57, wherein the personalized information comprises a code identifying specific information requested by the user, and wherein said device is operative to send the code to the first remote information server for obtaining and displaying specific information received from the first remote information server in response to the request sent. 63. The device according to claim 62, wherein the personalized information is set by the user or by the first remote information server. 64. The device according to claim 1, wherein said device is further operative to store and play digital audio data. 65. The device according to claim 1, wherein: said device is further operative to receive and display information from a connected unit; said device further comprises a connector coupled to said first memory for connecting to the unit; and said device is operative to receive digital data comprising information from the unit and to display the information on said display. 66. The device according to claim 65, wherein said device is further operative to transmit digital data to the unit. 67. The device according to claim 66, wherein the communication with the unit via said connector is based on a standard serial digital data bus. 68. The device according to claim 65, further comprising:
an AC power plug connectable to an AC power source; and
a power supply connected to said AC power plug to be powered by power supplied by the AC power source and to provide DC power for DC powering said first memory and said display, and
wherein said connector is further coupled to said power supply for DC powering the unit via said connector. 69. The device according to claim 68, wherein the unit is a battery operated unit, and said power supply further comprising a charger for charging the battery of the battery operated unit. 70. The device according to claim 68, wherein the unit is a handheld unit and said device is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 71. The device according to claim 70 in combination with a cradle for detachable mounting of the handheld unit, the handheld unit having a mating connector, wherein said connector is part of said cradle, and said connector connects with the handheld unit mating connector when the handheld unit is mounted in said cradle. 72. The device according to claim 71, wherein said handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone. 73. The device according to claim 68, wherein: said device is further adapted so that the unit is removably mechanically attached to said device; and said power supply is operative to supply DC power to the unit when mechanically attached via said connector. 74. The device according to claim 1, further operative as a clock for maintaining and displaying the time, and wherein said device is further operative to display the current hour, minute and second. 75. The device according to claim 74, further operative to display the current year, the current month and the current day of the month. 76. The device according to claim 74, further operative to display the time of a last information update or a last communication session. 77. The device according to claim 1, wherein said single enclosure is constructed to have at least one of the following:
a form substantially similar to that of a standard picture frame;
wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and
a shape to at least in part substitute for a standard picture frame. 78. The device according to claim 1, further comprising a digital to analog converter coupled to said first memory for converting digital data stored in said first memory to an analog signal. 79. The device according to claim 78, wherein the analog signal is an analog video signal for connecting to an analog video display. 80. The device according to claim 79, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 81. The device according to claim 1, further supporting information pushing from the remote server, wherein said device is further operative to respond and communicate with the remote information server, when the remote information server initiates a communication session and said device receives information from the remote server and displays the received information. 82. The device according to claim 1, wherein said device is further operative to periodically retrieve and display information from said first memory. 83. The device according to claim 1, wherein said device is further adapted to communicate with the Internet via a gateway. 84. A device dedicated for obtaining, storing and displaying information requested by a user from a single dedicated remote information server via the Internet using a digital data cellular network, the device comprising:
a cellular antenna for transmitting and receiving digital data over the air via the digital data cellular network; a cellular modem coupled to said cellular antenna for bi-directional packet-based digital data communication with the digital data cellular network; a first memory coupled to said cellular modem for storing information contained in digital data received from the Internet via said digital data cellular network; a display for visually presenting information, said display being coupled to said first memory for displaying information stored in the first memory; a second memory for storing a digital address uniquely identifying the user; a third memory for storing data identifying the information requested by the user; a fourth memory for storing a digital address uniquely identifying the device in the Internet; and a single enclosure having a thin flat picture appearance housing said cellular antenna, said cellular modem, said first memory and said display, wherein said device is operative to obtain from the user, and store in said second memory, the digital address uniquely identifying the user and to obtain, and store in said third memory, the information requested by the user, and wherein said device is further operative for communicating over the cellular network and via the Internet with the single dedicated remote information server, for transmitting to the server the digital address uniquely identifying the user, the data identifying the information requested by the user and the digital address uniquely identifying the device in the Internet, for receiving requested information from the server, and for storing the requested information in said first memory and displaying the received information from said first memory. 85. The device according to claim 84, wherein said device is operative for automatically and periodically communicating with the dedicated remote information server at all times when said device is in operation. 86. 
The device according to claim 84, wherein said device is configured for wall mounting in a residential building, and the dedicated remote information server is located outside the residential building. 87. The device according to claim 84, wherein said display is alphanumeric. 88. The device according to claim 84, wherein said display is based on Field Emission Display (FED) or Cathode Ray Tube (CRT) technologies. 89. The device according to claim 84, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 90. The device according to claim 84, wherein said display is an analog video display. 91. The device according to claim 90, wherein said display is coupled to said first memory via a composite video interface, and the composite video interface is one of a PAL interface and an NTSC interface. 92. The device according to claim 84, wherein said display is a digital display. 93. The device according to claim 92, wherein said display is an HDTV display. 94. The device according to claim 84, wherein at least one of said memories is a non-volatile memory. 95. The device according to claim 84, wherein at least one of said memories is based on one of a Flash memory, a DRAM memory and a RAM memory. 96. The device according to claim 84, further comprising:
an AC power plug for connecting to an AC power outlet; and a power supply connected to said AC power plug to be powered by power supplied from the AC power outlet, said power supply comprising an AC to DC converter for DC powering at least part of said device. 97. The device according to claim 84, further comprising a battery, wherein said device is operative to be at least in part powered from said battery, and wherein said battery is a primary or rechargeable battery. 98. The device according to claim 84, for use with wiring connected for concurrently carrying a power signal and an information signal over the same wires, said device further comprising a connector for connecting to the wiring, and said device being further operative to be at least in part powered from the power signal. 99. The device according to claim 84, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said cellular antenna and said display. 100. The device according to claim 99, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 101. The device according to claim 100, wherein the user control of operation of said device comprises at least one of:
turning said device on and off; resetting said device to default values; changing the contrast of said display; changing the brightness of said display; changing the zoom of images presented on said display; selecting a language; and selecting the information to be presented on said display. 102. The device according to claim 100, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the dedicated remote information server. 103. The device according to claim 102, wherein said at least part of a web client includes at least part of a graphical web browser. 104. The device according to claim 102, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 105. The device according to claim 84, wherein the dedicated remote information server is organized as a web-site including web pages as part of the World Wide Web (WWW), and is further identified by said device using the web site Uniform Resource Locator (URL). 106. The device according to claim 84, wherein communication with the dedicated remote information server is based on Internet protocol suite. 107. The device according to claim 106, wherein communication with the dedicated remote information server is based on TCP/IP. 108. The device according to claim 84, wherein said device is operative to initiate a communication with the dedicated remote information server on a daily basis at a pre-set time of day (TOD). 109. The device according to claim 108, wherein the pre-set time of day is at least one of: set by the user; set previously in said device; and set by the dedicated remote information server in a previous communication session. 110. The device according to claim 84, wherein the information received from the dedicated remote information server is publicly available for free. 111. 
The device according to claim 84, wherein the information received from the dedicated remote information server and displayed relates to a future event, a planned activity, or a forecast of a situation. 112. The device according to claim 111, wherein the information received from the dedicated remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 113. The device according to claim 84, wherein the information received from the dedicated remote information server and displayed relates to a past event, a past activity, or a former situation. 114. The device according to claim 113, wherein the information from the dedicated remote information server includes at least one of: sports event results; a stock quote; lottery results; and a currency exchange rate. 115. The device according to claim 84, wherein the information received from the dedicated remote information server and displayed relates to an ongoing or recent event, a current or recent activity, or an existing situation. 116. The device according to claim 115, wherein the information received from the dedicated remote information server includes at least one of: public news; and traffic related information. 117. The device according to claim 84, wherein said device is operative to delay a communication with the dedicated remote information server for a set period, in the case that such communication cannot be properly executed within a selected time period. 118. The device according to claim 117, wherein the set period is at least one of: set by the user; set previously in said device; and set by the dedicated remote information server in a previous communication session. 119. The device according to claim 84, wherein the communication with the dedicated remote information server is based on spread spectrum modulation. 120. 
The device according to claim 119, wherein the spread spectrum modulation is a DSSS (Direct Sequence Spread Spectrum) modulation. 121. The device according to claim 84, wherein the cellular network uses a licensed radio frequency band. 122. The device according to claim 84, wherein the cellular network uses a license-free radio frequency band. 123. The device according to claim 122, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 124. The device according to claim 84, wherein the digital address uniquely identifying the device in the Internet is either based on a MAC address or based on an IP address. 125. The device according to claim 84, wherein the digital address uniquely identifying the user is a user name or a password. 126. The device according to claim 84, wherein said device is further operative to store and play digital audio data. 127. The device according to claim 84, wherein:
said device is operative to receive and display information from a connected unit; said device further comprises a connector coupled to said first memory for connecting to the unit; and said device is operative to receive digital data comprising information from the unit and to display the information on said display. 128. The device according to claim 127, wherein said device is further operative to transmit digital data to the unit. 129. The device according to claim 127, wherein the communication with the unit via said connector is based on a standard serial digital data bus. 130. The device according to claim 127, further comprising:
an AC power plug connectable to an AC power source; and a power supply connected to said AC power plug to be powered by power supplied by the AC power source and to provide DC power for DC powering said first memory and said display, and
wherein said connector is further coupled to said power supply for DC powering the unit via said connector. 131. The device according to claim 130, wherein the unit is a battery operated unit having a battery, and said power supply further comprising a charger for charging the battery of the battery operated unit. 132. The device according to claim 130, wherein the unit is a handheld unit and said device is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 133. The device according to claim 132, in combination with a cradle for detachable mounting of the handheld unit, wherein:
the handheld unit has a mating connector; said connector is part of said cradle; and said connector connects with the handheld unit mating connector when the handheld unit is mounted in said cradle. 134. The device according to claim 84, further operative as a clock for maintaining and displaying the current time, and wherein said device is further operative to display the current hour, minute and second. 135. The device according to claim 134, further operative to display the current year, the current month and the current day of the month. 136. The device according to claim 134, further operative to display the time of a last information update or a last communication session. 137. The device according to claim 84, wherein said single enclosure is constructed to have at least one of the following:
a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 138. The device according to claim 84, further comprising a digital to analog converter coupled to said first memory for converting digital data stored in said first memory to an analog signal. 139. The device according to claim 138, wherein the analog signal is an analog video signal for connecting to an analog video display. 140. The device according to claim 139, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL format or NTSC format. | A device for obtaining, storing and displaying information from a remote server, the device has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information.1. A device for obtaining, storing and displaying personalized information from a first remote information server via the Internet over a Wireless Local Area Network (WLAN), the device comprising:
an antenna for transmitting and receiving digital data over the air; a WLAN transceiver coupled to said antenna for bi-directional packet-based digital data communication over the air, via said antenna, with another WLAN transceiver of the same type; a first memory coupled to said WLAN transceiver for storing digital data received by said WLAN transceiver; a display for visually presenting an information, said display being coupled to said memory for displaying information stored in the first memory; and a single enclosure housing said antenna, said WLAN transceiver, said first memory and said display, said single enclosure having dimensions and an appearance of a conventional flat, wall-mountable framed picture, wherein said device is addressable in the WLAN, and said device is operative for communicating over the WLAN with the first remote information server via the Internet for receiving information therefrom, and for storing and displaying the received information. 2. The device according to claim 1, wherein said device is operative for automatically and periodically communicating with said first remote information server at all times when said device is in operation. 3. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the WLAN. 4. The device according to claim 1, wherein said device is operative to send the digital address and a request for information, and to obtain and display the received information from the first remote information server in response to the sent request for information. 5. The device according to claim 1, wherein said device is configured for wall mounting in a residential building, and the first remote information server is located outside the residential building. 6. The device according to claim 1, wherein said WLAN transceiver is operative to communicate substantially according to IEEE802.11 standard. 7. 
The device according to claim 1, wherein said display is alphanumeric. 8. The device according to claim 1, wherein said display is based on Field Emission Display (FED) or Cathode Ray Tube (CRT) technologies. 9. The device according to claim 1, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 10. The device according to claim 1, wherein said display is an analog video display. 11. The device according to claim 10, wherein said display is coupled to said first memory via a composite video interface, and the composite video interface is one of a PAL and an NTSC interface. 12. The device according to claim 1, wherein said display is a digital display. 13. The device according to claim 12, wherein said display is an HDTV display. 14. The device according to claim 1, wherein said first memory is non-volatile. 15. The device according to claim 1, wherein said first memory is based on one of a Flash memory, a DRAM memory and a RAM memory. 16. The device according to claim 1, further comprising: an AC power plug for connecting to an AC power outlet; and a power supply connected to said AC power plug to be powered by power supplied from the AC power outlet, said power supply comprising an AC to DC converter for DC powering at least part of said device. 17. The device according to claim 1, further comprising a battery, wherein said device is operative to be at least in part powered from said battery, and wherein said battery is a primary or rechargeable battery. 18. The device according to claim 1, for use with wiring connected for concurrently carrying a power signal and an information signal over the same wires, said device further comprising a connector for connecting to the wiring, and said device being further operative to be at least in part powered from the power signal. 19. The device according to claim 18, wherein the wiring is a telephone wire pair and the information signal is an analog telephone signal. 20. 
The device according to claim 1, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said antenna and said display. 21. The device according to claim 20, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 22. The device according to claim 21, wherein the user control of operation of said device comprises at least one of:
turning said device on and off; resetting said device to default values; changing the contrast of said display; changing the brightness of said display; changing the zoom of images presented on said display; selecting a language; and selecting the information to be presented on said display. 23. The device according to claim 20, wherein the firmware includes at least part of a web client for communication with, and accessing information stored in, the remote information server. 24. The device according to claim 23, wherein said web client includes at least part of a graphical web browser. 25. The device according to claim 23, wherein said graphical web browser is based on Windows Internet Explorer. 26. The device according to claim 1, wherein the first remote information server is organized as a web-site including web pages as part of the World Wide Web (WWW), and is further identified by said device using the web site Uniform Resource Locator (URL). 27. The device according to claim 1, wherein said device is operative for communicating with a second remote information server via the Internet for receiving information therefrom, and for storing and displaying the received information from the second remote information server. 28. The device according to claim 27, wherein said device is adapted to communicate with the first and second remote information servers for obtaining selected and distinct information from each remote information server. 29. The device according to claim 27, wherein said device communicates with the first and second remote servers one at a time. 30. The device according to claim 1, wherein said device is operative for communicating with only a single remote information server external to the building. 31. The device according to claim 1, wherein communication with the first remote information server is based on the Internet protocol suite. 32. 
The device according to claim 31, wherein communication with the first remote information server is based on TCP/IP. 33. The device according to claim 1, wherein said device is operative to initiate a communication with the first remote information server after a set period following a prior communication session. 34. The device according to claim 33, wherein the set period is at least one of: set by the user; set previously in said device; and set by the first remote information server in a previous communication session. 35. The device according to claim 1, wherein said device is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 36. The device according to claim 35, wherein the pre-set time of day is at least one of: set by the user; set previously in said device; and set by the remote information server in a previous communication session. 37. The device according to claim 1, wherein the first remote information server is a dedicated remote information server, and said device is further operative to initiate a communication with the dedicated remote information server based on a user request. 38. The device according to claim 1, wherein the information received from said first remote information server is publicly available for free. 39. The device according to claim 38, wherein the information received from the first remote information server is also available in other mediums. 40. The device according to claim 39, wherein the first remote information server is also associated with one of: a newspaper; a radio station; and a television station. 41. The device according to claim 1, wherein the information received from the first remote information server and displayed relates to a future event, a planned activity or a forecast of a situation. 42. 
The device according to claim 41, wherein the information received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 43. The device according to claim 1, wherein the information received from the first remote information server and displayed relates to a past event, a past activity, or a former situation. 44. The device according to claim 43, wherein the information from the first remote information server includes at least one of: sports event results; a stock quote; lottery results; and a currency exchange rate. 45. The device according to claim 1, wherein the information received from the first remote information server and displayed relates to an ongoing or recent event, a current or recent activity, or an existing situation. 46. The device according to claim 45, wherein the information received from the first remote information server includes at least one of: public news; and traffic related information. 47. The device according to claim 1, wherein said device is operative to delay a communication with the first remote information server for a set period, in the case that such communication cannot be properly executed in due time. 48. The device according to claim 47, wherein the set period is at least one of: set by the user; set previously in said device; and set by the first remote information server in a previous communication session. 49. The device according to claim 1, wherein said device is operative to communicate with an alternative second distinct remote information server via the Internet, in the case the communication with the first remote information server cannot be properly executed in due time or after a set delay. 50. The device according to claim 1, wherein the communication with the first remote information server is based on spread spectrum modulation. 51. 
The device according to claim 50, wherein the spread spectrum modulation is a DSSS (Direct Sequence Spread Spectrum) modulation. 52. The device according to claim 1, wherein the communication with the first remote information server uses a license-free radio frequency band. 53. The device according to claim 52, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 54. The device according to claim 1, wherein said device is dedicated only for obtaining, storing and displaying information from the first remote information server. 55. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the WLAN, and wherein said second memory is operative for storing a digital address for uniquely identifying said device in a Local Area Network (LAN) or on the Internet. 56. The device according to claim 55, wherein the digital address is either a MAC address or an IP address. 57. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the WLAN, and wherein said second memory is operative for storing personalized information. 58. The device according to claim 57, wherein the personalized information is a user name or a password. 59. The device according to claim 57, wherein the personalized information is set by the user. 60. The device according to claim 57, wherein the personalized information is set by the remote information server. 61. The device according to claim 57, wherein the personalized information is associated with the physical geographical location of said device. 62. 
The device according to claim 57, wherein the personalized information comprises a code identifying specific information requested by the user, and wherein said device is operative to send the code to the first remote information server for obtaining and displaying specific information received from the first remote information server in response to the request sent. 63. The device according to claim 62, wherein the personalized information is set by the user or by the first remote information server. 64. The device according to claim 1, wherein said device is further operative to store and play digital audio data. 65. The device according to claim 1, wherein: said device is further operative to receive and display information from a connected unit; said device further comprises a connector coupled to said first memory for connecting to the unit; and said device is operative to receive digital data comprising information from the unit and to display the information on said display. 66. The device according to claim 65, wherein said device is further operative to transmit digital data to the unit. 67. The device according to claim 66, wherein the communication with the unit via said connector is based on a standard serial digital data bus. 68. The device according to claim 65, further comprising:
an AC power plug connectable to an AC power source;
a power supply connected to said AC power plug to be powered by power supplied by the AC power source and to provide DC power for DC powering said first memory and said display, and
wherein said connector is further coupled to said power supply for DC powering the unit via said connector. 69. The device according to claim 68, wherein the unit is a battery operated unit, and said power supply further comprising a charger for charging the battery of the battery operated unit. 70. The device according to claim 68, wherein the unit is a handheld unit and said device is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 71. The device according to claim 70 in combination with a cradle for detachable mounting of the handheld unit, the handheld unit having a mating connector, wherein said connector is part of said cradle, and said connector connects with the handheld unit mating connector when the handheld unit is mounted in said cradle. 72. The device according to claim 71, wherein said handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone. 73. The device according to claim 68, wherein: said device is further adapted so that the unit is removably mechanically attached to said device; and said power supply is operative to supply DC power to the unit when mechanically attached via said connector. 74. The device according to claim 1, further operative as a clock for maintaining and displaying the time, and wherein said device is further operative to display the current hour, minute and second. 75. The device according to claim 74, further operative to display the current year, the current month and the current day of the month. 76. The device according to claim 74, further operative to display the time of a last information update or a last communication session. 77. The device according to claim 1, wherein said single enclosure is constructed to have at least one of the following:
a form substantially similar to that of a standard picture frame;
wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and
a shape to at least in part substitute for a standard picture frame. 78. The device according to claim 1, further comprising a digital to analog converter coupled to said first memory for converting digital data stored in said first memory to an analog signal. 79. The device according to claim 78, wherein the analog signal is an analog video signal for connecting to an analog video display. 80. The device according to claim 79, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 81. The device according to claim 1, further supporting information pushing from the remote server, wherein said device is further operative to respond and communicate with the remote information server, when the remote information server initiates a communication session and said device receives information from the remote server and displays the received information. 82. The device according to claim 1, wherein said device is further operative to periodically retrieve and display information from said first memory. 83. The device according to claim 1, wherein said device is further adapted to communicate with the Internet via a gateway. 84. A device dedicated for obtaining, storing and displaying information requested by a user from a single dedicated remote information server via the Internet using a digital data cellular network, the device comprising:
a cellular antenna for transmitting and receiving digital data over the air via the digital data cellular network; a cellular modem coupled to said cellular antenna for bi-directional packet-based digital data communication with the digital data cellular network; a first memory coupled to said cellular modem for storing information contained in digital data received from the Internet via said digital data cellular network; a display for visually presenting information, said display being coupled to said first memory for displaying information stored in the first memory; a second memory for storing a digital address uniquely identifying the user; a third memory for storing data identifying the information requested by the user; a fourth memory for storing a digital address uniquely identifying the device in the Internet; and a single enclosure having a thin flat picture appearance housing said cellular antenna, said cellular modem, said first memory and said display, wherein said device is operative to obtain from the user, and store in said second memory, the digital address uniquely identifying the user and to obtain, and store in said third memory, the information requested by the user, and wherein said device is further operative for communicating over the cellular network and via the Internet with the single dedicated remote information server, for transmitting to the server the digital address uniquely identifying the user, the data identifying the information requested by the user and the digital address uniquely identifying the device in the Internet, for receiving requested information from the server, and for storing the requested information in said first memory and displaying the received information from said first memory. 85. The device according to claim 84, wherein said device is operative for automatically and periodically communicating with the dedicated remote information server at all times when said device is in operation. 86. 
The device according to claim 84, wherein said device is configured for wall mounting in a residential building, and the dedicated remote information server is located outside the residential building. 87. The device according to claim 84, wherein said display is alphanumeric. 88. The device according to claim 84, wherein said display is based on Field Emission Display (FED) or Cathode Ray Tube (CRT) technologies. 89. The device according to claim 84, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 90. The device according to claim 84, wherein said display is an analog video display. 91. The device according to claim 90, wherein said display is coupled to said first memory via a composite video interface, and the composite video interface is one of a PAL interface and an NTSC interface. 92. The device according to claim 84, wherein said display is a digital display. 93. The device according to claim 92, wherein said display is an HDTV display. 94. The device according to claim 84, wherein at least one of said memories is a non-volatile memory. 95. The device according to claim 84, wherein at least one of said memories is based on one of a Flash memory, a DRAM memory and a RAM memory. 96. The device according to claim 84, further comprising:
an AC power plug for connecting to an AC power outlet; and a power supply connected to said AC power plug to be powered by power supplied from the AC power outlet, said power supply comprising an AC to DC converter for DC powering at least part of said device. 97. The device according to claim 84, further comprising a battery, wherein said device is operative to be at least in part powered from said battery, and wherein said battery is a primary or rechargeable battery. 98. The device according to claim 84, for use with wiring connected for concurrently carrying a power signal and an information signal over the same wires, said device further comprising a connector for connecting to the wiring, and said device being further operative to be at least in part powered from the power signal. 99. The device according to claim 84, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said cellular antenna and said display. 100. The device according to claim 99, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 101. The device according to claim 100, wherein the user control of operation of said device comprises at least one of:
turning said device on and off; resetting said device to default values; changing the contrast of said display; changing the brightness of said display; changing the zoom of images presented on said display; selecting a language; and selecting the information to be presented on said display. 102. The device according to claim 100, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the dedicated remote information server. 103. The device according to claim 102, wherein said at least part of a web client includes at least part of a graphical web browser. 104. The device according to claim 102, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 105. The device according to claim 84, wherein the dedicated remote information server is organized as a web-site including web pages as part of the World Wide Web (WWW), and is further identified by said device using the web site Uniform Resource Locator (URL). 106. The device according to claim 84, wherein communication with the dedicated remote information server is based on the Internet protocol suite. 107. The device according to claim 106, wherein communication with the dedicated remote information server is based on TCP/IP. 108. The device according to claim 84, wherein said device is operative to initiate a communication with the dedicated remote information server on a daily basis at a pre-set time of day (TOD). 109. The device according to claim 108, wherein the pre-set time of day is at least one of: set by the user; set previously in said device; and set by the dedicated remote information server in a previous communication session. 110. The device according to claim 84, wherein the information received from the dedicated remote information server is publicly available for free. 111. 
The device according to claim 84, wherein the information received from the dedicated remote information server and displayed relates to a future event, a planned activity, or a forecast of a situation. 112. The device according to claim 111, wherein the information received from the dedicated remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 113. The device according to claim 84, wherein the information received from the dedicated remote information server and displayed relates to a past event, a past activity, or a former situation. 114. The device according to claim 113, wherein the information from the dedicated remote information server includes at least one of: sports event results; a stock quote; lottery results; and a currency exchange rate. 115. The device according to claim 84, wherein the information received from the dedicated remote information server and displayed relates to an ongoing or recent event, a current or recent activity, or an existing situation. 116. The device according to claim 115, wherein the information received from the dedicated remote information server includes at least one of: public news; and traffic related information. 117. The device according to claim 84, wherein said device is operative to delay a communication with the dedicated remote information server for a set period, in the case that such communication cannot be properly executed within a selected time period. 118. The device according to claim 117, wherein the set period is at least one of: set by the user; set previously in said device; and set by the dedicated remote information server in a previous communication session. 119. The device according to claim 84, wherein the communication with the dedicated remote information server is based on spread spectrum modulation. 120. 
The device according to claim 119, wherein the spread spectrum modulation is a DSSS (Direct Sequence Spread Spectrum) modulation. 121. The device according to claim 84, wherein the cellular network uses a licensed radio frequency band. 122. The device according to claim 84, wherein the cellular network uses a license-free radio frequency band. 123. The device according to claim 122, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 124. The device according to claim 84, wherein the digital address uniquely identifying the device in the Internet is either based on a MAC address or based on an IP address. 125. The device according to claim 84, wherein the digital address uniquely identifying the user is a user name or a password. 126. The device according to claim 84, wherein said device is further operative to store and play digital audio data. 127. The device according to claim 84, wherein:
said device is operative to receive and display information from a connected unit; said device further comprises a connector coupled to said first memory for connecting to the unit; and said device is operative to receive digital data comprising information from the unit and to display the information on said display. 128. The device according to claim 127, wherein said device is further operative to transmit digital data to the unit. 129. The device according to claim 127, wherein the communication with the unit via said connector is based on a standard serial digital data bus. 130. The device according to claim 127, further comprising:
an AC power plug connectable to an AC power source; and a power supply connected to said AC power plug to be powered by power supplied by the AC power source and to provide DC power for DC powering said first memory and said display, and
wherein said connector is further coupled to said power supply for DC powering the unit via said connector. 131. The device according to claim 130, wherein the unit is a battery operated unit having a battery, and said power supply further comprising a charger for charging the battery of the battery operated unit. 132. The device according to claim 130, wherein the unit is a handheld unit and said device is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 133. The device according to claim 132, in combination with a cradle for detachable mounting of the handheld unit, wherein:
the handheld unit has a mating connector; said connector is part of said cradle; and said connector connects with the handheld unit mating connector when the handheld unit is mounted in said cradle. 134. The device according to claim 84, further operative as a clock for maintaining and displaying the current time, and wherein said device is further operative to display the current hour, minute and second. 135. The device according to claim 134, further operative to display the current year, the current month and the current day of the month. 136. The device according to claim 134, further operative to display the time of a last information update or a last communication session. 137. The device according to claim 84, wherein said single enclosure is constructed to have at least one of the following:
a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 138. The device according to claim 84, further comprising a digital to analog converter coupled to said first memory for converting digital data stored in said first memory to an analog signal. 139. The device according to claim 138, wherein the analog signal is an analog video signal for connecting to an analog video display. 140. The device according to claim 139, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL format or NTSC format. | 2,600 |
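The communication-timing limitations recited for the picture-frame device (initiating a session a set period after the prior session, initiating a session daily at a pre-set time of day, and deferring a session by a set delay when it cannot complete) can be sketched as a few scheduling helpers. This is an illustrative sketch only, not the patented implementation; all function and parameter names are invented for this example.

```python
from datetime import datetime, timedelta

def next_poll_time(last_session: datetime, period: timedelta) -> datetime:
    # Initiate a communication a set period after the prior session
    # (cf. the "set period following a prior communication session" limitation).
    return last_session + period

def next_daily_poll(now: datetime, hour: int, minute: int = 0) -> datetime:
    # Initiate a communication daily at a pre-set time of day (TOD).
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        # Today's slot already passed, so schedule for tomorrow.
        candidate += timedelta(days=1)
    return candidate

def reschedule_after_failure(attempt_time: datetime, delay: timedelta) -> datetime:
    # Delay the session by a set period when it cannot be properly executed.
    return attempt_time + delay
```

The "set period" and pre-set TOD could come from the user, from device defaults, or from a value the server supplied in a previous session, matching the alternatives the claims enumerate.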
10,532 | 10,532 | 15,975,239 | 2,641 | The present invention concerns a femtocell device ( 1 ) and method at a femtocell device ( 1 ) comprising a first interface ( 23 ) for communicating to a macrocell ( 4 ) and a second interface ( 25 ) for communicating to a broadband network ( 2 ), the method comprising the steps of, when communication to the broadband network ( 2 ) is inactive, receiving a paging message from the macrocell ( 4 ) destined to a mobile device ( 3 ) connected to the femtocell device ( 1 ), and notifying the mobile device ( 3 ) to redirect to the macrocell ( 4 ). | 1-6. (canceled) 7. A method at a femtocell device comprising a first interface for communicating to a cellular radio network and a second interface for communicating to a broadband network, a cellular device being connected to said femtocell device at said first interface, said method comprising, when communication at said femtocell device to said broadband network becomes inactive,
maintaining said connection with said at least one cellular device, and on reception of a mobile network call setup from said cellular device, rejecting said call setup and notifying said cellular device to redirect to a base station of a cellular network serving a macrocell, wherein when communication at said femtocell device to said broadband network is inactive, and if no cellular device is connected to said femtocell device, deactivating said second interface. | The present invention concerns a femtocell device ( 1 ) and method at a femtocell device ( 1 ) comprising a first interface ( 23 ) for communicating to a macrocell ( 4 ) and a second interface ( 25 ) for communicating to a broadband network ( 2 ), the method comprising the steps of, when communication to the broadband network ( 2 ) is inactive, receiving a paging message from the macrocell ( 4 ) destined to a mobile device ( 3 ) connected to the femtocell device ( 1 ), and notifying the mobile device ( 3 ) to redirect to the macrocell ( 4 ).1-6. (canceled) 7. A method at a femtocell device comprising a first interface for communicating to a cellular radio network and a second interface for communicating to a broadband network, a cellular device being connected to said femtocell device at said first interface, said method comprising, when communication at said femtocell device to said broadband network becomes inactive,
maintaining said connection with said at least one cellular device, and on reception of a mobile network call setup from said cellular device, rejecting said call setup and notifying said cellular device to redirect to a base station of a cellular network serving a macrocell, wherein when communication at said femtocell device to said broadband network is inactive, and if no cellular device is connected to said femtocell device, deactivating said second interface. | 2,600 |
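The control flow of the femtocell method claim (when broadband goes inactive: keep the cellular connection, reject any call setup and redirect the handset to a macrocell base station, and deactivate the broadband interface only when no cellular device is connected) can be sketched as a small state machine. All class and method names below are hypothetical; this is a reading of the claim, not the actual device firmware.

```python
class FemtocellDevice:
    """Minimal sketch of the claimed femtocell fallback behavior."""

    def __init__(self):
        self.broadband_active = True        # second interface link state
        self.connected_devices = set()      # cellular devices on the first interface
        self.second_interface_enabled = True

    def on_broadband_inactive(self):
        # Broadband (second interface) communication becomes inactive.
        self.broadband_active = False
        if not self.connected_devices:
            # No cellular device connected: deactivate the second interface.
            self.second_interface_enabled = False

    def on_call_setup(self, device):
        if self.broadband_active:
            return "accept"
        # Broadband is down: the connection with the device is maintained,
        # but the call setup is rejected and the device is notified to
        # redirect to a base station of the cellular network (macrocell).
        return "reject_and_redirect_to_macrocell"
```

Note how the claim distinguishes the two inactive-broadband cases: with a device attached, the femtocell stays up to deliver the redirect; with none attached, it can simply power down the broadband interface.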
10,533 | 10,533 | 14,790,874 | 2,628 | Systems, devices, and methods described herein provide an input device having keys operatively coupled with light devices that emit light out of the keys. The input device is associated with different modes of operation. The light devices emit the light out of the keys responsive to a selected mode of operation being a first mode of operation. The light devices stop emitting the light out of the keys responsive to the selected mode of operation being a different, second mode of operation. The light devices are activated to indicate the selected mode of operation. One or more processors perform first functions associated with the keys during the first mode of operation and perform different, second functions associated with the same keys during the second mode of operation. | 1. A system comprising:
an input device including a key operatively coupled with a first light device and a second light device; and one or more processors activating the first light device to emit a first light out of the key responsive to a selected mode of operation being a first mode of operation and activating the second light device to emit a second light out of the key responsive to the selected mode of operation being a different, second mode of operation. 2. The system of claim 1, wherein the key includes a light transmissive window through which the first light is emitted from the key. 3. The system of claim 2, wherein the key includes one or more light dams that prevent the second light from emitting from the key through the light transmissive window. 4. The system of claim 1, wherein the one or more processors perform a first function associated with the key responsive to actuation of the key during the first mode of operation and the one or more processors perform a different, second function associated with the key responsive to actuation of the key during the second mode of operation. 5. The system of claim 1, wherein the key includes a first light transmissive window through which the first light is emitted from the key and a second light transmissive window through which the second light is emitted from the key. 6. The system of claim 1, further comprising an output device displaying output to a user responsive to actuation of the key, wherein the one or more processors direct the output device to display a first output responsive to actuation of the key during operation in the first mode of operation and the one or more processors direct the output device to display a different, second output responsive to actuation of the key during operation in the second mode of operation. 7. 
The system of claim 6, wherein the one or more processors direct the output device to display an alphanumeric symbol as the first output responsive to the actuation of the key during operation in the first mode of operation and the one or more processors direct the output device to perform an editing function of alphanumeric symbols displayed on the output device responsive to the actuation of the key during operation in the second mode of operation. 8. An input device comprising:
keys associated with different functions in different modes of operation performed by a system such that actuation of the keys during a first mode of operation causes the system to perform first functions and actuation of the keys during a second mode of operation causes the system to perform different, second functions; first light devices and second light devices operatively coupled with the keys; and one or more processors activating the first light devices to emit first light out of the keys responsive to operation in the first mode of operation, the first light devices emitting the first light to indicate the first mode of operation, the first light devices not emitting the first light out of the keys responsive to the keys operating in the second mode of operation, the one or more processors also activating the second light devices to emit a different, second light out of the keys responsive to the keys operating during the second mode of operation. 9. The input device of claim 8, wherein the keys include light transmissive windows through which the first light is emitted from the keys during the first mode of operation. 10. (canceled) 11. The input device of claim 8, wherein the keys include first light transmissive windows through which the first light is emitted from the keys and second light transmissive windows through which the second light is emitted. 12. The input device of claim 11, wherein the keys include light dams that prevent the first light from emitting from the keys through the second light transmissive windows and that prevent the second light from emitting from the keys through the first light transmissive windows. 13. The input device of claim 8, wherein both the first light devices emit the first light out of the keys and the second light devices emit the second light out of the keys responsive to operation in a third mode of operation. 14. 
The input device of claim 8, wherein a first key of the keys is actuated during operation in the first mode of operation to direct the system to display a first output on an output device and the first key is actuated during operation in the second mode of operation to direct the system to display a different, second output on the output device. 15. A method comprising:
forming a first light transmissive window in an exterior of a key in an input device of a system; placing a first light device and a second light device in the key of the input device such that the first light device emits light out of the key through the first light transmissive window responsive to activation of the first light device; and operatively coupling the first light device and the second light device with one or more processors of the system such that the one or more processors control the activation of the first light device to emit the light out of the key responsive to a selected mode of operation being a first mode of operation and to stop emitting the light out of the key responsive to the selected mode of operation being a different, second mode of operation, the one or more processors controlling the second light device to emit light out of the key responsive to the selected mode of operation being the second mode of operation. 16. The method of claim 15, wherein forming the first light transmissive window includes etching an alphanumeric symbol through a thickness of the exterior of the key. 17. The method of claim 15, further comprising placing a key cover above the first light device and forming an exterior coating over the key cover and the first light device such that the key cover is between the first light device and the exterior coating. 18. The method of claim 17, further comprising forming an interior coating on the key cover such that the exterior coating is formed on the interior coating. 19. The method of claim 15, further comprising:
forming a second light transmissive window in the exterior coating of the key; and placing the second light device in the key such that the second light device emits light out of the key through the second light transmissive window. 20. The method of claim 19, wherein placing the first light device includes placing the first light device on a first side of a light dam and placing the second light device includes placing the second light device on an opposite, second side of the light dam such that the light dam prevents the light emitted by the first light device from exiting the key through the second light transmissive window. | Systems, devices, and methods described herein provide an input device having keys operatively coupled with light devices that emit light out of the keys. The input device is associated with different modes of operation. The light devices emit the light out of the keys responsive to a selected mode of operation being a first mode of operation. The light devices stop emitting the light out of the keys responsive to the selected mode of operation being a different, second mode of operation. The light devices are activated to indicate the selected mode of operation. One or more processors perform first functions associated with the keys during the first mode of operation and perform different, second functions associated with the same keys during the second mode of operation.1. A system comprising:
an input device including a key operatively coupled with a first light device and a second light device; and one or more processors activating the first light device to emit a first light out of the key responsive to a selected mode of operation being a first mode of operation and activating the second light device to emit a second light out of the key responsive to the selected mode of operation being a different, second mode of operation. 2. The system of claim 1, wherein the key includes a light transmissive window through which the first light is emitted from the key. 3. The system of claim 2, wherein the key includes one or more light dams that prevent the second light from emitting from the key through the light transmissive window. 4. The system of claim 1, wherein the one or more processors perform a first function associated with the key responsive to actuation of the key during the first mode of operation and the one or more processors perform a different, second function associated with the key responsive to actuation of the key during the second mode of operation. 5. The system of claim 1, wherein the key includes a first light transmissive window through which the first light is emitted from the key and a second light transmissive window through which the second light is emitted from the key. 6. The system of claim 1, further comprising an output device displaying output to a user responsive to actuation of the key, wherein the one or more processors direct the output device to display a first output responsive to actuation of the key during operation in the first mode of operation and the one or more processors direct the output device to display a different, second output responsive to actuation of the key during operation in the second mode of operation. 7. 
The system of claim 6, wherein the one or more processors direct the output device to display an alphanumeric symbol as the first output responsive to the actuation of the key during operation in the first mode of operation and the one or more processors direct the output device to perform an editing function of alphanumeric symbols displayed on the output device responsive to the actuation of the key during operation in the second mode of operation. 8. An input device comprising:
keys associated with different functions in different modes of operation performed by a system such that actuation of the keys during a first mode of operation causes the system to perform first functions and actuation of the keys during a second mode of operation causes the system to perform different, second functions; first light devices and second light devices operatively coupled with the keys; and one or more processors activating the first light devices to emit first light out of the keys responsive to operation in the first mode of operation, the first light devices emitting the first light to indicate the first mode of operation, the first light devices not emitting the first light out of the keys responsive to the keys operating in the second mode of operation, the one or more processors also activating the second light devices to emit a different, second light out of the keys responsive to the keys operating during the second mode of operation. 9. The input device of claim 8, wherein the keys include light transmissive windows through which the first light is emitted from the keys during the first mode of operation. 10. (canceled) 11. The input device of claim 8, wherein the keys include first light transmissive windows through which the first light is emitted from the keys and second light transmissive windows through which the second light is emitted. 12. The input device of claim 11, wherein the keys include light dams that prevent the first light from emitting from the keys through the second light transmissive windows and that prevent the second light from emitting from the keys through the first light transmissive windows. 13. The input device of claim 8, wherein both the first light devices emit the first light out of the keys and the second light devices emit the second light out of the keys responsive to operation in a third mode of operation. 14. 
The input device of claim 8, wherein a first key of the keys is actuated during operation in the first mode of operation to direct the system to display a first output on an output device and the first key is actuated during operation in the second mode of operation to direct the system to display a different, second output on the output device. 15. A method comprising:
forming a first light transmissive window in an exterior of a key in an input device of a system; placing a first light device and a second light device in the key of the input device such that the first light device emits light out of the key through the first light transmissive window responsive to activation of the first light device; and operatively coupling the first light device and the second light device with one or more processors of the system such that the one or more processors control the activation of the first light device to emit the light out of the key responsive to a selected mode of operation being a first mode of operation and to stop emitting the light out of the key responsive to the selected mode of operation being a different, second mode of operation, the one or more processors controlling the second light device to emit light out of the key responsive to the selected mode of operation being the second mode of operation. 16. The method of claim 15, wherein forming the first light transmissive window includes etching an alphanumeric symbol through a thickness of the exterior of the key. 17. The method of claim 15, further comprising placing a key cover above the first light device and forming an exterior coating over the key cover and the first light device such that the key cover is between the first light device and the exterior coating. 18. The method of claim 17, further comprising forming an interior coating on the key cover such that the exterior coating is formed on the interior coating. 19. The method of claim 15, further comprising:
forming a second light transmissive window in the exterior coating of the key; and placing the second light device in the key such that the second light device emits light out of the key through the second light transmissive window. 20. The method of claim 19, wherein placing the first light device includes placing the first light device on a first side of a light dam and placing the second light device includes placing the second light device on an opposite, second side of the light dam such that the light dam prevents the light emitted by the first light device from exiting the key through the second light transmissive window. | 2,600 |
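The dual-backlight keyboard claims above (notably claims 1, 4, and 8 of application 14/790,874) describe a simple mode-dependent state machine: each key carries two light devices, exactly one of which emits per selected mode, and the same key performs a different function per mode. The following Python sketch paraphrases that behavior for illustration only; every class and function name here is invented, not taken from the patent, and no real keyboard firmware API is implied.

```python
from enum import Enum, auto

class Mode(Enum):
    FIRST = auto()
    SECOND = auto()

class LightDevice:
    """Minimal stand-in for one of a key's backlight emitters."""
    def __init__(self):
        self.emitting = False

class DualModeKey:
    """A key with two light devices and two mode-dependent functions,
    mirroring the claimed one-light-per-mode indication."""
    def __init__(self, first_fn, second_fn):
        self.first_light = LightDevice()
        self.second_light = LightDevice()
        self._functions = {Mode.FIRST: first_fn, Mode.SECOND: second_fn}
        self._mode = None  # no mode selected until set_mode() is called

    def set_mode(self, mode):
        # The first light indicates the first mode; the second light the
        # second mode. Selecting one mode extinguishes the other light.
        self._mode = mode
        self.first_light.emitting = (mode is Mode.FIRST)
        self.second_light.emitting = (mode is Mode.SECOND)

    def actuate(self):
        # The same physical key performs a different function per mode.
        return self._functions[self._mode]()

key = DualModeKey(first_fn=lambda: "type 'a'",
                  second_fn=lambda: "cut selection")
key.set_mode(Mode.FIRST)
assert key.first_light.emitting and not key.second_light.emitting
assert key.actuate() == "type 'a'"
key.set_mode(Mode.SECOND)
assert key.actuate() == "cut selection"
```

A third mode in which both lights emit (claim 13) would just be another branch in `set_mode` that turns both emitters on.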
10,534 | 10,534 | 14,724,255 | 2,619 | Systems, methods, apparatuses, and computer program products for digital badges, signs, or any other type of display are provided. One apparatus is directed to a display device that may include a microprocessor, a memory, and a display. The microprocessor and the memory are configured to control the display device to receive content from a network device, receive at least one rule used by the microprocessor to determine when the content is displayed, and display the received content on the display according to the at least one rule. | 1. A display device, comprising:
a microprocessor; a memory; and a display; wherein the microprocessor and the memory are configured to control the display device to
receive content from a network device,
receive at least one rule used by the microprocessor to determine when the content is displayed, and
display the received content on the display according to the at least one rule. 2. The display device according to claim 1, wherein the display device comprises an identification badge. 3. The display device according to claim 1, wherein the display device further comprises mounting hardware configured to mount the display device on clothing of a person. 4. The display device according to claim 1, wherein the display device further comprises a power source. 5. The display device according to claim 1, wherein the display device further comprises a loss prevention device configured to deactivate the display device when the display device is removed from a predefined premises. 6. The display device according to claim 1, wherein the display device further comprises at least one sensor configured to create data and/or observations and to transmit the data and/or observations to the network device in near real-time; or to store the data to transmit to the network device at a pre-determined time. 7. The display device according to claim 1, wherein the display device has a unique device identifier. 8. The display device according to claim 1, wherein the display device is configured to be associated with an individual, and to display at least a name of the individual. 9. The display device according to claim 1, wherein rules and/or events are defined for when and where the received content is displayed on the display of the display device. 10. A method, comprising:
receiving, at a microprocessor of a display device, content from a network device; receiving at least one rule used by the microprocessor to determine when and how the content is displayed on the display device; and displaying the received content on a display of the display device according to the at least one rule. 11. The method according to claim 10, wherein the displaying comprises displaying the received content on an identification badge. 12. The method according to claim 10, wherein the display device further comprises mounting hardware configured to mount the display device on clothing of a person. 13. The method according to claim 10, further comprising providing, by a power source, power to the display device. 14. The method according to claim 10, further comprising deactivating, by a loss prevention device, the display device when the display device is removed from a predefined premises. 15. The method according to claim 10, further comprising:
creating, by at least one sensor of the display device, data and/or observations; and transmitting the data and/or observations to the network device in near real-time or storing the data to transmit to the network device at a pre-determined time. 16. The method according to claim 10, wherein the display device has a unique device identifier. 17. The method according to claim 10, further comprising associating the display device with an individual, and displaying at least a name of the individual on the display device. 18. The method according to claim 10, further comprising defining rules and/or events for when and where the received content is displayed on the display of the display device. 19. A computer program, embodied on a non-transitory computer readable medium, the computer program configured to control a processor to perform a process, comprising:
receiving, at a microprocessor of a display device, content from a network device; receiving at least one rule used by the microprocessor to determine when and how the content is displayed on the display device; and
displaying the received content on a display of the display device according to the at least one rule. | Systems, methods, apparatuses, and computer program products for digital badges, signs, or any other type of display are provided. One apparatus is directed to a display device that may include a microprocessor, a memory, and a display. The microprocessor and the memory are configured to control the display device to receive content from a network device, receive at least one rule used by the microprocessor to determine when the content is displayed, and display the received content on the display according to the at least one rule.1. A display device, comprising:
a microprocessor; a memory; and a display; wherein the microprocessor and the memory are configured to control the display device to
receive content from a network device,
receive at least one rule used by the microprocessor to determine when the content is displayed, and
display the received content on the display according to the at least one rule. 2. The display device according to claim 1, wherein the display device comprises an identification badge. 3. The display device according to claim 1, wherein the display device further comprises mounting hardware configured to mount the display device on clothing of a person. 4. The display device according to claim 1, wherein the display device further comprises a power source. 5. The display device according to claim 1, wherein the display device further comprises a loss prevention device configured to deactivate the display device when the display device is removed from a predefined premises. 6. The display device according to claim 1, wherein the display device further comprises at least one sensor configured to create data and/or observations and to transmit the data and/or observations to the network device in near real-time; or to store the data to transmit to the network device at a pre-determined time. 7. The display device according to claim 1, wherein the display device has a unique device identifier. 8. The display device according to claim 1, wherein the display device is configured to be associated with an individual, and to display at least a name of the individual. 9. The display device according to claim 1, wherein rules and/or events are defined for when and where the received content is displayed on the display of the display device. 10. A method, comprising:
receiving, at a microprocessor of a display device, content from a network device; receiving at least one rule used by the microprocessor to determine when and how the content is displayed on the display device; and displaying the received content on a display of the display device according to the at least one rule. 11. The method according to claim 10, wherein the displaying comprises displaying the received content on an identification badge. 12. The method according to claim 10, wherein the display device further comprises mounting hardware configured to mount the display device on clothing of a person. 13. The method according to claim 10, further comprising providing, by a power source, power to the display device. 14. The method according to claim 10, further comprising deactivating, by a loss prevention device, the display device when the display device is removed from a predefined premises. 15. The method according to claim 10, further comprising:
creating, by at least one sensor of the display device, data and/or observations; and transmitting the data and/or observations to the network device in near real-time or storing the data to transmit to the network device at a pre-determined time. 16. The method according to claim 10, wherein the display device has a unique device identifier. 17. The method according to claim 10, further comprising associating the display device with an individual, and displaying at least a name of the individual on the display device. 18. The method according to claim 10, further comprising defining rules and/or events for when and where the received content is displayed on the display of the display device. 19. A computer program, embodied on a non-transitory computer readable medium, the computer program configured to control a processor to perform a process, comprising:
receiving, at a microprocessor of a display device, content from a network device; receiving at least one rule used by the microprocessor to determine when and how the content is displayed on the display device; and
displaying the received content on a display of the display device according to the at least one rule. | 2,600 |
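The digital-badge claims above (application 14/724,255) reduce to a small rule engine: the device receives content and at least one rule from the network, and renders the content only when the rules permit (claims 1, 9, and 10). The Python sketch below illustrates that flow under stated assumptions; the classes, the time-window rule, and the sample name are all hypothetical choices for demonstration, since the claims leave the rule format open.

```python
from datetime import datetime, time

class DisplayRule:
    """One received rule deciding when content may be shown.
    Here a simple time window; claim 9 also contemplates where."""
    def __init__(self, start, end):
        self.start, self.end = start, end

    def allows(self, now):
        return self.start <= now.time() <= self.end

class BadgeDevice:
    """Minimal model of the claimed display device: it receives
    content and rules from a network device, then renders the
    content only when every rule permits (claims 1 and 10)."""
    def __init__(self, device_id):
        self.device_id = device_id  # unique device identifier (claim 7)
        self.content = None
        self.rules = []

    def receive_content(self, content):
        self.content = content

    def receive_rule(self, rule):
        self.rules.append(rule)

    def render(self, now):
        if self.content and all(r.allows(now) for r in self.rules):
            return self.content
        return ""  # blank display when any rule forbids showing it

badge = BadgeDevice("badge-0001")
badge.receive_content("Jane Doe - Visitor")
badge.receive_rule(DisplayRule(time(9, 0), time(17, 0)))
assert badge.render(datetime(2024, 1, 2, 10, 30)) == "Jane Doe - Visitor"
assert badge.render(datetime(2024, 1, 2, 20, 0)) == ""
```

The loss-prevention feature of claim 5 would fit the same shape: a geofence rule whose `allows` check fails once the badge leaves the predefined premises, blanking or deactivating the display.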
10,535 | 10,535 | 15,209,188 | 2,694 | The functionality of a conventional mouse is extended to provide an extended number of simultaneously adjustable user interface parameters employing one or more user-removable modules. In an embodiment, a user interface for controlling an external device, such as a computer, includes a first user interface sensor configured with a housing. This first sensor generates a first plurality of signals responsive to movement of the housing relative to two orthogonal axes. A compartment is configured with the housing and is sized to receive the user-removable module. This user-removable module contains a second user interface sensor, which generates a second plurality of signals responsive to user manipulation. Output is provided responsive to signals generated by the first and second user interface sensors. In another embodiment, the housing of an extended functionality mouse itself serves as a module removable from a compartment provided in another physical device. | 1-38. (canceled) 39. A powered user interface device for controlling an external computing device, the powered user interface device comprising:
a housing configured to be movable on a surface; a rechargeable battery contained within the housing; a first sensor configured to detect a position change of the housing on the surface and provide position signals to indicate the position change of the housing on the surface; and a touch sensor carried by the housing, wherein the touch sensor is configured to detect at least two points of contact of at least two contacting fingers and provide touch signals responsive to the at least two contacting fingers, wherein the touch signals provide functionality for interacting with an application running on the external computing device, wherein the powered user interface device is configured to communicate with the external computing device via wireless communication. 40. The powered user interface device of claim 39, wherein the touch sensor is further configured to detect gestures and provide gesture signals responsive to the gestures. 41. The powered user interface device of claim 40, wherein the gesture signals provide further functionality for interacting with the application or another application running on the external computing device. 42. The powered user interface device of claim 39, further comprising a mouse button. 43. The powered user interface device of claim 39, wherein the external computing device comprises a personal computer. 44. A method performed by a powered user interface device for controlling an external computing device, the method comprising:
detecting a position change of the powered user interface device on a surface; providing, to the external computing device via wireless communication, position signals to indicate the position of the powered user interface device on the surface; detecting at least two points of contact of at least two contacting fingers; and providing, to the external computing device via wireless communication, touch signals responsive to the at least two contacting fingers, wherein the touch signals provide functionality for interacting with an application running on the external computing device. 45. The method of claim 44, further comprising:
detecting a gesture; and providing, to the external computing device via wireless communication, gesture signals responsive to the gesture. 46. The method of claim 45, wherein the gesture signals provide further functionality for interacting with the application or another application running on the external computing device. 47. The method of claim 44, further comprising:
detecting an interaction with a mouse button; and providing, to the external computing device via wireless communication, a signal responsive to the interaction with the mouse button. 48. The method of claim 44, wherein the external computing device comprises a personal computer. 49. A powered user interface device for controlling an external computing device, the powered user interface device comprising:
a housing configured to be movable on a surface; a rechargeable battery contained within the housing; a first sensor configured to detect a position change of the housing on the surface and provide position signals to indicate the position change of the housing on the surface; and a touch sensor carried by the housing, wherein the touch sensor is configured to detect gestures and provide gesture signals responsive to the gestures, wherein the gesture signals provide functionality for interacting with an application running on the external computing device, wherein the powered user interface device is configured to communicate with the external computing device via wireless communication. 50. The powered user interface device of claim 49, wherein the touch sensor is further configured to detect at least two points of contact of at least two contacting fingers and provide touch signals responsive to the at least two contacting fingers. 51. The powered user interface device of claim 50, wherein the touch signals provide further functionality for interacting with the application or another application running on the external computing device. 52. The powered user interface device of claim 49, further comprising a mouse button. 53. The powered user interface device of claim 49, wherein the external computing device comprises a personal computer. 54. A method performed by a powered user interface device for controlling an external computing device, the method comprising:
detecting a position change of the powered user interface device on a surface; providing, to the external computing device via wireless communication, position signals to indicate the position of the powered user interface device on the surface; detecting, by a touch sensor of the powered user interface device, a gesture; and providing, to the external computing device via wireless communication, gesture signals responsive to the gesture, wherein the gesture signals provide functionality for interacting with an application running on the external computing device. 55. The method of claim 54, further comprising:
detecting, by the touch sensor, at least two points of contact of at least two contacting fingers; and providing, to the external computing device via wireless communication, touch signals responsive to the at least two contacting fingers. 56. The method of claim 55, wherein the touch signals provide further functionality for interacting with the application or another application running on the external computing device. 57. The method of claim 54, further comprising:
detecting an interaction with a mouse button; and providing, to the external computing device via wireless communication, a signal responsive to the interaction with the mouse button. 58. The method of claim 54, wherein the external computing device comprises a personal computer. 59. A powered user interface device for controlling an external computing device, the powered user interface device comprising:
a housing; a rechargeable battery contained within the housing; a touch sensor carried by the housing, wherein the touch sensor includes a pressure sensor, and wherein the touch sensor is configured to:
detect at least two points of contact of at least two contacting fingers and provide a plurality of touch signals responsive to the at least two contacting fingers; and
detect gestures and provide gesture signals responsive to the gestures,
wherein the touch signals and gesture signals provide functionality for interacting with an application running on the external computing device, and wherein the powered user interface device is configured to communicate with the external computing device via wireless communication. 60. The powered user interface device of claim 59, wherein the external computing device comprises a personal computer. 61. The powered user interface device of claim 59, wherein the external computing device provides power to the powered user interface device via an electrical connection between the external computing device and the powered user interface device. 62. The powered user interface device of claim 61, wherein the electrical connection is provided via a cable. 63. The powered user interface device of claim 59, wherein the external computing device is configured to recharge the rechargeable battery of the powered user interface device via an electrical connection between the external computing device and the powered user interface device. | The functionality of a conventional mouse is extended to provide an extended number of simultaneously adjustable user interface parameters employing one or more user-removable modules. In an embodiment, a user interface for controlling an external device, such as a computer, includes a first user interface sensor configured with a housing. This first sensor generates a first plurality of signals responsive to movement of the housing relative to two orthogonal axes. A compartment is configured with the housing and is sized to receive the user-removable module. This user-removable module contains a second user interface sensor, which generates a second plurality of signals responsive to user manipulation. Output is provided responsive to signals generated by the first and second user interface sensors. 
In another embodiment, the housing of an extended functionality mouse itself serves as a module removable from a compartment provided in another physical device.1-38. (canceled) 39. A powered user interface device for controlling an external computing device, the powered user interface device comprising:
a housing configured to be movable on a surface; a rechargeable battery contained within the housing; a first sensor configured to detect a position change of the housing on the surface and provide position signals to indicate the position change of the housing on the surface; and a touch sensor carried by the housing, wherein the touch sensor is configured to detect at least two points of contact of at least two contacting fingers and provide touch signals responsive to the at least two contacting fingers, wherein the touch signals provide functionality for interacting with an application running on the external computing device, wherein the powered user interface device is configured to communicate with the external computing device via wireless communication. 40. The powered user interface device of claim 39, wherein the touch sensor is further configured to detect gestures and provide gesture signals responsive to the gestures. 41. The powered user interface device of claim 40, wherein the gesture signals provide further functionality for interacting with the application or another application running on the external computing device. 42. The powered user interface device of claim 39, further comprising a mouse button. 43. The powered user interface device of claim 39, wherein the external computing device comprises a personal computer. 44. A method performed by a powered user interface device for controlling an external computing device, the method comprising:
detecting a position change of the powered user interface device on a surface; providing, to the external computing device via wireless communication, position signals to indicate the position of the powered user interface device on the surface; detecting at least two points of contact of at least two contacting fingers; and providing, to the external computing device via wireless communication, touch signals responsive to the at least two contacting fingers, wherein the touch signals provide functionality for interacting with an application running on the external computing device. 45. The method of claim 44, further comprising:
detecting a gesture; and providing, to the external computing device via wireless communication, gesture signals responsive to the gesture. 46. The method of claim 45, wherein the gesture signals provide further functionality for interacting with the application or another application running on the external computing device. 47. The method of claim 44, further comprising:
detecting an interaction with a mouse button; and providing, to the external computing device via wireless communication, a signal responsive to the interaction with the mouse button. 48. The method of claim 44, wherein the external computing device comprises a personal computer. 49. A powered user interface device for controlling an external computing device, the powered user interface device comprising:
a housing configured to be movable on a surface; a rechargeable battery contained within the housing; a first sensor configured to detect a position change of the housing on the surface and provide position signals to indicate the position change of the housing on the surface; and a touch sensor carried by the housing, wherein the touch sensor is configured to detect gestures and provide gesture signals responsive to the gestures, wherein the gesture signals provide functionality for interacting with an application running on the external computing device, wherein the powered user interface device is configured to communicate with the external computing device via wireless communication. 50. The powered user interface device of claim 49, wherein the touch sensor is further configured to detect at least two points of contact of at least two contacting fingers and provide touch signals responsive to the at least two contacting fingers. 51. The powered user interface device of claim 50, wherein the touch signals provide further functionality for interacting with the application or another application running on the external computing device. 52. The powered user interface device of claim 49, further comprising a mouse button. 53. The powered user interface device of claim 49, wherein the external computing device comprises a personal computer. 54. A method performed by a powered user interface device for controlling an external computing device, the method comprising:
detecting a position change of the powered user interface device on a surface; providing, to the external computing device via wireless communication, position signals to indicate the position of the powered user interface device on the surface; detecting, by a touch sensor of the powered user interface device, a gesture; and providing, to the external computing device via wireless communication, gesture signals responsive to the gesture, wherein the gesture signals provide functionality for interacting with an application running on the external computing device. 55. The method of claim 54, further comprising:
detecting, by the touch sensor, at least two points of contact of at least two contacting fingers; and providing, to the external computing device via wireless communication, touch signals responsive to the at least two contacting fingers. 56. The method of claim 55, wherein the touch signals provide further functionality for interacting with the application or another application running on the external computing device. 57. The method of claim 54, further comprising:
detecting an interaction with a mouse button; and providing, to the external computing device via wireless communication, a signal responsive to the interaction with the mouse button. 58. The method of claim 54, wherein the external computing device comprises a personal computer. 59. A powered user interface device for controlling an external computing device, the powered user interface device comprising:
a housing; a rechargeable battery contained within the housing; a touch sensor carried by the housing, wherein the touch sensor includes a pressure sensor, and wherein the touch sensor is configured to:
detect at least two points of contact of at least two contacting fingers and provide a plurality of touch signals responsive to the at least two contacting fingers; and
detect gestures and provide gesture signals responsive to the gestures,
wherein the touch signals and gesture signals provide functionality for interacting with an application running on the external computing device, and wherein the powered user interface device is configured to communicate with the external computing device via wireless communication. 60. The powered user interface device of claim 59, wherein the external computing device comprises a personal computer. 61. The powered user interface device of claim 59, wherein the external computing device provides power to the powered user interface device via an electrical connection between the external computing device and the powered user interface device. 62. The powered user interface device of claim 61, wherein the electrical connection is provided via a cable. 63. The powered user interface device of claim 59, wherein the external computing device is configured to recharge the rechargeable battery of the powered user interface device via an electrical connection between the external computing device and the powered user interface device. | 2,600 |
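The device of claims 39-48 above reports three kinds of signals over a wireless link: position changes from a first sensor, multi-finger touch contacts, and gestures. A minimal sketch of that signal flow, with all class and message names invented for illustration (they are not from the patent):

```python
# Hypothetical sketch of the signal flow in claims 39-48: a battery-powered,
# wireless mouse-like device that reports position changes, multi-finger
# touch contacts, and recognized gestures to an external computing device.
# All names and message shapes here are illustrative assumptions.

class PoweredUserInterfaceDevice:
    def __init__(self):
        self.position = (0, 0)
        self.outbox = []  # stands in for the wireless link to the computer

    def _send(self, signal):
        self.outbox.append(signal)

    def move(self, dx, dy):
        """First sensor: detect a position change and emit position signals."""
        x, y = self.position
        self.position = (x + dx, y + dy)
        self._send({"type": "position", "dx": dx, "dy": dy})

    def touch(self, contacts):
        """Touch sensor: at least two points of contact yield touch signals."""
        if len(contacts) >= 2:
            self._send({"type": "touch", "points": list(contacts)})

    def gesture(self, name):
        """Touch sensor: a recognized gesture yields a gesture signal."""
        self._send({"type": "gesture", "name": name})


device = PoweredUserInterfaceDevice()
device.move(5, -3)
device.touch([(10, 12), (40, 12)])  # two contacting fingers
device.gesture("pinch")
```

The point of the sketch is only the separation the claims draw: position signals, touch signals, and gesture signals are distinct message types, each of which the external application can consume independently.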
10,536 | 10,536 | 15,547,074 | 2,645 | A method for enabling a charging of packet switched data transfer between an authorized customer and a network. The method includes transferring a total data volume via at least a first data path and a second data path between an equipment related to the authorized customer and a hybrid access aggregation point (HAAP) as an access to the network. Each of the at least a first data path and a second data path uses a different access technology for transferring its respective part of the total data volume. The total data volume is determined as a sum of the respective parts of the total data volume transferred in both the first path and the second path. The total data volume is used by a central charging unit for a charging. | 1-10. (canceled) 11. A method for enabling a charging of packet switched data transfer between an authorized customer and a network, the method providing:
transferring a total data volume via at least a first data path and a second data path between an equipment related to the authorized customer and a hybrid access aggregation point (HAAP) as an access to the network, each of the at least a first data path and a second data path using a different access technology for transferring its respective part of the total data volume; determining the total data volume as a sum of the respective parts of the total data volume transferred in both the first path and the second path; and using the total data volume by a central charging unit for a charging. 12. The method as recited in claim 11, wherein,
the network is the internet or an intranet, the equipment related to the authorized customer is a customer premise equipment (CPE), and the central charging unit is an Online Charging System (OCS). 13. The method as recited in claim 11, further comprising:
separately determining the respective parts of the total data volume transferred in both the first path and the second path via a respective separate counter unit; transferring an information about the separately determined respective parts of the total data volume from each respective separate counter unit to the central charging unit; and using the information transferred by the central charging unit to calculate the total data volume as a sum of the respective parts of the total data volume. 14. The method as recited in claim 13, further comprising:
periodically providing the central charging unit with a fixed maximum volume of data capacity per defined time period, the fixed maximum volume of data capacity per defined time period defining a threshold; and comparing the sum with the threshold in the central charging unit so that an action can be initiated based on whether the threshold is reached or passed. 15. The method as recited in claim 14, further comprising:
inducing, via the central charging unit, an adverse effect on an access facility of the authorized customer. 16. The method as recited in claim 15, wherein the adverse effect is reducing a data transfer rate assigned to the authorized customer. 17. The method as recited in claim 14, further comprising:
splitting the fixed maximum volume of data capacity per defined time period into a plurality of volume blocks via the central charging unit; and assigning the plurality of volume blocks in a sequence to a respective one of at least two counter units if a sum of all assigned volume blocks does not exceed the threshold. 18. The method as recited in claim 17, further comprising:
controlling, via a respective one of the at least two counter units, whether data transported in its respective path exceeds a limit of the assigned volume block, and, when the limit is reached or passed, assigning a new volume data block to a respective one of the at least two counter units. 19. The method according to claim 17, further comprising:
creating different volume block sizes via the central charging unit, wherein, a decision mechanism of the central charging unit is configured to calculate a next volume block size before assignment thereof to a respective one of the at least two counter units. 20. The method as recited in claim 19, wherein the next volume block size is calculated based on a momentary available maximum bandwidth of a single access technology. 21. A system for enabling a charging of packet switched data transfer between an authorized customer and a network, the system comprising:
an equipment related to the authorized customer; a hybrid access aggregation point (HAAP) as an access to the network; at least a first data path and a second data path which are respectively configured to transfer a part of a data volume, each of the first data path and the second data path using a different access technology for the data transfer; a device configured to determine a total data volume as a sum of the parts of data volume transferred via the first data path and the second data path; and a central charging unit configured to use the total data volume for a charging. 22. The system as recited in claim 21, wherein,
the network is the internet or an intranet, the equipment related to the authorized customer is a customer premise equipment (CPE), and the central charging unit is an Online Charging System (OCS). 23. The system as recited in claim 21, further comprising:
a separate counter unit arranged in each of the first path and the second path, each separate counter unit being configured to determine the part of data volume transferred in the respective first path or the second path and to inform the central charging unit of the part of data volume so determined. | A method for enabling a charging of packet switched data transfer between an authorized customer and a network. The method includes transferring a total data volume via at least a first data path and a second data path between an equipment related to the authorized customer and a hybrid access aggregation point (HAAP) as an access to the network. Each of the at least a first data path and a second data path uses a different access technology for transferring its respective part of the total data volume. The total data volume is determined as a sum of the respective parts of the total data volume transferred in both the first path and the second path. The total data volume is used by a central charging unit for a charging. 1-10. (canceled) 11. A method for enabling a charging of packet switched data transfer between an authorized customer and a network, the method providing:
transferring a total data volume via at least a first data path and a second data path between an equipment related to the authorized customer and a hybrid access aggregation point (HAAP) as an access to the network, each of the at least a first data path and a second data path using a different access technology for transferring its respective part of the total data volume; determining the total data volume as a sum of the respective parts of the total data volume transferred in both the first path and the second path; and using the total data volume by a central charging unit for a charging. 12. The method as recited in claim 11, wherein,
the network is the internet or an intranet, the equipment related to the authorized customer is a customer premise equipment (CPE), and the central charging unit is an Online Charging System (OCS). 13. The method as recited in claim 11, further comprising:
separately determining the respective parts of the total data volume transferred in both the first path and the second path via a respective separate counter unit; transferring an information about the separately determined respective parts of the total data volume from each respective separate counter unit to the central charging unit; and using the information transferred by the central charging unit to calculate the total data volume as a sum of the respective parts of the total data volume. 14. The method as recited in claim 13, further comprising:
periodically providing the central charging unit with a fixed maximum volume of data capacity per defined time period, the fixed maximum volume of data capacity per defined time period defining a threshold; and comparing the sum with the threshold in the central charging unit so that an action can be initiated based on whether the threshold is reached or passed. 15. The method as recited in claim 14, further comprising:
inducing, via the central charging unit, an adverse effect on an access facility of the authorized customer. 16. The method as recited in claim 15, wherein the adverse effect is reducing a data transfer rate assigned to the authorized customer. 17. The method as recited in claim 14, further comprising:
splitting the fixed maximum volume of data capacity per defined time period into a plurality of volume blocks via the central charging unit; and assigning the plurality of volume blocks in a sequence to a respective one of at least two counter units if a sum of all assigned volume blocks does not exceed the threshold. 18. The method as recited in claim 17, further comprising:
controlling, via a respective one of the at least two counter units, whether data transported in its respective path exceeds a limit of the assigned volume block, and, when the limit is reached or passed, assigning a new volume data block to a respective one of the at least two counter units. 19. The method according to claim 17, further comprising:
creating different volume block sizes via the central charging unit, wherein, a decision mechanism of the central charging unit is configured to calculate a next volume block size before assignment thereof to a respective one of the at least two counter units. 20. The method as recited in claim 19, wherein the next volume block size is calculated based on a momentary available maximum bandwidth of a single access technology. 21. A system for enabling a charging of packet switched data transfer between an authorized customer and a network, the system comprising:
an equipment related to the authorized customer; a hybrid access aggregation point (HAAP) as an access to the network; at least a first data path and a second data path which are respectively configured to transfer a part of a data volume, each of the first data path and the second data path using a different access technology for the data transfer; a device configured to determine a total data volume as a sum of the parts of data volume transferred via the first data path and the second data path; and a central charging unit configured to use the total data volume for a charging. 22. The system as recited in claim 21, wherein,
the network is the internet or an intranet, the equipment related to the authorized customer is a customer premise equipment (CPE), and the central charging unit is an Online Charging System (OCS). 23. The system as recited in claim 21, further comprising:
a separate counter unit arranged in each of the first path and the second path, each separate counter unit being configured to determine the part of data volume transferred in the respective first path or the second path and to inform the central charging unit of the part of data volume so determined. | 2,600 |
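Claims 11-18 above describe a concrete accounting scheme: a counter unit per access path, a central charging unit that charges on the sum, and a per-period cap that can trigger an action such as rate reduction. A minimal sketch of that scheme, where the class names and the numeric values are assumptions for the example, not taken from the patent:

```python
# Illustrative sketch of claims 11-18: per-path counter units report the
# volume carried over each access technology, and a central charging unit
# (e.g. an OCS) charges on the sum and checks it against a per-period cap.
# Names and numbers are assumptions for the example.

class CounterUnit:
    """Counts the data volume carried on one access path (claim 13)."""
    def __init__(self):
        self.volume = 0

    def count(self, nbytes):
        self.volume += nbytes


class CentralChargingUnit:
    """Sums the per-path volumes and compares against the fixed maximum
    volume of data capacity per time period (claims 11 and 14)."""
    def __init__(self, threshold):
        self.threshold = threshold

    def total_volume(self, counters):
        return sum(c.volume for c in counters)

    def over_threshold(self, counters):
        # When this is True, an action such as reducing the customer's
        # data transfer rate (claims 15-16) could be initiated.
        return self.total_volume(counters) >= self.threshold


dsl_counter, lte_counter = CounterUnit(), CounterUnit()  # two access paths
dsl_counter.count(600)
lte_counter.count(500)

ocs = CentralChargingUnit(threshold=1000)
total = ocs.total_volume([dsl_counter, lte_counter])
throttle = ocs.over_threshold([dsl_counter, lte_counter])
```

In the hybrid-access setting of the claims, the two counters would sit on links using different access technologies (for instance DSL and LTE), so neither link alone sees the customer's full consumption; only the sum at the central unit does.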
10,537 | 10,537 | 15,592,980 | 2,647 | A method performed by a station that includes receiving information from a cell of a Public Land Mobile Network (PLMN), determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls, when the CSFB voice calls are not supported, determining whether at least one packet switched voice property of the station satisfies a predetermined condition and attaching to the cell of the PLMN when the predetermined condition is satisfied. A further method performed by a station connected to a PLMN that includes identifying cells of the PLMN available for the station to camp on, determining whether the station is capable of executing packet switched calls, when the station is not capable of executing packet switched calls, determining whether the identified cells have a neighbor cell that supports CSFB voice calls and prioritizing the cells that are identified as having neighbor cells that support CSFB voice calls. | 1. A method comprising:
at a station:
receiving information from a cell of a Public Land Mobile Network (PLMN);
determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls;
when the CSFB voice calls are not supported, determining whether at least one packet switched voice property of the station satisfies a predetermined condition; and
attaching to the cell of the PLMN when the at least one packet switched voice property satisfies the predetermined condition. 2. The method of claim 1, wherein the information is received in a System Information Block (SIB) transmitted by the cell, wherein the SIB is one of SIB 1, SIB 6, SIB 7 or SIB 8. 3. The method of claim 1, further comprising:
searching for an alternative PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition. 4. The method of claim 3, further comprising:
de-prioritizing the PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition such that when the station performs a subsequent PLMN search, the PLMN will not be searched until after other non-deprioritized PLMNs are searched. 5. The method of claim 1, wherein the at least one packet switched voice property comprises whether the station is packet switched voice capable. 6. The method of claim 1, wherein the at least one packet switched voice property comprises whether the station is packet switched voice enabled. 7. The method of claim 1, wherein the at least one packet switched voice property comprises whether a carrier configuration profile of the station is packet switched voice enabled. 8. The method of claim 1, wherein the at least one packet switched voice property comprises whether the PLMN is a roaming network and the roaming agreement allows for packet switched voice calls on the roaming network. 9. The method of claim 1, wherein the at least one packet switched voice property is a VoLTE property. 10. The method of claim 1, wherein the method is initiated by one of the station booting up, an airplane mode setting of the station being turned off, a packet switched voice setting in the station being toggled, or the station leaving boundaries of a previously attached to PLMN. 11. The method of claim 1, wherein determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls, comprises:
determining whether CSFB voice calls are supported by a first type of radio access network; and only when it is determined that CSFB voice calls are not supported by the first type of radio access network, determining whether CSFB voice calls are supported by a second type of radio access network. 12. A station, comprising:
a transceiver; and a processor configured to execute instructions, wherein the instructions cause the processor to perform operations comprising:
receiving information from a cell of a Public Land Mobile Network (PLMN);
determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls;
when the CSFB voice calls are not supported, determining whether at least one VoLTE property of the station satisfies a predetermined condition; and
causing the station to attach to the cell of the PLMN when the at least one VoLTE property satisfies the predetermined condition. 13. The station of claim 12, wherein the information is received in a System Information Block (SIB) transmitted by the cell, wherein the SIB is one of SIB 1, SIB 6, SIB 7 or SIB 8. 14. The station of claim 12, wherein the operations further comprise:
searching for an alternative PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition; and de-prioritizing the PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition such that when the station performs a subsequent PLMN search, the PLMN will not be searched until after other non-deprioritized PLMNs are searched. 15. The method of claim 1, wherein the at least one packet switched voice property comprises one of whether the station is packet switched voice capable, whether the station is packet switched voice enabled, whether a carrier configuration profile of the station is packet switched voice enabled, or whether the PLMN is a roaming network and the roaming agreement allows for packet switched voice calls on the roaming network. 16. A method comprising:
at a station connected to a Public Land Mobile Network (PLMN):
identifying cells of the PLMN that are available for the station to camp on;
determining whether the station is capable of executing packet switched calls;
when the station is not capable of executing packet switched calls, determining whether each of the identified cells have a neighbor cell that supports circuit switched fallback (CSFB) voice calls; and
prioritizing the cells that are identified as having neighbor cells that support CSFB voice calls. 17. The method of claim 16, wherein the determining whether each of the identified cells have neighbor cells that support circuit switched fallback (CSFB) voice calls is based on System Information Blocks (SIBs) received from each of the identified cells. 18. The method of claim 16, wherein the cells that have no identified neighbor cells that support CSFB voice calls are not available for camping if there is at least one available cell that has the neighbor cell that support CSFB voice calls. 19. The method of claim 16, wherein the cells are further prioritized based on additional factors unrelated to the determination of the neighbor cell that supports CSFB voice calls. 20. The method of claim 16, further comprising:
when no cells are identified that have neighbor cells that support CSFB voice calls, camping on one of the cells of the PLMN; and when a new cell of the PLMN is identified by the station, re-performing the method. | A method performed by a station that includes receiving information from a cell of a Public Land Mobile Network (PLMN), determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls, when the CSFB voice calls are not supported, determining whether at least one packet switched voice property of the station satisfies a predetermined condition and attaching to the cell of the PLMN when the predetermined condition is satisfied. A further method performed by a station connected to a PLMN that includes identifying cells of the PLMN available for the station to camp on, determining whether the station is capable of executing packet switched calls, when the station is not capable of executing packet switched calls, determining whether the identified cells have a neighbor cell that supports CSFB voice calls and prioritizing the cells that are identified as having neighbor cells that support CSFB voice calls.1. A method comprising:
at a station:
receiving information from a cell of a Public Land Mobile Network (PLMN);
determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls;
when the CSFB voice calls are not supported, determining whether at least one packet switched voice property of the station satisfies a predetermined condition; and
attaching to the cell of the PLMN when the at least one packet switched voice property satisfies the predetermined condition. 2. The method of claim 1, wherein the information is received in a System Information Block (SIB) transmitted by the cell, wherein the SIB is one of SIB 1, SIB 6, SIB 7 or SIB 8. 3. The method of claim 1, further comprising:
searching for an alternative PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition. 4. The method of claim 3, further comprising:
de-prioritizing the PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition such that when the station performs a subsequent PLMN search, the PLMN will not be searched until after other non-deprioritized PLMNs are searched. 5. The method of claim 1, wherein the at least one packet switched voice property comprises whether the station is packet switched voice capable. 6. The method of claim 1, wherein the at least one packet switched voice property comprises whether the station is packet switched voice enabled. 7. The method of claim 1, wherein the at least one packet switched voice property comprises whether a carrier configuration profile of the station is packet switched voice enabled. 8. The method of claim 1, wherein the at least one packet switched voice property comprises whether the PLMN is a roaming network and the roaming agreement allows for packet switched voice calls on the roaming network. 9. The method of claim 1, wherein the at least one packet switched voice property is a VoLTE property. 10. The method of claim 1, wherein the method is initiated by one of the station booting up, an airplane mode setting of the station being turned off, a packet switched voice setting in the station being toggled, or the station leaving boundaries of a previously attached to PLMN. 11. The method of claim 1, wherein determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls, comprises:
determining whether CSFB voice calls are supported by a first type of radio access network; and only when it is determined that CSFB voice calls are not supported by the first type of radio access network, determining whether CSFB voice calls are supported by a second type of radio access network. 12. A station, comprising:
a transceiver; and a processor configured to execute instructions, wherein the instructions cause the processor to perform operations comprising:
receiving information from a cell of a Public Land Mobile Network (PLMN);
determining whether the information indicates the cell supports circuit switched fallback (CSFB) voice calls;
when the CSFB voice calls are not supported, determining whether at least one VoLTE property of the station satisfies a predetermined condition; and
causing the station to attach to the cell of the PLMN when the at least one VoLTE property satisfies the predetermined condition. 13. The station of claim 12, wherein the information is received in a System Information Block (SIB) transmitted by the cell, wherein the SIB is one of SIB 1, SIB 6, SIB 7 or SIB 8. 14. The station of claim 12, wherein the operations further comprise:
searching for an alternative PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition; and de-prioritizing the PLMN when the at least one packet switched voice property fails to satisfy the predetermined condition such that when the station performs a subsequent PLMN search, the PLMN will not be searched until after other non-deprioritized PLMNs are searched. 15. The method of claim 1, wherein the at least one packet switched voice property comprises one of whether the station is packet switched voice capable, whether the station is packet switched voice enabled, whether a carrier configuration profile of the station is packet switched voice enabled, or whether the PLMN is a roaming network and the roaming agreement allows for packet switched voice calls on the roaming network. 16. A method comprising:
at a station connected to a Public Land Mobile Network (PLMN):
identifying cells of the PLMN that are available for the station to camp on;
determining whether the station is capable of executing packet switched calls;
when the station is not capable of executing packet switched calls, determining whether each of the identified cells have a neighbor cell that supports circuit switched fallback (CSFB) voice calls; and
prioritizing the cells that are identified as having neighbor cells that support CSFB voice calls. 17. The method of claim 16, wherein the determining whether each of the identified cells have neighbor cells that support circuit switched fallback (CSFB) voice calls is based on System Information Blocks (SIBs) received from each of the identified cells. 18. The method of claim 16, wherein the cells that have no identified neighbor cells that support CSFB voice calls are not available for camping if there is at least one available cell that has the neighbor cell that support CSFB voice calls. 19. The method of claim 16, wherein the cells are further prioritized based on additional factors unrelated to the determination of the neighbor cell that supports CSFB voice calls. 20. The method of claim 16, further comprising:
when no cells are identified that have neighbor cells that support CSFB voice calls, camping on one of the cells of the PLMN; and when a new cell of the PLMN is identified by the station, re-performing the method. | 2,600 |
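The two methods claimed above are essentially decision rules: claim 1 gates attachment on CSFB support or, failing that, on the station's packet-switched voice properties; claim 16 ranks candidate cells by whether a neighbor supports CSFB. A hedged sketch of both rules, where the dictionary field names are illustrative stand-ins rather than actual 3GPP information elements:

```python
# Sketch of the selection logic in claims 1-4 and 16-19. If a cell's system
# information indicates no CSFB voice support, the station attaches only
# when its packet-switched voice (e.g. VoLTE) properties satisfy the
# predetermined condition; separately, available cells are ranked so those
# with CSFB-capable neighbors come first. Field names are assumptions.

def should_attach(cell_info, station):
    """Claim 1: attach when the cell supports CSFB, or when the station's
    packet-switched voice properties satisfy the predetermined condition."""
    if cell_info.get("csfb_supported"):
        return True
    return bool(station.get("ps_voice_capable")
                and station.get("ps_voice_enabled"))

def prioritize_cells(cells):
    """Claims 16 and 19: cells with a CSFB-capable neighbor sort first;
    signal strength stands in for the 'additional factors' of claim 19."""
    return sorted(cells, key=lambda c: (not c["neighbor_csfb"], -c["rsrp"]))


station = {"ps_voice_capable": True, "ps_voice_enabled": False}
attach = should_attach({"csfb_supported": False}, station)  # condition fails

cells = [
    {"id": "A", "neighbor_csfb": False, "rsrp": -80},
    {"id": "B", "neighbor_csfb": True, "rsrp": -95},
]
ranked = prioritize_cells(cells)  # "B" first despite weaker signal
```

When `should_attach` returns False, the claims have the station search for an alternative PLMN and de-prioritize the current one for subsequent searches (claims 3-4); that bookkeeping is omitted from the sketch.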
10,538 | 10,538 | 15,813,182 | 2,646 | Described herein is an apparatus comprising a location-awareness determination circuit configured to determine a location of the apparatus; at least one processor configured to determine at least one contextual factor of the apparatus including a type identifier of the determined location; an app-management facilitator configured to activate at least one mobile application (“app”) that is associated with the determined location of the apparatus based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the apparatus when the apparatus exits the location, based on the determined at least one contextual factor of the apparatus. | 1. An apparatus comprising:
a location-awareness determination circuit configured to determine a location of the apparatus; at least one processor configured to determine at least one contextual factor of the apparatus including a type identifier of the determined location; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the apparatus based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the apparatus when the apparatus exits the location, based on the determined at least one contextual factor of the apparatus. 2. An apparatus as recited by claim 1, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 3. An apparatus as recited by claim 1, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 4. An apparatus as recited by claim 1, wherein the app-management facilitator is further configured to change a user notification setting for at least one of the selected apps on the apparatus. 5. A computer-readable medium with processor-executable instructions stored thereon which, when executed by a processor, cause the processor to:
determine a location of an apparatus; determine at least one contextual factor of the apparatus including a type identifier of the determined location; activate at least one mobile application (“app”) that is associated with the determined location of the apparatus based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the apparatus when the apparatus exits the location based on the determined at least one contextual factor of the apparatus. 6. A computer-readable medium as recited by claim 5, wherein the processor-executable instructions further cause the processor to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 7. A computer-readable medium as recited by claim 5, wherein the processor-executable instructions further cause the processor to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 8. A computer-readable medium as recited by claim 5, wherein the processor-executable instructions further cause the processor to change a user notification setting for at least one of the selected apps on the apparatus. 9. A mobile device comprising:
at least one antenna, transmitter and receiver; a location-awareness determination circuit configured to determine a location of the mobile device; at least one processor configured to determine at least one contextual factor related to the determined location of the mobile device; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the mobile device based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the mobile device when the mobile device exits the location based on the determined at least one contextual factor of the mobile device. 10. A mobile device as recited by claim 9, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 11. A mobile device as recited by claim 9, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 12. A mobile device as recited by claim 9, wherein the app-management facilitator is further configured to change a user notification setting for at least one of the selected apps on the mobile device. 13. An apparatus, comprising:
a location-awareness determination circuit, configured to determine a location of a mobile device; at least one processor, configured to determine at least one contextual factor related to the determined location of the mobile device; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the mobile device based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the mobile device when the mobile device exits the location based on the determined at least one contextual factor of the mobile device. 14. The apparatus of claim 13, wherein the location-awareness determination circuit is configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 15. The apparatus of claim 13, wherein the location-awareness determination circuit is configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 16. The apparatus of claim 13, wherein the app-management facilitator is configured to change a user notification setting for at least one of the selected apps on the mobile device. 17. A mobile device comprising:
at least one antenna, transmitter and receiver; a location-awareness determination circuit configured to determine that the mobile device enters a location; at least one processor configured to determine at least one contextual factor of the mobile device including a type identifier of the location; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the mobile device based, at least in part, upon the determined at least one contextual factor when the mobile device enters the location; and deactivate at least one of the selected apps on the mobile device when the mobile device exits the location, based on the determined at least one contextual factor of the mobile device. 18. A mobile device as recited by claim 17, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 19. A mobile device as recited by claim 17, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 20. A mobile device as recited by claim 17, wherein the app-management facilitator is further configured to change a user notification setting for at least one of the selected apps on the mobile device. 
| Described herein is an apparatus comprising a location-awareness determination circuit configured to determine a location of the apparatus; at least one processor configured to determine at least one contextual factor of the apparatus including a type identifier of the determined location; an app-management facilitator configured to activate at least one mobile application (“app”) that is associated with the determined location of the apparatus based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the apparatus when the apparatus exits the location, based on the determined at least one contextual factor of the apparatus. 1. An apparatus comprising:
a location-awareness determination circuit configured to determine a location of the apparatus; at least one processor configured to determine at least one contextual factor of the apparatus including a type identifier of the determined location; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the apparatus based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the apparatus when the apparatus exits the location, based on the determined at least one contextual factor of the apparatus. 2. An apparatus as recited by claim 1, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 3. An apparatus as recited by claim 1, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 4. An apparatus as recited by claim 1, wherein the app-management facilitator is further configured to change a user notification setting for at least one of the selected apps on the apparatus. 5. A computer-readable medium with processor-executable instructions stored thereon which, when executed by a processor, cause the processor to:
determine a location of an apparatus; determine at least one contextual factor of the apparatus including a type identifier of the determined location; activate at least one mobile application (“app”) that is associated with the determined location of the apparatus based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the apparatus when the apparatus exits the location based on the determined at least one contextual factor of the apparatus. 6. A computer-readable medium as recited by claim 5, wherein the processor-executable instructions further cause the processor to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 7. A computer-readable medium as recited by claim 5, wherein the processor-executable instructions further cause the processor to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 8. A computer-readable medium as recited by claim 5, wherein the processor-executable instructions further cause the processor to change a user notification setting for at least one of the selected apps on the apparatus. 9. A mobile device comprising:
at least one antenna, transmitter and receiver; a location-awareness determination circuit configured to determine a location of the mobile device; at least one processor configured to determine at least one contextual factor related to the determined location of the mobile device; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the mobile device based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the mobile device when the mobile device exits the location based on the determined at least one contextual factor of the mobile device. 10. A mobile device as recited by claim 9, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 11. A mobile device as recited by claim 9, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 12. A mobile device as recited by claim 9, wherein the app-management facilitator is further configured to change a user notification setting for at least one of the selected apps on the mobile device. 13. An apparatus, comprising:
a location-awareness determination circuit, configured to determine a location of a mobile device; at least one processor, configured to determine at least one contextual factor related to the determined location of the mobile device; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the mobile device based, at least in part, upon the determined at least one contextual factor; and deactivate at least one of the selected apps on the mobile device when the mobile device exits the location based on the determined at least one contextual factor of the mobile device. 14. The apparatus of claim 13, wherein the location-awareness determination circuit is configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 15. The apparatus of claim 13, wherein the location-awareness determination circuit is configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 16. The apparatus of claim 13, wherein the app-management facilitator is configured to change a user notification setting for at least one of the selected apps on the mobile device. 17. A mobile device comprising:
at least one antenna, transmitter and receiver; a location-awareness determination circuit configured to determine that the mobile device enters a location; at least one processor configured to determine at least one contextual factor of the mobile device including a type identifier of the location; an app-management facilitator configured to: activate at least one mobile application (“app”) that is associated with the determined location of the mobile device based, at least in part, upon the determined at least one contextual factor when the mobile device enters the location; and deactivate at least one of the selected apps on the mobile device when the mobile device exits the location, based on the determined at least one contextual factor of the mobile device. 18. A mobile device as recited by claim 17, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 19. A mobile device as recited by claim 17, wherein the location-awareness determination circuit is further configured to determine the location using, at least in part, location information obtained from at least one ambient identifiable wireless signal (IWS) source. 20. A mobile device as recited by claim 17, wherein the app-management facilitator is further configured to change a user notification setting for at least one of the selected apps on the mobile device. | 2,600 |
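The app-management behavior claimed above (activate apps tied to a location's type identifier on entry, deactivate them on exit) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the class name `AppManagementFacilitator`, the mapping structure, and the `on_enter`/`on_exit` method names are all hypothetical, and real location determination (GPS or ambient IWS sources) is abstracted away into a plain string argument.

```python
class AppManagementFacilitator:
    """Hypothetical sketch of location-type-driven app activation."""

    def __init__(self, apps_by_location_type):
        # e.g. {"gym": ["workout_tracker"], "office": ["email"]}
        # maps a location's type identifier (the contextual factor
        # in the claims) to the apps associated with that type
        self.apps_by_location_type = apps_by_location_type
        self.active_apps = set()

    def on_enter(self, location_type):
        # activate the apps associated with the entered location's type
        for app in self.apps_by_location_type.get(location_type, []):
            self.active_apps.add(app)

    def on_exit(self, location_type):
        # deactivate those same apps when the device exits the location
        for app in self.apps_by_location_type.get(location_type, []):
            self.active_apps.discard(app)
```

A location with no registered type identifier simply leaves the active set unchanged, which mirrors the claims' conditioning of both activation and deactivation on the determined contextual factor.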
10,539 | 10,539 | 15,910,562 | 2,616 | Systems and methods are described to enable the creation and use of one or more interest meshes that may comprise interest values associated with points of interest in a virtual environment. The virtual environment may use a mesh to guide a user toward points of interest. Guiding may comprise haptic, visual, and/or audio cues and may also comprise moving the virtual environment around the user. Further, events may be triggered in the virtual environment that may change one or more meshes and/or create a new point of interest for a user. | 1. A method comprising:
creating a virtual environment comprising a space and one or more digital assets positioned at locations within the space; determining values for one or more locations within the virtual environment, each value representing a level of interest at that location within the virtual environment; generating a mesh based on the determined values, wherein the mesh comprises a plurality of values corresponding to the locations within the virtual environment; determining, based on the mesh, a path through the virtual environment; and directing a user on the determined path. 2. The method of claim 1, wherein said directing the user comprises:
determining an initial position and viewpoint orientation of the user; determining, based on the mesh, a desired position and viewpoint orientation for the user along the determined path; and guiding the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 3. The method of claim 2, wherein guiding the user comprises applying a haptic cue in the direction of the desired position and viewpoint orientation. 4. The method of claim 2, wherein guiding the user comprises playing an audio cue in the direction of the desired position and viewpoint orientation. 5. The method of claim 2, wherein guiding the user comprises playing a visual cue in the direction of the desired position and viewpoint orientation. 6. The method of claim 5, wherein the visual cue comprises movement of digital assets in the direction of the desired position and viewpoint orientation. 7. The method of claim 2, wherein guiding the user comprises rotating the virtual environment around the user. 8. The method of claim 1, wherein each of the one or more locations has a corresponding coordinate within the space of the virtual environment. 9. The method of claim 1, wherein the values are assigned or calculated based on defined criteria. 10. The method of claim 1, wherein the mesh comprises a data structure. 11. The method of claim 1, wherein the mesh is stored in computer memory. 12. A method comprising:
generating one or more meshes for a virtual environment, wherein each mesh comprises a plurality of interest values each corresponding to a respective location within the virtual environment; triggering an event in the virtual environment; determining a current position and viewpoint orientation of a user; determining, based at least on the event, a desired position and viewpoint orientation for the user; and guiding, based at least on the one or more meshes, the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 13. The method of claim 12, wherein the event is the insertion of an advertisement. 14. The method of claim 12, wherein the event is the insertion of a narrative element. 15. The method of claim 12, further comprising:
determining a new position and viewpoint orientation of the user; determining the new position and viewpoint orientation of the user is not equal to the desired position and viewpoint orientation of the user; determining a current position and viewpoint orientation of a user; determining, based on the event, a desired position and viewpoint orientation for the user; and guiding the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 16. The method of claim 12, further comprising:
determining a new position and viewpoint orientation of the user; determining the new position and viewpoint orientation of the user is equal to the desired position and viewpoint orientation of the user; and triggering a new event in the virtual environment. 17. The method of claim 12, further comprising:
defining criteria for assigning interest values to one or more digital assets of the virtual environment; assigning, based on the criteria, interest values for the one or more digital assets; and adjusting, based on the assigned interest values, at least one mesh of the one or more meshes. 18. The method of claim 12, wherein each interest value represents a level of interest at its corresponding location within the virtual environment. 19. The method of claim 12, wherein the plurality of interest values are maintained in a data structure. 20. The method of claim 19, wherein the data structure comprises a matrix. 21. The method of claim 12, wherein each location has a coordinate within the space of the virtual environment and wherein each interest value is associated with the coordinate of its corresponding location. 22. A system comprising:
one or more processors; a non-transitory, computer-readable storage medium in operable communication with at least one processor of the one or more processors, wherein the computer-readable storage medium contains one or more programming instructions that, when executed, cause the processor to: access a virtual environment comprising a space and one or more digital assets positioned at locations within the space; access one or more meshes comprising a plurality of interest values each corresponding to a respective location within the virtual environment; outputting for display the virtual environment; determining, based on the one or more meshes, a path through the virtual environment; and directing a user on the determined path. 23. The system of claim 22, wherein the instructions, when executed, further cause the at least one processor to:
determining an initial position and viewpoint orientation of the user; determining a desired position and viewpoint orientation for the user along the determined path; and guiding the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 24. The system of claim 23, wherein the instructions, when executed, further cause the at least one processor to:
determine, based upon a speed and direction of user movement, an estimated ending position and viewpoint orientation of the user; determine a difference between the desired position and viewpoint orientation and the estimated ending position and viewpoint orientation of the user; and move the virtual environment by the determined difference. 25. The system of claim 24, wherein moving the virtual environment comprises rotating the virtual environment around the user. 26. The system of claim 22, wherein the instructions, when executed, further cause the at least one processor to:
scan the virtual environment for predetermined attributes; assign, based on the scan, interest values to the virtual environment; and adjust, based on the interest values, a first mesh of the one or more meshes. 27. The system of claim 26, wherein the instructions, when executed, further cause the at least one processor to:
output for display, based on the first mesh, an overlay in the virtual environment. 28. The system of claim 22, wherein the instructions, when executed, further cause the at least one processor to:
receive an environment engine comprising a set of rules that define interactions between digital assets in the virtual environment; and determining, based on the set of rules, interactions between digital assets in the virtual environment. | Systems and methods are described to enable the creation and use of one or more interest meshes that may comprise interest values associated with points of interest in a virtual environment. The virtual environment may use a mesh to guide a user toward points of interest. Guiding may comprise haptic, visual, and/or audio cues and may also comprise moving the virtual environment around the user. Further, events may be triggered in the virtual environment that may change one or more meshes and/or create a new point of interest for a user. 1. A method comprising:
creating a virtual environment comprising a space and one or more digital assets positioned at locations within the space; determining values for one or more locations within the virtual environment, each value representing a level of interest at that location within the virtual environment; generating a mesh based on the determined values, wherein the mesh comprises a plurality of values corresponding to the locations within the virtual environment; determining, based on the mesh, a path through the virtual environment; and directing a user on the determined path. 2. The method of claim 1, wherein said directing the user comprises:
determining an initial position and viewpoint orientation of the user; determining, based on the mesh, a desired position and viewpoint orientation for the user along the determined path; and guiding the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 3. The method of claim 2, wherein guiding the user comprises applying a haptic cue in the direction of the desired position and viewpoint orientation. 4. The method of claim 2, wherein guiding the user comprises playing an audio cue in the direction of the desired position and viewpoint orientation. 5. The method of claim 2, wherein guiding the user comprises playing a visual cue in the direction of the desired position and viewpoint orientation. 6. The method of claim 5, wherein the visual cue comprises movement of digital assets in the direction of the desired position and viewpoint orientation. 7. The method of claim 2, wherein guiding the user comprises rotating the virtual environment around the user. 8. The method of claim 1, wherein each of the one or more locations has a corresponding coordinate within the space of the virtual environment. 9. The method of claim 1, wherein the values are assigned or calculated based on defined criteria. 10. The method of claim 1, wherein the mesh comprises a data structure. 11. The method of claim 1, wherein the mesh is stored in computer memory. 12. A method comprising:
generating one or more meshes for a virtual environment, wherein each mesh comprises a plurality of interest values each corresponding to a respective location within the virtual environment; triggering an event in the virtual environment; determining a current position and viewpoint orientation of a user; determining, based at least on the event, a desired position and viewpoint orientation for the user; and guiding, based at least on the one or more meshes, the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 13. The method of claim 12, wherein the event is the insertion of an advertisement. 14. The method of claim 12, wherein the event is the insertion of a narrative element. 15. The method of claim 12, further comprising:
determining a new position and viewpoint orientation of the user; determining the new position and viewpoint orientation of the user is not equal to the desired position and viewpoint orientation of the user; determining a current position and viewpoint orientation of a user; determining, based on the event, a desired position and viewpoint orientation for the user; and guiding the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 16. The method of claim 12, further comprising:
determining a new position and viewpoint orientation of the user; determining the new position and viewpoint orientation of the user is equal to the desired position and viewpoint orientation of the user; and triggering a new event in the virtual environment. 17. The method of claim 12, further comprising:
defining criteria for assigning interest values to one or more digital assets of the virtual environment; assigning, based on the criteria, interest values for the one or more digital assets; and adjusting, based on the assigned interest values, at least one mesh of the one or more meshes. 18. The method of claim 12, wherein each interest value represents a level of interest at its corresponding location within the virtual environment. 19. The method of claim 12, wherein the plurality of interest values are maintained in a data structure. 20. The method of claim 19, wherein the data structure comprises a matrix. 21. The method of claim 12, wherein each location has a coordinate within the space of the virtual environment and wherein each interest value is associated with the coordinate of its corresponding location. 22. A system comprising:
one or more processors; a non-transitory, computer-readable storage medium in operable communication with at least one processor of the one or more processors, wherein the computer-readable storage medium contains one or more programming instructions that, when executed, cause the processor to: access a virtual environment comprising a space and one or more digital assets positioned at locations within the space; access one or more meshes comprising a plurality of interest values each corresponding to a respective location within the virtual environment; outputting for display the virtual environment; determining, based on the one or more meshes, a path through the virtual environment; and directing a user on the determined path. 23. The system of claim 22, wherein the instructions, when executed, further cause the at least one processor to:
determining an initial position and viewpoint orientation of the user; determining a desired position and viewpoint orientation for the user along the determined path; and guiding the user from the initial position and viewpoint orientation to the desired position and viewpoint orientation. 24. The system of claim 23, wherein the instructions, when executed, further cause the at least one processor to:
determine, based upon a speed and direction of user movement, an estimated ending position and viewpoint orientation of the user; determine a difference between the desired position and viewpoint orientation and the estimated ending position and viewpoint orientation of the user; and move the virtual environment by the determined difference. 25. The system of claim 24, wherein moving the virtual environment comprises rotating the virtual environment around the user. 26. The system of claim 22, wherein the instructions, when executed, further cause the at least one processor to:
scan the virtual environment for predetermined attributes; assign, based on the scan, interest values to the virtual environment; and adjust, based on the interest values, a first mesh of the one or more meshes. 27. The system of claim 26, wherein the instructions, when executed, further cause the at least one processor to:
output for display, based on the first mesh, an overlay in the virtual environment. 28. The system of claim 22, wherein the instructions, when executed, further cause the at least one processor to:
receive an environment engine comprising a set of rules that define interactions between digital assets in the virtual environment; and determining, based on the set of rules, interactions between digital assets in the virtual environment. | 2,600 |
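The interest-mesh path determination claimed above (a mesh of interest values keyed to locations, used to determine a path that directs the user) can be sketched with a simple greedy walk. This is a hypothetical illustration under stated assumptions, not the patented method: the mesh is reduced to a dictionary mapping 2-D grid coordinates to interest values, and `build_path` simply steps toward the highest-interest neighboring location at each move, ignoring viewpoint orientation and cue generation.

```python
def build_path(mesh, start, steps):
    """Greedy sketch: walk from `start` toward high-interest cells.

    mesh:  dict mapping (x, y) grid coordinates to interest values
           (each value representing a level of interest at that location)
    start: the user's initial (x, y) position
    steps: maximum number of moves to take
    """
    path = [start]
    current = start
    for _ in range(steps):
        x, y = current
        # candidate moves: the 8 neighboring cells that exist in the mesh
        neighbors = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0) and (x + dx, y + dy) in mesh]
        if not neighbors:
            break
        # step toward the neighboring location with the highest interest
        current = max(neighbors, key=lambda c: mesh[c])
        path.append(current)
    return path
```

A real implementation would presumably weigh the whole mesh rather than only adjacent cells, and would update the mesh when events (an inserted advertisement or narrative element) change the interest values; this sketch only shows the core idea of a value field steering the path.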
10,540 | 10,540 | 15,672,363 | 2,643 | A diversity switch circuit includes first and second switches. The first switch includes a first common terminal connected to a diversity antenna, a first selection terminal connected to a first signal path, and a second selection terminal connected to a second signal path. The second signal path is a path different from the first signal path. The second switch is disposed in the first signal path and includes a second common terminal connected to the first selection terminal and at least two selection terminals. A received signal received by the diversity antenna is transmitted to the first signal path when the first common terminal and the first selection terminal are connected to each other. A sending signal to be sent by the diversity antenna is transmitted to the second signal path when the first common terminal and the second selection terminal are connected to each other. | 1. A diversity switch circuit comprising:
a first switch including a first common terminal connected to a diversity antenna, a first selection terminal connected to a first signal path, and a second selection terminal connected to a second signal path, the second signal path being a path different from the first signal path; and a second switch that is disposed in the first signal path and that includes a second common terminal connected to the first selection terminal and includes at least two selection terminals; wherein a received signal received by the diversity antenna is transmitted to the first signal path when the first common terminal and the first selection terminal are connected to each other; and a sending signal to be sent by the diversity antenna is transmitted to the second signal path when the first common terminal and the second selection terminal are connected to each other. 2. The diversity switch circuit according to claim 1, wherein a terminating resistor is connected to one of the at least two selection terminals of the second switch. 3. The diversity switch circuit according to claim 1, wherein a matching circuit is connected between the first selection terminal and the second common terminal. 4. The diversity switch circuit according to claim 1, wherein
the second switch includes two or more second switches; and a first multiplexer is connected between the first selection terminal and the second common terminal of each of the two or more second switches. 5. The diversity switch circuit according to claim 4, wherein
the first switch includes a third selection terminal connected to the first signal path; a third switch is connected between the first multiplexer and the second common terminal; and a received signal received by the diversity antenna is transmitted to the first signal path without passing through the first multiplexer when the first common terminal and the third selection terminal are connected to each other and when the third switch is OFF. 6. The diversity switch circuit according to claim 1, further comprising:
a fourth switch that is disposed in the second signal path and includes a third common terminal connected to the second selection terminal and includes at least two selection terminals. 7. The diversity switch circuit according to claim 6, wherein a matching circuit is connected between the second selection terminal and the third common terminal. 8. The diversity switch circuit according to claim 6, wherein
the fourth switch includes two or more fourth switches; and a second multiplexer is connected between the second selection terminal and the third common terminal of each of the two or more fourth switches. 9. The diversity switch circuit according to claim 8, wherein
the first switch also includes a fourth selection terminal connected to the second signal path; a fifth switch is connected between the second multiplexer and the third common terminal; and a sending signal to be sent by the diversity antenna is transmitted to the second signal path without passing through the second multiplexer when the first common terminal and the fourth selection terminal are connected to each other and when the fifth switch is OFF. 10. A radio-frequency module comprising:
the diversity switch circuit according to claim 1; a filter connected to the at least two selection terminals of the second switch; and an amplifier circuit connected to the filter. 11. The radio-frequency module according to claim 10, wherein a terminating resistor is connected to one of the at least two selection terminals of the second switch. 12. The radio-frequency module according to claim 10, wherein a matching circuit is connected between the first selection terminal and the second common terminal. 13. The radio-frequency module according to claim 10, wherein
the second switch includes two or more second switches; and a first multiplexer is connected between the first selection terminal and the second common terminal of each of the two or more second switches. 14. The radio-frequency module according to claim 13, wherein
the first switch includes a third selection terminal connected to the first signal path; a third switch is connected between the first multiplexer and the second common terminal; and a received signal received by the diversity antenna is transmitted to the first signal path without passing through the first multiplexer when the first common terminal and the third selection terminal are connected to each other and when the third switch is OFF. 15. The radio-frequency module according to claim 10, further comprising:
a fourth switch that is disposed in the second signal path and includes a third common terminal connected to the second selection terminal and includes at least two selection terminals. 16. The radio-frequency module according to claim 15, wherein a matching circuit is connected between the second selection terminal and the third common terminal. 17. The radio-frequency module according to claim 15, wherein
the fourth switch includes two or more fourth switches; and a second multiplexer is connected between the second selection terminal and the third common terminal of each of the two or more fourth switches. 18. The radio-frequency module according to claim 17, wherein
the first switch also includes a fourth selection terminal connected to the second signal path; a fifth switch is connected between the second multiplexer and the third common terminal; and a sending signal to be sent by the diversity antenna is transmitted to the second signal path without passing through the second multiplexer when the first common terminal and the fourth selection terminal are connected to each other and when the fifth switch is OFF. 19. A communication device comprising:
a radio-frequency signal processing circuit that processes the received signal and the sending signal; and the radio-frequency module according to claim 10 that sends the sending signal and receives the received signal between the diversity antenna and the radio-frequency signal processing circuit. | A diversity switch circuit includes first and second switches. The first switch includes a first common terminal connected to a diversity antenna, a first selection terminal connected to a first signal path, and a second selection terminal connected to a second signal path. The second signal path is a path different from the first signal path. The second switch is disposed in the first signal path and includes a second common terminal connected to the first selection terminal and at least two selection terminals. A received signal received by the diversity antenna is transmitted to the first signal path when the first common terminal and the first selection terminal are connected to each other. A sending signal to be sent by the diversity antenna is transmitted to the second signal path when the first common terminal and the second selection terminal are connected to each other. | 2,600
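The routing rule in claim 1 of this record can be sketched as a small state machine. This is purely illustrative: the class and method names below are invented for the sketch, not taken from the patent, and the model ignores the downstream second switch, multiplexers, and matching circuits.

```python
# Illustrative sketch (hypothetical names) of the claim-1 routing rule: the
# first switch's common terminal is tied to the diversity antenna; connecting
# it to selection terminal 1 passes a received signal onto the first (receive)
# signal path, and connecting it to selection terminal 2 passes a signal to be
# sent onto the second (transmit) signal path.
class FirstSwitch:
    FIRST_PATH = "first signal path"    # receive side, feeds the second switch
    SECOND_PATH = "second signal path"  # transmit side

    def __init__(self):
        self.selected = None  # which selection terminal the common terminal contacts

    def connect(self, terminal):
        if terminal not in (1, 2):
            raise ValueError("this sketch models two selection terminals")
        self.selected = terminal

    def route(self, direction):
        """direction: 'rx' for a signal received by the antenna,
        'tx' for a signal to be sent by the antenna."""
        if direction == "rx" and self.selected == 1:
            return self.FIRST_PATH
        if direction == "tx" and self.selected == 2:
            return self.SECOND_PATH
        return None  # no connected path for this direction

sw = FirstSwitch()
sw.connect(1)
assert sw.route("rx") == "first signal path"
sw.connect(2)
assert sw.route("tx") == "second signal path"
```

Note how the `None` branch captures the claim's implicit exclusivity: while the common terminal contacts the transmit-side terminal, a received signal has no connected path, which is the isolation a terminating resistor (claim 2) would further improve in a real circuit.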
10,541 | 10,541 | 14,361,116 | 2,661 | Image processing apparatus 110 for applying a mask to an object, comprising an input 120 for obtaining an image 122, a processor 130 for (i) detecting the object in the image, and (ii) applying the mask to the object in the image for obtaining an output image 60, and the processor being arranged for said applying the mask to the object by (j) establishing an object contour of the object, (jj) generating, based on the object contour, a mask being smaller than the object, and (jjj) positioning the mask over the object for masking a body of the object while keeping clear a border area of the object. | 1. Image processing apparatus for applying a mask to an object, comprising:
an input for obtaining an image; a processor for (i) detecting the object in the image, and (ii) applying the mask to the object in the image for obtaining an output image; and an output for providing the output image to a display; the processor being arranged for said applying of said mask to the object by (j) establishing an object contour of the object, (jj) generating, based on the object contour, a mask being smaller than the object, and (jjj) positioning the mask over the object for masking a body of the object while leaving clear a border area of the object. 2. Image processing apparatus according to claim 1, wherein the processor is arranged for generating the mask by reducing the object contour in size in an inward direction with respect to the object for obtaining a mask contour of the mask. 3. Image processing apparatus according to claim 2, wherein the processor is arranged for reducing the object contour in size by applying a morphological erosion technique to the object contour. 4. Image processing apparatus according to claim 1, wherein the processor is arranged for detecting, in the image, an object gradient constituting a gradual transition between the object and its surroundings for leaving clear the object gradient and the border area of the object between the mask and the object gradient. 5. Image processing apparatus according to claim 1, wherein the processor is arranged for establishing the border area of the object having a displayed width between 2 mm and 10 mm when the output image is displayed on a display. 6. Image processing apparatus according to claim 1, further comprising a user input (140) for enabling a user to determine a zoom factor for zooming in on or zooming out of the output image, and wherein the processor is arranged for generating the mask, based on the zoom factor, for maintaining a displayed width of the border area of the object when the output image is displayed on a display. 7. 
Image processing apparatus according to claim 1, wherein, when the mask is applied to the object, the processor is arranged for generating a mask gradient in the output image for establishing a gradual transition between the mask and the object. 8. Image processing apparatus according to claim 7, wherein the processor is arranged for generating the mask gradient blending a border area of the mask with the object. 9. Image processing apparatus according to claim 7, wherein the processor is arranged for (i) detecting, in the image, an object gradient constituting a gradual transition between the object and its surroundings, and (ii) generating the mask gradient as differing in width and/or shape from the object gradient for visually differentiating the object gradient from the mask gradient in the output image. 10. Image processing apparatus according to claim 7, further comprising a user input for enabling a user to determine the width and/or shape of the mask gradient. 11. Image processing apparatus according to claim 1, wherein the processor is arranged for masking the body of the object by reducing brightness and/or contrast of the body of the object. 12. Workstation comprising the image processing apparatus of claim 1. 13. Imaging apparatus comprising the image processing apparatus of claim 1. 14. A method of applying a mask to an object, comprising:
obtaining an image; detecting the object in the image; applying the mask to the object in the image for obtaining an output image by (j) establishing an object contour of the object, (jj) generating, based on the object contour, a mask being smaller than the object, and (jjj) positioning the mask over the object for masking in the output image the body of the object while leaving clear a border area of the object; and providing the output image to a display. 15. A computer program product comprising instructions for causing a processor system to perform the method according to claim 14. | 2,600
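The shrink-by-erosion idea in claims 2 and 3 of this record is easy to demonstrate concretely. The sketch below is illustrative only (a one-step 4-neighborhood erosion on a toy binary image; real contour erosion and the 2-10 mm displayed border of claim 5 would depend on image resolution and zoom):

```python
# Minimal sketch of claims 2-3: shrink a binary object by one round of
# 4-neighborhood morphological erosion, so the resulting mask is smaller
# than the object and leaves a border area of the object clear.
def erode(obj):
    """One erosion step: a pixel survives only if it and its 4 neighbors are set."""
    h, w = len(obj), len(obj[0])

    def at(r, c):
        return 0 <= r < h and 0 <= c < w and obj[r][c]

    return [[obj[r][c] and at(r - 1, c) and at(r + 1, c)
             and at(r, c - 1) and at(r, c + 1)
             for c in range(w)] for r in range(h)]

# A 5x5 square object centred in a 7x7 image.
object_px = [[1 <= r <= 5 and 1 <= c <= 5 for c in range(7)] for r in range(7)]
mask_px = erode(object_px)  # mask contour = eroded object contour
border_px = [[object_px[r][c] and not mask_px[r][c] for c in range(7)]
             for r in range(7)]

assert sum(map(sum, object_px)) == 25  # object body
assert sum(map(sum, mask_px)) == 9     # mask is smaller than the object
assert sum(map(sum, border_px)) == 16  # border area left clear around the mask
```

Masking in the sense of claim 11 would then dim only the `mask_px` pixels (reduced brightness/contrast), while the 16 `border_px` pixels of the object stay untouched.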
10,542 | 10,542 | 15,961,016 | 2,691 | Electrical equipment such as a tablet computer cover, a laptop computer, or other equipment may include keys. Each key may have a key member. Coatings such as opaque coating layers may be formed on the key members. Opaque coating layers may be patterned to form symbol-shaped openings associated with key labels. Opaque coating layers may also have recessed peripheral portions and other features to enhance the appearance of the keys. Metal coating layers such as physical vapor deposition metal layers may be incorporated into the keys. Key members may have outer surfaces and opposing inner surfaces on which patterned coating layers may be formed. Peripheral edge portions of the key members may extend between the outer and inner surfaces and may be coated with metal coating layers to form reflective metal trim structures. Backlight illumination for the keys may be formed from light sources such as light-emitting diodes. | 1. Electrical equipment, comprising:
an array of keys each of which includes a key press sensor and a key member; and a layer of material having a first surface facing the key members and having an opposing second surface facing the key press sensors; and light sources that provide backlight illumination for the keys, wherein each key member includes a coating layer with a symbol-shaped opening and includes a metal trim. 2. The electrical equipment defined in claim 1 wherein each key member has a peripheral edge surface and wherein the metal trim of each key member is formed from a metal coating layer on the peripheral edge surface. 3. The electrical equipment defined in claim 2 wherein each key member has opposing outer and inner surfaces and wherein a portion of the metal coating layer on each key member extends from the peripheral edge surface over a portion of the inner surface of that key member. 4. The electrical equipment defined in claim 2 wherein each key member has opposing outer and inner surfaces, the inner surface facing the layer of material, and wherein the coating layer with the symbol-shaped opening of each key member is formed on the outer surface. 5. The electrical equipment defined in claim 2 wherein each key member has opposing inner and outer surfaces, the inner surface facing the layer of material, and wherein the coating layer with the symbol-shaped opening of each key member is formed on the inner surface. 6. The electrical equipment defined in claim 2 wherein the metal coating layer on the peripheral edge surface of each key member comprises a physical vapor deposition metal coating layer. 7. The electrical equipment defined in claim 2 wherein the key member comprises clear polymer and wherein the layer of material comprises fabric with an opening overlapped by the symbol-shaped opening. 8. 
The electrical equipment defined in claim 1 wherein each key member has a peripheral edge surface, wherein each key member has an inner surface facing the layer of material and has an opposing outer surface, and wherein the coating layer with the symbol-shaped opening is formed on the outer surface and is recessed from the peripheral edge surface. 9. The electrical equipment defined in claim 1 wherein each key member comprises transparent polymer. 10. The electrical equipment defined in claim 9 wherein the layer of material comprises woven fabric. 11. The electrical equipment defined in claim 1 wherein the light sources comprise light-emitting diodes, the electrical equipment further comprising a printed circuit coupled to the light-emitting diodes and the key press sensors, wherein the key press sensors comprise dome switches. 12. The electrical equipment defined in claim 1 wherein the sidewall surfaces have stepped cross-sectional profiles. 13. The electrical equipment defined in claim 1 wherein the sidewall surfaces are angled to form a tapered cross-sectional profile for each key member. 14. The electrical equipment defined in claim 1 wherein the layer of material comprises a layer of fabric and wherein each key member comprises a polymer member with a concave outer surface and an opposing planar inner surface facing the layer of fabric. 15. An apparatus, comprising:
a printed circuit; a key press sensor on the printed circuit; a layer of material that overlaps the key press sensor; a key member on the layer of material, wherein the layer of material is interposed between the key member and the key press sensor; and metal on the key member. 16. The apparatus defined in claim 15 wherein each key member has an outer surface and an opposing inner surface facing the layer of material, wherein each key member has a peripheral edge surface extending between the outer surface and opposing inner surface of that key member, and wherein the peripheral edge surface of each key member is covered by the metal of that key member. 17. The apparatus defined in claim 16 wherein the layer of material comprises fabric. 18. A keyboard, comprising:
a fabric layer; an array of clear key members attached to the fabric layer, wherein each clear key member has a peripheral edge surface; a trim on each peripheral edge surface; a coating layer on each clear key member that forms a symbol; and light sources that provide backlight illumination to the array of clear key members. 19. The keyboard defined in claim 18 wherein the trim on each edge surface comprises a metal coating layer. 20. The keyboard defined in claim 19 wherein the coating layer on each clear key member comprises an opaque coating layer having a symbol-shaped opening that forms the symbol, wherein the fabric layer has openings, and wherein the symbol-shaped opening in the coating layer on each clear key member is overlapped by a respective one of the openings in the fabric layer. | 2,600
10,543 | 10,543 | 15,178,929 | 2,674 | A system and method for natural language generation employ a natural language generation model which has been trained to assign an utterance label to a new text sequence, based on features extracted from the text sequence, such as parts-of-speech. The model assigns an utterance label to the new text sequence, based on the extracted features. The utterance label is used to guide the generation of a natural language utterance, such as a question, from the new text sequence. The system and method find application in dialog systems for generating utterances, to be sent to a user, from brief descriptions of problems or solutions in a knowledge base. | 1. A method for natural language generation comprising:
providing a natural language generation model which has been trained to assign an utterance label to a text sequence based on features extracted from the text sequence; receiving a new text sequence; extracting features from the text sequence; assigning an utterance label to the new text sequence, based on the extracted features, with the trained natural language generation model; generating a natural language utterance from the new text sequence, using the assigned utterance label to guide the generation of the natural language utterance, wherein at least one of the extracting and generating is performed with a processor. 2. The method of claim 1, wherein the natural language generation model comprises a sequential decision model. 3. The method of claim 2, wherein the sequential decision model is selected from a Conditional Random Field model, a recurrent neural network model, and a combination thereof. 4. The method of claim 1, wherein the method further comprises training the natural language generation model on features extracted from a collection of text sequences and corresponding utterance labels, the utterance label of each of the text sequences in the collection being generated from a subset of the words of a natural language utterance provided by an annotator for the text sequence. 5. The method of claim 4, wherein the natural language utterance provided by an annotator is an interrogatory form of the respective text sequence. 6. The method of claim 1, wherein the utterance label comprises a sequence of words terminating in an auxiliary verb or a pronoun. 7. The method of claim 1, wherein the natural language utterance is in the form of a question. 8. The method of claim 1, wherein the extracting features from the text sequence includes identifying parts-of-speech for tokens of the text sequence, at least some of the features comprising the identified parts-of-speech. 9. 
The method of claim 8, wherein the parts-of-speech include parts-of-speech for different types of verb. 10. The method of claim 9, wherein the parts-of-speech for different types of verb are selected from the group consisting of:
a verb which is in the 3rd person; a verb which is a gerund; a verb which is a past-participle; and a verb which is an infinitive. 11. The method of claim 8, wherein the extracted features further include features derived from the tokens. 12. The method of claim 1, wherein the generating a natural language utterance from the text sequence comprises using a generative grammar or set of automata, parts of which are labeled with utterance labels from a set of utterance labels applied by the natural language generation model. 13. The method of claim 1, wherein the assigning of an utterance label to the text sequence comprises labeling each word in the text sequence with the assigned utterance label. 14. The method of claim 1, wherein the receiving of the new text sequence comprises receiving the new text sequence from a knowledge base which includes problems and corresponding solutions and the method further comprises outputting the natural language utterance to a client device during a dialog with a user. 15. The method of claim 1, wherein the new text sequence is no more than a single sentence. 16. A computer program product comprising a non-transitory recording medium storing instructions, which, when executed on a computer, cause the computer to perform the method of claim 1. 17. A system comprising memory which stores instructions for performing the method of claim 1 and a processor in communication with the memory for executing the instructions. 18. A system for natural language generation comprising:
memory which stores a natural language generation model which has been trained to assign an utterance label to a text sequence based on features extracted from the text sequence; a feature extractor which extracts features from an input text sequence; a labeling component which assigns an utterance label to the input text sequence, based on the extracted features, with the trained natural language generation model; a surface realization component which generates a natural language utterance from the input text sequence, using the assigned utterance label to guide the generation of the natural language utterance; and a processor which implements the feature extractor, labeling component, and surface realization component. 19. The system of claim 18, further comprising a training component for training the natural language generation model. 20. A method for generating a natural language generation system, comprising:
receiving a collection of text sequences and for each text sequence, a natural language utterance in a communicative form; extracting utterance labels from the natural language utterances, each utterance label comprising a sequence of at least one word and including an auxiliary verb; extracting features from each of the text sequences; training a natural language generation model to assign an utterance label to a new text sequence, based on the extracted features from each of the text sequences and the extracted utterance labels; indexing parts of a realization model according to respective utterance labels for guiding the generation of a natural language utterance from a new text sequence, using an assigned utterance label; wherein at least one of the extracting utterance labels, extracting features, training the natural language generation model and indexing parts of the realization model is performed with a processor. 21. A system comprising memory which stores instructions for performing the method of claim 20 and a processor in communication with the memory for executing the instructions. | A system and method for natural language generation employ a natural language generation model which has been trained to assign an utterance label to a new text sequence, based on features extracted from the text sequence, such as parts-of-speech. The model assigns an utterance label to the new text sequence, based on the extracted features. The utterance label is used to guide the generation of a natural language utterance, such as a question, from the new text sequence. The system and method find application in dialog systems for generating utterances, to be sent to a user, from brief descriptions of problems or solutions in a knowledge base. 1. A method for natural language generation comprising:
providing a natural language generation model which has been trained to assign an utterance label to a text sequence based on features extracted from the text sequence; receiving a new text sequence; extracting features from the text sequence; assigning an utterance label to the new text sequence, based on the extracted features, with the trained natural language generation model; generating a natural language utterance from the new text sequence, using the assigned utterance label to guide the generation of the natural language utterance, wherein at least one of the extracting and generating is performed with a processor. 2. The method of claim 1, wherein the natural language generation model comprises a sequential decision model. 3. The method of claim 2, wherein the sequential decision model is selected from a Conditional Random Field model, a recurrent neural network model, and a combination thereof. 4. The method of claim 1, wherein the method further comprises training the natural language generation model on features extracted from a collection of text sequences and corresponding utterance labels, the utterance label of each of the text sequences in the collection being generated from a subset of the words of a natural language utterance provided by an annotator for the text sequence. 5. The method of claim 4, wherein the natural language utterance provided by an annotator is an interrogatory form of the respective text sequence. 6. The method of claim 1, wherein the utterance label comprises a sequence of words terminating in an auxiliary verb or a pronoun. 7. The method of claim 1, wherein the natural language utterance is in the form of a question. 8. The method of claim 1, wherein the extracting features from the text sequence includes identifying parts-of-speech for tokens of the text sequence, at least some of the features comprising the identified parts-of-speech. 9. 
The method of claim 8, wherein the parts-of-speech include parts-of-speech for different types of verb. 10. The method of claim 9, wherein the parts-of-speech for different types of verb are selected from the group consisting of:
a verb which is in the 3rd person; a verb which is a gerund; a verb which is a past-participle; and a verb which is an infinitive. 11. The method of claim 8, wherein the extracted features further include features derived from the tokens. 12. The method of claim 1, wherein the generating a natural language utterance from the text sequence comprises using a generative grammar or set of automata, parts of which are labeled with utterance labels from a set of utterance labels applied by the natural language generation model. 13. The method of claim 1, wherein the assigning of an utterance label to the text sequence comprises labeling each word in the text sequence with the assigned utterance label. 14. The method of claim 1, wherein the receiving of the new text sequence comprises receiving the new text sequence from a knowledge base which includes problems and corresponding solutions and the method further comprises outputting the natural language utterance to a client device during a dialog with a user. 15. The method of claim 1, wherein the new text sequence is no more than a single sentence. 16. A computer program product comprising a non-transitory recording medium storing instructions, which, when executed on a computer, cause the computer to perform the method of claim 1. 17. A system comprising memory which stores instructions for performing the method of claim 1 and a processor in communication with the memory for executing the instructions. 18. A system for natural language generation comprising:
memory which stores a natural language generation model which has been trained to assign an utterance label to a text sequence based on features extracted from the text sequence; a feature extractor which extracts features from an input text sequence; a labeling component which assigns an utterance label to the input text sequence, based on the extracted features, with the trained natural language generation model; a surface realization component which generates a natural language utterance from the input text sequence, using the assigned utterance label to guide the generation of the natural language utterance; and a processor which implements the feature extractor, labeling component, and surface realization component. 19. The system of claim 18, further comprising a training component for training the natural language generation model. 20. A method for generating a natural language generation system, comprising:
receiving a collection of text sequences and for each text sequence, a natural language utterance in a communicative form; extracting utterance labels from the natural language utterances, each utterance label comprising a sequence of at least one word and including an auxiliary verb; extracting features from each of the text sequences; training a natural language generation model to assign an utterance label to a new text sequence, based on the extracted features from each of the text sequences and the extracted utterance labels; indexing parts of a realization model according to respective utterance labels for guiding the generation of a natural language utterance from a new text sequence, using an assigned utterance label; wherein at least one of the extracting utterance labels, extracting features, training the natural language generation model and indexing parts of the realization model is performed with a processor. 21. A system comprising memory which stores instructions for performing the method of claim 20 and a processor in communication with the memory for executing the instructions. | 2,600 |
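The pipeline recited in claims 1, 6 and 7 of the record above (extract features such as parts-of-speech, let a trained model assign an utterance label ending in an auxiliary verb, then use that label to guide surface realization of a question) can be sketched in toy form. This is purely illustrative and is not the patented implementation: the function names, the suffix-based feature heuristics standing in for a real POS tagger, and the rule standing in for the trained sequential model (e.g. a CRF) are all invented here.

```python
# Toy sketch of the claimed label-then-realize pipeline (all names hypothetical).
AUX_BY_POS = {"VBZ": "does", "VBP": "do", "VBD": "did"}  # assumed toy mapping

def extract_features(tokens):
    # Stand-in for real POS tagging: crude suffix heuristics only.
    feats = []
    for tok in tokens:
        if tok.endswith("ing"):
            feats.append("VBG")   # gerund
        elif tok.endswith("s") and not tok.endswith("ss"):
            feats.append("VBZ")   # 3rd-person verb (very rough)
        else:
            feats.append("NN")
    return feats

def assign_label(tokens, feats):
    # Stand-in for the trained sequential model: pick an utterance label
    # terminating in an auxiliary verb, per claim 6 of the record above.
    for f in feats:
        if f in AUX_BY_POS:
            return "how " + AUX_BY_POS[f]
    return "what is"

def realize(tokens, label):
    # The label guides realization of a question (claim 7).
    return label + " " + " ".join(tokens).rstrip(".") + "?"

tokens = "the printer jams when duplexing".split()
label = assign_label(tokens, extract_features(tokens))
print(realize(tokens, label))  # → how does the printer jams when duplexing?
```

The toy output is deliberately naive; a real surface realization component would also reinflect the verb, which is why the claims delegate that step to a generative grammar or automata indexed by utterance label.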
10,544 | 10,544 | 15,691,923 | 2,672 | In order to automatically group or cluster printing devices in a fleet of such devices for servicing, print status data and print state data are received, which comprises alert information for each of the plurality of printing devices, from each of a plurality of printing devices in a fleet of devices. Devices with similar print statuses and alerts are automatically clustered, and the clustered output result displayed on a user interface of a device management application (DMA). The clustered output result is also integrated into the DMA, and a print status alert module generates automated print status alerts for printers identified during clustering. The automated print status alerts are then output for display on the user interface. | 1. A method for automatically grouping print status alerts for a plurality of printing devices, comprising:
receiving print status data and print state data, which comprises alert information for each of the plurality of printing devices, from each of a plurality of printing devices in a fleet of devices; retrieving print status bits from each device; comparing the retrieved print status bits to one or more pre-generated error state masks to determine a state of each device; automatically clustering devices with similar print statuses and alerts; displaying the clustered output result on a user interface of a device management application (DMA); integrating the clustered output result into the DMA; executing a print status alert module that generates automated print status alerts for printers identified during clustering; outputting the automated print status alerts for display on the user interface. 2. The method according to claim 1, further comprising executing a print status module configured to:
retrieve print status bits from each device on scheduled intervals; pre-generate a list of print status ID type masks each comprising at least one status bit; identify all print status bits that match a print status ID type mask; display each identified print status bit in an order in which the print status bit was identified. 3. The method according to claim 1, further comprising obtaining print state level information by:
determining whether or not to generate an indication of an error state, warning state or normal state; if the comparison indicates that the print status shows an error, displaying an error indication; and if the comparison does not indicate that the print status shows an error, comparing the print status to a warning state mask. 4. The method according to claim 3, further comprising:
if the comparison indicates that the print status shows a warning, displaying the warning indication on a user interface; if the print status does not match either of the warning state mask or the error state mask, displaying a “normal” status indication on the user interface. 5. The method according to claim 1, wherein the print status data collected from the devices includes information describing one or more of:
device configuration and settings; usage information; supply levels; and alerts. 6. The method according to claim 1, wherein the automatic clustering of devices is performed via a hierarchical block clustering module. 7. The method according to claim 1, further comprising generating a group or subgroups of devices automatically based on common print statuses and alert levels using the print status data collected at a given time, or over a certain period of time. 8. A system that facilitates automatically grouping print status alerts for a plurality of printing devices, comprising:
a remote print status and print state information extraction module configured to receive print status data and print state data from each of a plurality of printing devices in a fleet of devices; an automatic grouping module that automatically clusters devices having similar print statuses and alerts; a user interface configured to display a clustered output result; and a processor configured to: retrieve print status bits from each device; compare the retrieved print status bits to one or more pre-generated error state masks to determine a state of each device; integrate the clustered output result into a device management application (DMA); execute a print status alert module that generates clustered print status alerts for printers identified during clustering; and output the clustered print status alerts for display on the user interface. 9. The system according to claim 8, wherein the processor is further configured to execute a print status module configured to:
retrieve print status bits from each device on scheduled intervals; pre-generate a list of print status ID type masks each comprising at least one status bit; identify all printing devices having print status bits that match a print status ID type mask; display each identified print status bit in an order in which the print status bit was identified. 10. The system according to claim 8, wherein the processor is further configured to obtain print state level information by:
determining whether or not to generate an indication of an error state, warning state or normal state; if the comparison indicates that the print status shows an error, displaying an error indication; and if the comparison does not indicate that the print status shows an error, comparing the print status to a warning state mask. 11. The system according to claim 10, wherein the processor is further configured to:
if the comparison indicates that the print status shows a warning, display the warning indication on the user interface; if the print status does not match either of the warning state mask or the error state mask, display a “normal” status indication on the user interface. 12. The system according to claim 8, wherein the print status data and print state data received from the devices includes information describing one or more of:
device configuration; device settings; usage information; supply levels; and device alerts. 13. The system according to claim 8, wherein the processor is further configured to execute a hierarchical block clustering module when clustering devices. 14. The system according to claim 8, wherein the processor is further configured to generate a group or subgroups of devices automatically based on common print statuses and alert levels using the print status data collected at a given time, or over a certain period of time. 15. A processor configured to automatically group print status alerts for a plurality of printing devices, the processor being configured to:
receive print status data and print state data, which comprises alert information for each of the plurality of printing devices, from each of a plurality of printing devices in a fleet of devices; retrieve print status bits from each device; compare the retrieved print status bits to one or more pre-generated error state masks to determine a state of each device; automatically cluster devices with similar print statuses and alerts; display the clustered output result on a user interface of the device management application (DMA); integrate the clustered output result into the DMA; execute a print status alert module that generates automated print status alerts for printers identified during clustering; output the automated print status alerts for display on the user interface. 16. The processor according to claim 15, wherein the processor is further configured to execute a print status module configured to:
retrieve print status bits from each device on scheduled intervals; pre-generate a list of print status ID type masks each comprising at least one status bit; identify all print status bits that match a print status ID type mask; display each identified print status bit in an order in which the print status bit was identified. 17. The processor according to claim 15, wherein the processor is further configured to obtain print state level information by:
determining whether or not to generate an indication of an error state, warning state or normal state; if the comparison indicates that the print status shows an error, displaying an error indication; and if the comparison does not indicate that the print status shows an error, comparing the print status to a warning state mask. 18. The processor according to claim 17, wherein the processor is further configured to:
if the comparison indicates that the print status shows a warning, display the warning indication on a user interface; if the print status does not match either of the warning state mask or the error state mask, display a “normal” status indication on the user interface. 19. The processor according to claim 15, wherein the print status data and print state data collected from the devices includes information describing one or more of:
device configuration; device settings; usage information; supply levels; and device alerts. 20. The processor according to claim 15, wherein the processor is further configured to execute a hierarchical block clustering module when performing automatic clustering of devices. 21. The processor according to claim 15, wherein the processor is further configured to generate a group or subgroups of devices automatically based on common print statuses and alert levels using the print status data collected at a given time, or over a certain period of time. | In order to automatically group or cluster printing devices in a fleet of such devices for servicing, print status data and print state data are received, which comprises alert information for each of the plurality of printing devices, from each of a plurality of printing devices in a fleet of devices. Devices with similar print statuses and alerts are automatically clustered, and the clustered output result displayed on a user interface of a device management application (DMA). The clustered output result is also integrated into the DMA, and a print status alert module generates automated print status alerts for printers identified during clustering. The automated print status alerts are then output for display on the user interface. 1. A method for automatically grouping print status alerts for a plurality of printing devices, comprising:
receiving print status data and print state data, which comprises alert information for each of the plurality of printing devices, from each of a plurality of printing devices in a fleet of devices; retrieving print status bits from each device; comparing the retrieved print status bits to one or more pre-generated error state masks to determine a state of each device; automatically clustering devices with similar print statuses and alerts; displaying the clustered output result on a user interface of a device management application (DMA); integrating the clustered output result into the DMA; executing a print status alert module that generates automated print status alerts for printers identified during clustering; outputting the automated print status alerts for display on the user interface. 2. The method according to claim 1, further comprising executing a print status module configured to:
retrieve print status bits from each device on scheduled intervals; pre-generate a list of print status ID type masks each comprising at least one status bit; identify all print status bits that match a print status ID type mask; display each identified print status bit in an order in which the print status bit was identified. 3. The method according to claim 1, further comprising obtaining print state level information by:
determining whether or not to generate an indication of an error state, warning state or normal state; if the comparison indicates that the print status shows an error, displaying an error indication; and if the comparison does not indicate that the print status shows an error, comparing the print status to a warning state mask. 4. The method according to claim 3, further comprising:
if the comparison indicates that the print status shows a warning, displaying the warning indication on a user interface; if the print status does not match either of the warning state mask or the error state mask, displaying a “normal” status indication on the user interface. 5. The method according to claim 1, wherein the print status data collected from the devices includes information describing one or more of:
device configuration and settings; usage information; supply levels; and alerts. 6. The method according to claim 1, wherein the automatic clustering of devices is performed via a hierarchical block clustering module. 7. The method according to claim 1, further comprising generating a group or subgroups of devices automatically based on common print statuses and alert levels using the print status data collected at a given time, or over a certain period of time. 8. A system that facilitates automatically grouping print status alerts for a plurality of printing devices, comprising:
a remote print status and print state information extraction module configured to receive print status data and print state data from each of a plurality of printing devices in a fleet of devices; an automatic grouping module that automatically clusters devices having similar print statuses and alerts; a user interface configured to display a clustered output result; and a processor configured to: retrieve print status bits from each device; compare the retrieved print status bits to one or more pre-generated error state masks to determine a state of each device; integrate the clustered output result into a device management application (DMA); execute a print status alert module that generates clustered print status alerts for printers identified during clustering; and output the clustered print status alerts for display on the user interface. 9. The system according to claim 8, wherein the processor is further configured to execute a print status module configured to:
retrieve print status bits from each device on scheduled intervals; pre-generate a list of print status ID type masks each comprising at least one status bit; identify all printing devices having print status bits that match a print status ID type mask; display each identified print status bit in an order in which the print status bit was identified. 10. The system according to claim 8, wherein the processor is further configured to obtain print state level information by:
determining whether or not to generate an indication of an error state, warning state or normal state; if the comparison indicates that the print status shows an error, displaying an error indication; and if the comparison does not indicate that the print status shows an error, comparing the print status to a warning state mask. 11. The system according to claim 10, wherein the processor is further configured to:
if the comparison indicates that the print status shows a warning, display the warning indication on the user interface; if the print status does not match either of the warning state mask or the error state mask, display a “normal” status indication on the user interface. 12. The system according to claim 8, wherein the print status data and print state data received from the devices includes information describing one or more of:
device configuration; device settings; usage information; supply levels; and device alerts. 13. The system according to claim 8, wherein the processor is further configured to execute a hierarchical block clustering module when clustering devices. 14. The system according to claim 8, wherein the processor is further configured to generate a group or subgroups of devices automatically based on common print statuses and alert levels using the print status data collected at a given time, or over a certain period of time. 15. A processor configured to automatically group print status alerts for a plurality of printing devices, the processor being configured to:
receive print status data and print state data, which comprises alert information for each of the plurality of printing devices, from each of a plurality of printing devices in a fleet of devices; retrieve print status bits from each device; compare the retrieved print status bits to one or more pre-generated error state masks to determine a state of each device; automatically cluster devices with similar print statuses and alerts; display the clustered output result on a user interface of the device management application (DMA); integrate the clustered output result into the DMA; execute a print status alert module that generates automated print status alerts for printers identified during clustering; output the automated print status alerts for display on the user interface. 16. The processor according to claim 15, wherein the processor is further configured to execute a print status module configured to:
retrieve print status bits from each device on scheduled intervals; pre-generate a list of print status ID type masks each comprising at least one status bit; identify all print status bits that match a print status ID type mask; display each identified print status bit in an order in which the print status bit was identified. 17. The processor according to claim 15, wherein the processor is further configured to obtain print state level information by:
determining whether or not to generate an indication of an error state, warning state or normal state; if the comparison indicates that the print status shows an error, displaying an error indication; and if the comparison does not indicate that the print status shows an error, comparing the print status to a warning state mask. 18. The processor according to claim 17, wherein the processor is further configured to:
if the comparison indicates that the print status shows a warning, display the warning indication on a user interface; if the print status does not match either of the warning state mask or the error state mask, display a “normal” status indication on the user interface. 19. The processor according to claim 15, wherein the print status data and print state data collected from the devices includes information describing one or more of:
device configuration; device settings; usage information; supply levels; and device alerts. 20. The processor according to claim 15, wherein the processor is further configured to execute a hierarchical block clustering module when performing automatic clustering of devices. 21. The processor according to claim 15, wherein the processor is further configured to generate a group or subgroups of devices automatically based on common print statuses and alert levels using the print status data collected at a given time, or over a certain period of time. | 2,600 |
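The state-determination and grouping steps recited in claims 1, 3, 4 and 7 of the record above (compare retrieved print status bits to pre-generated error and warning state masks, derive an error/warning/normal state, and automatically group devices) can be sketched as a bitmask check followed by a grouping pass. This is an illustrative sketch only, not the patented implementation: the mask values, bit meanings, and fleet data are invented, and the real system clusters on richer status-and-alert similarity (e.g. hierarchical block clustering) rather than on a single derived state.

```python
# Hypothetical mask layouts (invented for illustration).
ERROR_MASK = 0b1100    # assumed: paper-jam and toner-out bits
WARNING_MASK = 0b0011  # assumed: low-toner and tray-open bits

def device_state(status_bits):
    # Per the claims: error mask is checked first; the warning mask is
    # consulted only when no error bit matches; otherwise "normal".
    if status_bits & ERROR_MASK:
        return "error"
    if status_bits & WARNING_MASK:
        return "warning"
    return "normal"

def cluster_by_state(fleet):
    # fleet: {device_id: status_bits}; returns state -> sorted device ids,
    # a stand-in for the automatic grouping of similarly-stated devices.
    groups = {}
    for dev, bits in fleet.items():
        groups.setdefault(device_state(bits), []).append(dev)
    return {state: sorted(devs) for state, devs in groups.items()}

fleet = {"p1": 0b0100, "p2": 0b0001, "p3": 0b0000, "p4": 0b1000}
print(cluster_by_state(fleet))
# p1 and p4 match the error mask, p2 the warning mask, p3 neither.
```

Checking the error mask before the warning mask mirrors the claimed ordering, in which a warning indication is displayed only when the comparison does not indicate an error.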
10,545 | 10,545 | 16,055,673 | 2,645 | Certain features relate to systems and methods for optimizing the radio frequency characteristics of signals transmitted between a radio base station (RBS) and a distributed antenna system (DAS). A self-optimizing network (SON) entity can determine adjustments to radio frequency operations and management parameters at the RBS based on radio frequency parameters measured by a measurement and configuration module at the DAS. Adjustments to radio frequency operations and management parameters can include adjustments configured to compensate for signal latency caused by the DAS. Adjustments to radio frequency operations and management parameters can also include adjustments to signal gain due to noise rise caused by the DAS. The SON entity can also measure nominal receive power levels for the RBS for purposes of open loop power control. | 1. A method, comprising:
receiving measurements of radio frequency parameters of a distributed antenna system (DAS radio frequency parameters); determining adjustments to operations and management parameters of a radio base station (RBS) using the measurements of the DAS radio frequency parameters; and sending commands to the RBS for changing the operations and management parameters of the RBS using the determined adjustments. 2. The method of claim 1, wherein receiving the measurements of the DAS radio frequency parameters comprises receiving measurements of at least one of an uplink distributed antenna system (DAS) gain, a downlink DAS gain, and an uplink DAS noise power; and
wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an updated value corresponding to an uplink RBS gain. 3. The method of claim 2, wherein determining the updated value corresponding to the uplink RBS gain comprises calculating an uplink DAS noise rise, and summing the uplink DAS noise rise with a difference of the uplink DAS gain and the downlink DAS gain. 4. The method of claim 1, wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an adjustment to a nominal receive power level of the RBS. 5. The method of claim 1, further comprising:
receiving measurements of radio frequency parameters of an RBS (RBS radio frequency parameters); determining adjustments to operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters; and sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments. 6. The method of claim 1, wherein receiving the RBS radio frequency parameters comprises receiving a transmit power level of the RBS;
wherein determining the adjustments to the operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters comprises determining a radio frequency attenuation level of the DAS using the transmit power level of the RBS; and wherein sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments comprises sending a command to the DAS to change a radio frequency attenuation level of the DAS using the determined radio frequency attenuation level of the DAS. 7. A telecommunications system, comprising:
one or more remote units of a distributed antenna system (DAS); a head-end unit of the DAS, the head-end unit being configured to measure radio frequency parameters of the DAS (DAS radio frequency parameters), and to provide wireless communications to the one or more remote units of the DAS; a self-optimizing network (SON) entity communicatively coupled to the head-end unit and a radio base station (RBS); wherein the SON entity comprises processor circuitry coupled to memory circuitry; wherein the SON entity is configured to:
(a) receive measurements of the DAS radio frequency parameters;
(b) determine adjustments to the operations and management parameters of a radio base station (RBS) using the measurements of the DAS radio frequency parameters; and
(c) send commands to the RBS for changing operations and management parameters of the RBS using the determined adjustments. 8. The telecommunications system of claim 7, wherein the radio frequency parameters include at least one of an uplink DAS gain, a downlink DAS gain, and an uplink DAS noise power; and
wherein the adjustments to the operations and management parameters include an updated value corresponding to an uplink RBS gain. 9. The telecommunications system of claim 8, wherein the updated value corresponding to the uplink RBS gain is determined by calculating an uplink DAS noise rise, and summing the uplink DAS noise rise with a difference of the uplink DAS gain and the downlink DAS gain. 10. The telecommunications system of claim 7, wherein the adjustments to the operations and management parameters include an updated value for a nominal receive power level of the RBS. 11. The telecommunications system of claim 7, wherein the SON entity is further configured to receive measurements of radio frequency parameters of the RBS;
determine adjustments to operations and management parameters of the DAS using the measurements of the radio frequency parameters of the RBS; and send commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments. 12. The telecommunications system of claim 11, wherein receive the radio frequency parameters of the RBS comprises receive a transmit power level of the RBS;
wherein determine the adjustments to the operations and management parameters of the DAS using the measurements of the radio frequency parameters of the RBS comprises determine a radio frequency attenuation level of the DAS using the transmit power level of the RBS; and wherein send commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments comprises send a command to the DAS to change the radio frequency attenuation level of the DAS. 13. The telecommunications system of claim 7, further comprising a bus, where the bus communicatively couples the processing circuitry and the memory circuitry. 14. The telecommunications system of claim 7, wherein the SON entity is included in at least one of the head-end unit and the RBS. 15. A non-transitory computer readable medium storing a program causing a computer to perform a method, the method comprising:
receiving measurements of radio frequency parameters of a distributed antenna system (DAS radio frequency parameters); determining adjustments to operations and management parameters of a radio base station (RBS) using the measurements of the DAS radio frequency parameters; and sending commands to a radio base station (RBS) for changing the operations and management parameters of the RBS using the determined adjustments. 16. The non-transitory computer readable medium of claim 15, wherein receiving the measurements of the DAS radio frequency parameters comprises receiving measurements of at least one of an uplink distributed antenna system (DAS) gain, a downlink DAS gain, and an uplink DAS noise power; and
wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an updated value corresponding to an uplink RBS gain. 17. The non-transitory computer readable medium of claim 16, wherein determining the updated value corresponding to the uplink RBS gain comprises calculating an uplink DAS noise rise, and summing the uplink DAS noise rise with a difference of the uplink DAS gain and the downlink DAS gain. 18. The non-transitory computer readable medium of claim 15, wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an adjustment to a nominal receive power level of the RBS. 19. The non-transitory computer readable medium of claim 15, further comprising:
receiving measurements of radio frequency parameters of an RBS (RBS radio frequency parameters); determining adjustments to operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters; and sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments. 20. The non-transitory computer readable medium of claim 19, wherein receiving the RBS radio frequency parameters comprises receiving a transmit power level of the RBS;
wherein determining the adjustments to the operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters comprises determining a radio frequency attenuation level of the DAS using the transmit power level of the RBS; and wherein sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments comprises sending a command to the DAS to change a radio frequency attenuation level of the DAS using the determined radio frequency attenuation level of the DAS. | Certain features relate to systems and methods for optimizing the radio frequency characteristics of signals transmitted between a radio base station (RBS) and a distributed antenna system (DAS). A self-optimizing network (SON) entity can determine adjustments to radio frequency operations and management parameters at the RBS based on radio frequency parameters measured by a measurement and configuration module at the DAS. Adjustments to radio frequency operations and management parameters can include adjustments configured to compensate for signal latency caused by the DAS. Adjustments to radio frequency operations and management parameters can also include adjustments to signal gain due to noise rise caused by the DAS. The SON entity can also measure nominal receive power levels for the RBS for purposes of open loop power control.1. A method, comprising:
receiving measurements of radio frequency parameters of a distributed antenna system (DAS radio frequency parameters); determining adjustments to operations and management parameters of a radio base station (RBS) using the measurements of the DAS radio frequency parameters; and sending commands to a radio base station (RBS) for changing the operations and management parameters of the RBS using the determined adjustments. 2. The method of claim 1, wherein receiving the measurements of the DAS radio frequency parameters comprises receiving measurements of at least one of an uplink distributed antenna system (DAS) gain, a downlink DAS gain, and an uplink DAS noise power; and
wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an updated value corresponding to an uplink RBS gain. 3. The method of claim 2, wherein determining the updated value corresponding to the uplink RBS gain comprises calculating an uplink DAS noise rise, and summing the uplink DAS noise rise with a difference of the uplink DAS gain and the downlink DAS gain. 4. The method of claim 1, wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an adjustment to a nominal receive power level of the RBS. 5. The method of claim 1, further comprising:
receiving measurements of radio frequency parameters of an RBS (RBS radio frequency parameters); determining adjustments to operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters; and sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments. 6. The method of claim 1, wherein receiving the RBS radio frequency parameters comprises receiving a transmit power level of the RBS;
wherein determining the adjustments to the operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters comprises determining a radio frequency attenuation level of the DAS using the transmit power level of the RBS; and wherein sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments comprises sending a command to the DAS to change a radio frequency attenuation level of the DAS using the determined radio frequency attenuation level of the DAS. 7. A telecommunications system, comprising:
one or more remote units of a distributed antenna system (DAS); a head-end unit of the DAS, the head-end unit being configured to measure radio frequency parameters of the DAS (DAS radio frequency parameters), and to provide wireless communications to the one or more remote units of the DAS; a self-optimizing network (SON) entity communicatively coupled to the head-end unit and a radio base station (RBS); wherein the SON entity comprises processor circuitry coupled to memory circuitry; wherein the SON entity is configured to:
(a) receive measurements of the DAS radio frequency parameters;
(b) determine adjustments to the operations and management parameters of a radio base station (RBS) using the measurements of the DAS radio frequency parameters; and
(c) send commands to the RBS for changing operations and management parameters of the RBS using the determined adjustments. 8. The telecommunications system of claim 7, wherein the radio frequency parameters include at least one of an uplink DAS gain, a downlink DAS gain, and an uplink DAS noise power; and
wherein the adjustments to the operations and management parameters include an updated value corresponding to an uplink RBS gain. 9. The telecommunications system of claim 8, wherein the updated value corresponding to the uplink RBS gain is determined by calculating an uplink DAS noise rise, and summing the uplink DAS noise rise with a difference of the uplink DAS gain and the downlink DAS gain. 10. The telecommunications system of claim 7, wherein the adjustments to the operations and management parameters include an updated value for a nominal receive power level of the RBS. 11. The telecommunications system of claim 7, wherein the SON entity is further configured to receive measurements of radio frequency parameters of the RBS;
determine adjustments to operations and management parameters of the DAS using the measurements of the radio frequency parameters of the RBS; and send commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments. 12. The telecommunications system of claim 11, wherein receive the radio frequency parameters of the RBS comprises receive a transmit power level of the RBS;
wherein determine the adjustments to the operations and management parameters of the DAS using the measurements of the radio frequency parameters of the RBS comprises determine a radio frequency attenuation level of the DAS using the transmit power level of the RBS; and wherein send commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments comprises send a command to the DAS to change the radio frequency attenuation level of the DAS. 13. The telecommunications system of claim 7, further comprising a bus, where the bus communicatively couples the processing circuitry and the memory circuitry. 14. The telecommunications system of claim 7, wherein the SON entity is included in at least one of the head-end unit and the RBS. 15. A non-transitory computer readable medium storing a program causing a computer to perform a method, the method comprising:
receiving measurements of radio frequency parameters of a distributed antenna system (DAS radio frequency parameters); determining adjustments to operations and management parameters of a radio base station (RBS) using the measurements of the DAS radio frequency parameters; and sending commands to a radio base station (RBS) for changing the operations and management parameters of the RBS using the determined adjustments. 16. The non-transitory computer readable medium of claim 15, wherein receiving the measurements of the DAS radio frequency parameters comprises receiving measurements of at least one of an uplink distributed antenna system (DAS) gain, a downlink DAS gain, and an uplink DAS noise power; and
wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an updated value corresponding to an uplink RBS gain. 17. The non-transitory computer readable medium of claim 16, wherein determining the updated value corresponding to the uplink RBS gain comprises calculating an uplink DAS noise rise, and summing the uplink DAS noise rise with a difference of the uplink DAS gain and the downlink DAS gain. 18. The non-transitory computer readable medium of claim 15, wherein determining the adjustments to the operations and management parameters of the RBS comprises determining an adjustment to a nominal receive power level of the RBS. 19. The non-transitory computer readable medium of claim 15, further comprising:
receiving measurements of radio frequency parameters of an RBS (RBS radio frequency parameters); determining adjustments to operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters; and sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments. 20. The non-transitory computer readable medium of claim 19, wherein receiving the RBS radio frequency parameters comprises receiving a transmit power level of the RBS;
wherein determining the adjustments to the operations and management parameters of the DAS using the measurements of the RBS radio frequency parameters comprises determining a radio frequency attenuation level of the DAS using the transmit power level of the RBS; and wherein sending commands to the DAS for changing the operations and management parameters of the DAS using the determined adjustments comprises sending a command to the DAS to change a radio frequency attenuation level of the DAS using the determined radio frequency attenuation level of the DAS. | 2,600 |
10,546 | 10,546 | 15,710,063 | 2,621 | A reflective display tile can be used in connection with a static or video display. The tile includes a transparent substrate having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces, an electrostatic shutter array disposed at the front surface of said substrate, a reflective medium disposed at the rear surface of the substrate, and drive electronics disposed rearward of the reflective medium. The shutter array and drive electronics are configured so that the tile is tileable and can be abutted at any of its peripheral edges against identical tiles to form a display with substantially no perceived optical interface between adjacent tiles. Another embodiment includes an opaque substrate such as a circuit board. A further embodiment has a polymeric film coated on one side with a metal film and on another side with ink or blackening material. | 1-2. (canceled) 3. The reflective display tile of claim 14, wherein the drive electronics are multiplexed with conventional or thin film transistors. 4. The reflective display tile of claim 14, wherein the drive electronics are direct drive. 5-6. (canceled) 7. The reflective display tile of claim 14, wherein the shutters of the shutter array are arranged in a grid having rows and columns, adjacent rows each separated by a first distance and adjacent columns each separated by a second distance, the rows closest to the peripheral edge are each separated from the peripheral edge by half of the first distance, and the columns closest to the peripheral edge are each separated from the peripheral edge by half of the second distance. 8. The reflective display tile of claim 14, wherein the drive electronics connect with the shutter array through vias in the substrate. 9-13. (canceled) 14. A reflective display tile for use in connection with a static or video display comprising:
a substrate having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; drive electronics disposed at the front or rear surface of the substrate; a reflective medium separate and distinct from the substrate and disposed on and in front of both the drive electronics and the entire substrate; and an electrostatic shutter array disposed on the reflective medium, wherein the electrostatic shutter array is the only element of the reflective display tile in front of the reflective medium. 15. The reflective display tile of claim 14, wherein the shutter array and drive electronics are configured so that the tile can be abutted at any of its peripheral edges against identical tiles to form a display with substantially no perceived optical interface between adjacent tiles. 16. The reflective display tile of claim 14, wherein the substrate is a circuit board. 17. The reflective display tile of claim 14, wherein each shutter in the shutter array is a polymer rollout with ink or blackening material for contrast. 18. The reflective display tile of claim 14, wherein the reflective medium comprises a multi-color pattern. 19. The reflective display tile of claim 14, wherein the reflective medium is a retroreflector. 20. A large static or video display comprising:
a plurality of the reflective display tiles each according to claim 14 and arranged to form the display, wherein the spacing among the shutters of a single display tile is consistent and is substantially identical to the spacing among shutters across a boundary between abutted display tiles. 21-22. (canceled) 23. A reflective display tile for use in connection with a static or video display comprising:
a substrate having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; a conductive pattern on the front surface of the substrate; a reflective layer on and in front of the conductive pattern, wherein the reflective layer is disposed in front of the entire substrate; a polymeric film attached to the reflective layer, the polymeric film coated on one side with a conductive metal or transparent film and on another side with ink or blackening material, the polymeric film cut into flaps that form shutters; and drive electronics to excite each shutter, wherein the shutters are the most forward element and the reflective medium is the second most forward element of the reflective display tile. 24. The reflective display of claim 23, wherein the shutters and drive electronics are configured so that the tile can be abutted at any of its peripheral edges against identical tiles to form a display with substantially no perceived optical interface between adjacent tiles. 25. (canceled) 26. The reflective display tile of claim 23, wherein the substrate is a circuit board. 27. (canceled) 28. The reflective display tile of claim 23, wherein the drive electronics are multiplexed with conventional or thin film transistors. 29. The reflective display tile of claim 23, wherein the reflective layer includes a multi-color pattern. 30. The reflective display tile of claim 23, wherein the reflective layer is a retroreflector. 31. The reflective display tile of claim 23, wherein the drive electronics are direct drive. 32. 
The reflective display tile of claim 23, wherein the shutters of the shutter array are arranged in a grid having rows and columns, adjacent rows each separated by a first distance and adjacent columns each separated by a second distance, the rows closest to the peripheral edge are each separated from the peripheral edge by half of the first distance, and the columns closest to the peripheral edge are each separated from the peripheral edge by half of the second distance. 33. The reflective display tile of claim 23, wherein the drive electronics connect with the shutter array through vias in the substrate. 34. A large static or video display comprising:
a plurality of the reflective display tiles each according to claim 23 and arranged to form the display, wherein the spacing among the shutters of a single display tile is consistent and is substantially identical to the spacing among shutters across a boundary between abutted display tiles. 35. The static or video display of claim 34, further comprising a cover substrate that covers each tile or the entire area of the display. 36. The static or video display of claim 35, wherein the cover substrate is made of polymer or glass. 37. The reflective display tile of claim 14, wherein the reflective medium includes a TiO2 or BaSO4 coated surface or is comprised of ultra-white plastic. 38. The reflective display tile of claim 23, wherein the reflective layer includes a TiO2 or BaSO4 coated surface or is comprised of ultra-white plastic. 39. The static or video display of claim 20, further comprising a cover substrate that covers each tile or the entire area of the display. 40. The static or video display of claim 39, wherein the cover substrate is made of polymer or glass. 41. The large static or video display of claim 20, further comprising front surface lighting. 42. The large static or video display of claim 34, further comprising front surface lighting. 43. A reflective display tile for use in connection with a static or video display comprising:
a circuit board having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; drive electronics disposed at the front or rear surface of the circuit board; a reflective medium separate and distinct from the circuit board and disposed on and in front of both the front surface of the circuit board and the drive electronics; and an electrostatic shutter array disposed on the reflective medium. 44. The reflective display tile of claim 43, wherein the circuit board is an opaque circuit board. 45. The reflective display tile of claim 43, wherein at least 97% of light reflects off the reflective medium. 46. A reflective display tile for use in connection with a static or video display comprising:
a circuit board having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; drive electronics disposed at the front or rear surface of the circuit board; a reflective medium disposed on the front surface of the circuit board, wherein the reflective medium is not metallic; and an electrostatic shutter array disposed on the reflective medium, wherein the reflective display tile is not transmissive to light. 47. The reflective display tile of claim 46, wherein the circuit board is an opaque circuit board. 48. The reflective display tile of claim 46, wherein the reflective medium includes TiO2 or BaSO4. 49. The reflective display tile of claim 46, wherein the reflective medium is ultra-white plastic. | A reflective display tile can be used in connection with a static or video display. The tile includes a transparent substrate having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces, an electrostatic shutter array disposed at the front surface of said substrate, a reflective medium disposed at the rear surface of the substrate, and drive electronics disposed rearward of the reflective medium. The shutter array and drive electronics are configured so that the tile is tileable and can be abutted at any of its peripheral edges against identical tiles to form a display with substantially no perceived optical interface between adjacent tiles. Another embodiment includes an opaque substrate such as a circuit board. A further embodiment has a polymeric film coated on one side with a metal film and on another side with ink or blackening material.1-2. (canceled) 3. The reflective display tile of claim 14, wherein the drive electronics are multiplexed with conventional or thin film transistors. 4. The reflective display tile of claim 14, wherein the drive electronics are direct drive. 5-6. (canceled) 7. 
The reflective display tile of claim 14, wherein the shutters of the shutter array are arranged in a grid having rows and columns, adjacent rows each separated by a first distance and adjacent columns each separated by a second distance, the rows closest to the peripheral edge are each separated from the peripheral edge by half of the first distance, and the columns closest to the peripheral edge are each separated from the peripheral edge by half of the second distance. 8. The reflective display tile of claim 14, wherein the drive electronics connect with the shutter array through vias in the substrate. 9-13. (canceled) 14. A reflective display tile for use in connection with a static or video display comprising:
a substrate having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; drive electronics disposed at the front or rear surface of the substrate; a reflective medium separate and distinct from the substrate and disposed on and in front of both the drive electronics and the entire substrate; and an electrostatic shutter array disposed on the reflective medium, wherein the electrostatic shutter array is the only element of the reflective display tile in front of the reflective medium. 15. The reflective display tile of claim 14, wherein the shutter array and drive electronics are configured so that the tile can be abutted at any of its peripheral edges against identical tiles to form a display with substantially no perceived optical interface between adjacent tiles. 16. The reflective display tile of claim 14, wherein the substrate is a circuit board. 17. The reflective display tile of claim 14, wherein each shutter in the shutter array is a polymer rollout with ink or blackening material for contrast. 18. The reflective display tile of claim 14, wherein the reflective medium comprises a multi-color pattern. 19. The reflective display tile of claim 14, wherein the reflective medium is a retroreflector. 20. A large static or video display comprising:
a plurality of the reflective display tiles each according to claim 14 and arranged to form the display, wherein the spacing among the shutters of a single display tile is consistent and is substantially identical to the spacing among shutters across a boundary between abutted display tiles. 21-22. (canceled) 23. A reflective display tile for use in connection with a static or video display comprising:
a substrate having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; a conductive pattern on the front surface of the substrate; a reflective layer on and in front of the conductive pattern, wherein the reflective layer is disposed in front of the entire substrate; a polymeric film attached to the reflective layer, the polymeric film coated on one side with a conductive metal or transparent film and on another side with ink or blackening material, the polymeric film cut into flaps that form shutters; and drive electronics to excite each shutter, wherein the shutters are the most forward element and the reflective medium is the second most forward element of the reflective display tile. 24. The reflective display of claim 23, wherein the shutters and drive electronics are configured so that the tile can be abutted at any of its peripheral edges against identical tiles to form a display with substantially no perceived optical interface between adjacent tiles. 25. (canceled) 26. The reflective display tile of claim 23, wherein the substrate is a circuit board. 27. (canceled) 28. The reflective display tile of claim 23, wherein the drive electronics are multiplexed with conventional or thin film transistors. 29. The reflective display tile of claim 23, wherein the reflective layer includes a multi-color pattern. 30. The reflective display tile of claim 23, wherein the reflective layer is a retroreflector. 31. The reflective display tile of claim 23, wherein the drive electronics are direct drive. 32. 
The reflective display tile of claim 23, wherein the shutters of the shutter array are arranged in a grid having rows and columns, adjacent rows each separated by a first distance and adjacent columns each separated by a second distance, the rows closest to the peripheral edge are each separated from the peripheral edge by half of the first distance, and the columns closest to the peripheral edge are each separated from the peripheral edge by half of the second distance. 33. The reflective display tile of claim 23, wherein the drive electronics connect with the shutter array through vias in the substrate. 34. A large static or video display comprising:
a plurality of the reflective display tiles each according to claim 23 and arranged to form the display, wherein the spacing among the shutters of a single display tile is consistent and is substantially identical to the spacing among shutters across a boundary between abutted display tiles. 35. The static or video display of claim 34, further comprising a cover substrate that covers each tile or the entire area of the display. 36. The static or video display of claim 35, wherein the cover substrate is made of polymer or glass. 37. The reflective display tile of claim 14, wherein the reflective medium includes a TiO2 or BaSO4 coated surface or is comprised of ultra-white plastic. 38. The reflective display tile of claim 23, wherein the reflective layer includes a TiO2 or BaSO4 coated surface or is comprised of ultra-white plastic. 39. The static or video display of claim 20, further comprising a cover substrate that covers each tile or the entire area of the display. 40. The static or video display of claim 39, wherein the cover substrate is made of polymer or glass. 41. The large static or video display of claim 20, further comprising front surface lighting. 42. The large static or video display of claim 34, further comprising front surface lighting. 43. A reflective display tile for use in connection with a static or video display comprising:
a circuit board having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; drive electronics disposed at the front or rear surface of the circuit board; a reflective medium separate and distinct from the circuit board and disposed on and in front of both the front surface of the circuit board and the drive electronics; and an electrostatic shutter array disposed on the reflective medium. 44. The reflective display tile of claim 43, wherein the circuit board is an opaque circuit board. 45. The reflective display tile of claim 43, wherein at least 97% of light reflects off the reflective medium. 46. A reflective display tile for use in connection with a static or video display comprising:
a circuit board having a front surface, a rear surface, and a peripheral edge surrounding the front and rear surfaces; drive electronics disposed at the front or rear surface of the circuit board; a reflective medium disposed on the front surface of the circuit board, wherein the reflective medium is not metallic; and an electrostatic shutter array disposed on the reflective medium, wherein the reflective display tile is not transmissive to light. 47. The reflective display tile of claim 46, wherein the circuit board is an opaque circuit board. 48. The reflective display tile of claim 46, wherein the reflective medium includes TiO2 or BaSO4. 49. The reflective display tile of claim 46, wherein the reflective medium is ultra-white plastic. | 2,600 |
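Claim 32 of the reflective-display-tile record specifies a shutter grid whose adjacent rows/columns are separated by a fixed pitch, with the edge rows/columns sitting half a pitch from the peripheral edge; claim 34 then requires that shutter spacing stay consistent across a boundary between abutted tiles. A minimal sketch (hypothetical dimensions and function names, not from the patent) showing why the half-pitch edge margin makes the cross-boundary spacing equal the intra-tile pitch:

```python
def shutter_centers(tile_width, n_columns):
    """Column-center positions within one tile: full-pitch spacing
    inside the tile, half-pitch margin at each peripheral edge."""
    pitch = tile_width / n_columns
    return [pitch / 2 + i * pitch for i in range(n_columns)]

tile_width, n = 100.0, 8
pitch = tile_width / n

left_tile = shutter_centers(tile_width, n)
# An abutted tile has identical geometry, offset by one tile width.
right_tile = [c + tile_width for c in shutter_centers(tile_width, n)]

intra = left_tile[1] - left_tile[0]        # spacing inside one tile
cross = right_tile[0] - left_tile[-1]      # spacing across the seam
assert abs(intra - pitch) < 1e-9
assert abs(cross - pitch) < 1e-9           # half-pitch + half-pitch = pitch
```

The half-pitch margin on each side of the seam is exactly what keeps the shutter pattern visually continuous across tiled sub-displays.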
10,547 | 10,547 | 14,212,678 | 2,625 | Computing interface systems and methods are disclosed. Some implementations include a first accelerometer attached to a first fastening article that is capable of holding the first accelerometer in place on a portion of a thumb of a user. Some implementations may also include a second accelerometer attached to a second fastening article that is capable of holding the second accelerometer in place on a portion of a wrist of a user. Some implementations may additionally or alternatively include magnetometers and/or gyroscopes attached to the first and second fastening articles. Some implementations may also include a processing device configured to receive measurements from the accelerometers, magnetometers, and/or gyroscopes and identify, based on the measurements, symbols associated with motions of a user's hand and/or the orientation of the hand. Some implementations may allow a user to control a cursor in a three dimensional virtual space and interact with objects in that space. | 1. A method comprising:
receiving a first set of acceleration measurements from a first accelerometer that is attached to a thumb of a user; detecting, based at least in part on the first set of acceleration measurements, a first event corresponding to engagement of a working surface; during the first event, tracking motion of the first accelerometer, based at least in part on the first set of acceleration measurements; determining, based at least in part on the tracked motion of the first accelerometer during the first event, image data; and transmitting, storing, or displaying the image data. 2. The method of claim 1, in which the working surface corresponds to a physical surface. 3. The method of claim 1, comprising determining an orientation of the working surface. 4. The method of claim 3, in which determining the orientation of the working surface comprises:
determining a path through three dimensional space traversed by the first accelerometer during a working surface definition gesture; and fitting a plane to the path. 5. The method of claim 3, comprising determining a three dimensional position of at least one point on the working surface. 6. The method of claim 1, comprising:
detecting a working surface definition gesture; and determining an orientation of the working surface based at least in part on a portion of the first set of acceleration measurements corresponding to the working surface definition gesture. 7. The method of claim 6, comprising estimating an orientation of a gravity vector based on a portion of the first set of acceleration measurements corresponding to a time period at the beginning of the working surface definition gesture. 8. The method of claim 6, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the working surface definition gesture comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 9. The method of claim 1, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 10. The method of claim 9, comprising detecting a termination of the first event by detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in an orientation approximately orthogonal to the length of a forearm of the user. 11. The method of claim 1, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; receiving a first set of magnetic flux measurements from a first magnetometer that is attached to the thumb of the user; receiving a second set of magnetic flux measurements from a second magnetometer that is attached to the wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements and the first set of magnetic flux measurements and the second set of magnetic flux measurements, when the thumb of the user is placed in an orientation indicating engagement of the working surface. 12. The method of claim 1, in which detecting the first event comprises:
tracking the position of the first accelerometer; and detecting when a distance between the first accelerometer and the working surface is below a threshold. 13. The method of claim 1, comprising:
detecting, based at least in part on the first set of acceleration measurements, a tap of the thumb of the user against a tap target on a finger of the user; and configuring, based in part on the tap detected, a virtual writing utensil for editing an image based on the tracked motion of the first accelerometer during the first event. 14. The method of claim 1, in which determining the image data comprises:
receiving a first set of angular rate measurements from a first gyroscope that is attached to the thumb of the user; and compensating, based at least in part on the first set of angular rate measurements, for variations in the orientation of the first accelerometer with respect to the orientation of the working surface. 15. The method of claim 1, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; detecting, based at least in part on the second set of acceleration measurements, a sequence of taps against a sensor module housing the second accelerometer; and initiating a hand-writing mode upon detection of the sequence of taps against the sensor module. 16. The method of claim 15, in which initiating hand-writing mode comprises prompting the user to perform a working surface definition gesture. 17. The method of claim 1, in which the image data is encoded as text. 18. A system comprising:
a data processing apparatus; a data storage device storing instructions executable by the data processing apparatus that upon execution by the data processing apparatus cause the data processing apparatus to perform operations comprising:
receiving a first set of acceleration measurements from a first accelerometer that is attached to a thumb of a user;
detecting, based at least in part on the first set of acceleration measurements, a first event corresponding to engagement of a working surface;
during the first event, tracking motion of the first accelerometer, based at least in part on the first set of acceleration measurements;
determining, based at least in part on the tracked motion of the first accelerometer during the first event, image data; and
transmitting, storing, or displaying the image data. 19. The system of claim 18, in which the working surface corresponds to a physical surface. 20. The system of claim 18, in which the operations comprise:
determining an orientation of the working surface. 21. The system of claim 20, in which determining the orientation of the working surface comprises:
determining a path through three dimensional space traversed by the first accelerometer during a working surface definition gesture; and fitting a plane to the path. 22. The system of claim 20, in which the operations comprise:
determining a three dimensional position of at least one point on the working surface. 23. The system of claim 18, in which the operations comprise:
detecting a working surface definition gesture; and determining an orientation of the working surface based at least in part on a portion of the first set of acceleration measurements corresponding to the working surface definition gesture. 24. The system of claim 23, in which the operations comprise:
estimating an orientation of a gravity vector based on a portion of the first set of acceleration measurements corresponding to a time period at the beginning of the working surface definition gesture. 25. The system of claim 23, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the working surface definition gesture comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 26. The system of claim 18, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 27. The system of claim 26, in which the operations comprise:
detecting a termination of the first event by detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in an orientation approximately orthogonal to the length of a forearm of the user. 28. The system of claim 18, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; receiving a first set of magnetic flux measurements from a first magnetometer that is attached to the thumb of the user; receiving a second set of magnetic flux measurements from a second magnetometer that is attached to the wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements and the first set of magnetic flux measurements and the second set of magnetic flux measurements, when the thumb of the user is placed in an orientation indicating engagement of the working surface. 29. The system of claim 18, in which detecting the first event comprises:
tracking the position of the first accelerometer; and detecting when a distance between the first accelerometer and the working surface is below a threshold. 30. The system of claim 18, in which the operations comprise:
detecting, based at least in part on the first set of acceleration measurements, a tap of the thumb of the user against a tap target on a finger of the user; and configuring, based in part on the tap detected, a virtual writing utensil for editing an image based on the tracked motion of the first accelerometer during the first event. 31. The system of claim 18, in which determining the image data comprises:
receiving a first set of angular rate measurements from a first gyroscope that is attached to the thumb of the user; and compensating, based at least in part on the first set of angular rate measurements, for variations in the orientation of the first accelerometer with respect to the orientation of the working surface. 32. The system of claim 18, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; detecting, based at least in part on the second set of acceleration measurements, a sequence of taps against a sensor module housing the second accelerometer; and initiating a hand-writing mode upon detection of the sequence of taps against the sensor module. 33. The system of claim 32, in which initiating hand-writing mode comprises prompting the user to perform a working surface definition gesture. 34. The system of claim 18, in which the image data is encoded as text. | Computing interface systems and methods are disclosed. Some implementations include a first accelerometer attached to a first fastening article that is capable of holding the first accelerometer in place on a portion of a thumb of a user. Some implementations may also include a second accelerometer attached to a second fastening article that is capable of holding the second accelerometer in place on a portion of a wrist of a user. Some implementations may additionally or alternatively include magnetometers and/or gyroscopes attached to the first and second fastening articles. Some implementations may also include a processing device configured to receive measurements from the accelerometers, magnetometers, and/or gyroscopes and identify, based on the measurements, symbols associated with motions of a user's hand and/or the orientation of the hand. Some implementations may allow a user to control a cursor in a three dimensional virtual space and interact with objects in that space.1. A method comprising:
receiving a first set of acceleration measurements from a first accelerometer that is attached to a thumb of a user; detecting, based at least in part on the first set of acceleration measurements, a first event corresponding to engagement of a working surface; during the first event, tracking motion of the first accelerometer, based at least in part on the first set of acceleration measurements; determining, based at least in part on the tracked motion of the first accelerometer during the first event, image data; and transmitting, storing, or displaying the image data. 2. The method of claim 1, in which the working surface corresponds to a physical surface. 3. The method of claim 1, comprising determining an orientation of the working surface. 4. The method of claim 3, in which determining the orientation of the working surface comprises:
determining a path through three dimensional space traversed by the first accelerometer during a working surface definition gesture; and fitting a plane to the path. 5. The method of claim 3, comprising determining a three dimensional position of at least one point on the working surface. 6. The method of claim 1, comprising:
detecting a working surface definition gesture; and determining an orientation of the working surface based at least in part on a portion of the first set of acceleration measurements corresponding to the working surface definition gesture. 7. The method of claim 6, comprising estimating an orientation of a gravity vector based on a portion of the first set of acceleration measurements corresponding to a time period at the beginning of the working surface definition gesture. 8. The method of claim 6, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the working surface definition gesture comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 9. The method of claim 1, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 10. The method of claim 9, comprising detecting a termination of the first event by detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in an orientation approximately orthogonal to the length of a forearm of the user. 11. The method of claim 1, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; receiving a first set of magnetic flux measurements from a first magnetometer that is attached to the thumb of the user; receiving a second set of magnetic flux measurements from a second magnetometer that is attached to the wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements and the first set of magnetic flux measurements and the second set of magnetic flux measurements, when the thumb of the user is placed in an orientation indicating engagement of the working surface. 12. The method of claim 1, in which detecting the first event comprises:
tracking the position of the first accelerometer; and detecting when a distance between the first accelerometer and the working surface is below a threshold. 13. The method of claim 1, comprising:
detecting, based at least in part on the first set of acceleration measurements, a tap of the thumb of the user against a tap target on a finger of the user; and configuring, based in part on the tap detected, a virtual writing utensil for editing an image based on the tracked motion of the first accelerometer during the first event. 14. The method of claim 1, in which determining the image data comprises:
receiving a first set of angular rate measurements from a first gyroscope that is attached to the thumb of the user; and compensating, based at least in part on the first set of angular rate measurements, for variations in the orientation of the first accelerometer with respect to the orientation of the working surface. 15. The method of claim 1, comprising:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; detecting, based at least in part on the second set of acceleration measurements, a sequence of taps against a sensor module housing the second accelerometer; and initiating a hand-writing mode upon detection of the sequence of taps against the sensor module. 16. The method of claim 15, in which initiating hand-writing mode comprises prompting the user to perform a working surface definition gesture. 17. The method of claim 1, in which the image data is encoded as text. 18. A system comprising:
a data processing apparatus; a data storage device storing instructions executable by the data processing apparatus that upon execution by the data processing apparatus cause the data processing apparatus to perform operations comprising:
receiving a first set of acceleration measurements from a first accelerometer that is attached to a thumb of a user;
detecting, based at least in part on the first set of acceleration measurements, a first event corresponding to engagement of a working surface;
during the first event, tracking motion of the first accelerometer, based at least in part on the first set of acceleration measurements;
determining, based at least in part on the tracked motion of the first accelerometer during the first event, image data; and
transmitting, storing, or displaying the image data. 19. The system of claim 18, in which the working surface corresponds to a physical surface. 20. The system of claim 18, in which the operations comprise:
determining an orientation of the working surface. 21. The system of claim 20, in which determining the orientation of the working surface comprises:
determining a path through three dimensional space traversed by the first accelerometer during a working surface definition gesture; and fitting a plane to the path. 22. The system of claim 20, in which the operations comprise:
determining a three dimensional position of at least one point on the working surface. 23. The system of claim 18, in which the operations comprise:
detecting a working surface definition gesture; and determining an orientation of the working surface based at least in part on a portion of the first set of acceleration measurements corresponding to the working surface definition gesture. 24. The system of claim 23, in which the operations comprise:
estimating an orientation of a gravity vector based on a portion of the first set of acceleration measurements corresponding to a time period at the beginning of the working surface definition gesture. 25. The system of claim 23, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the working surface definition gesture comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 26. The system of claim 18, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in contact with the medial segment of an index finger of the user. 27. The system of claim 26, in which the operations comprise:
detecting a termination of the first event by detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements, when the thumb of the user is placed in an orientation approximately orthogonal to the length of a forearm of the user. 28. The system of claim 18, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; receiving a first set of magnetic flux measurements from a first magnetometer that is attached to the thumb of the user; receiving a second set of magnetic flux measurements from a second magnetometer that is attached to the wrist of the user; and wherein detecting the first event comprises detecting, based at least in part on the first set of acceleration measurements and the second set of acceleration measurements and the first set of magnetic flux measurements and the second set of magnetic flux measurements, when the thumb of the user is placed in an orientation indicating engagement of the working surface. 29. The system of claim 18, in which detecting the first event comprises:
tracking the position of the first accelerometer; and detecting when a distance between the first accelerometer and the working surface is below a threshold. 30. The system of claim 18, in which the operations comprise:
detecting, based at least in part on the first set of acceleration measurements, a tap of the thumb of the user against a tap target on a finger of the user; and configuring, based in part on the tap detected, a virtual writing utensil for editing an image based on the tracked motion of the first accelerometer during the first event. 31. The system of claim 18, in which determining the image data comprises:
receiving a first set of angular rate measurements from a first gyroscope that is attached to the thumb of the user; and compensating, based at least in part on the first set of angular rate measurements, for variations in the orientation of the first accelerometer with respect to the orientation of the working surface. 32. The system of claim 18, in which the operations comprise:
receiving a second set of acceleration measurements from a second accelerometer that is attached to a wrist of the user; detecting, based at least in part on the second set of acceleration measurements, a sequence of taps against a sensor module housing the second accelerometer; and initiating a hand-writing mode upon detection of the sequence of taps against the sensor module. 33. The system of claim 32, in which initiating hand-writing mode comprises prompting the user to perform a working surface definition gesture. 34. The system of claim 18, in which the image data is encoded as text. | 2,600 |
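Claims 4 and 21 of the accelerometer-interface record determine the working surface's orientation by "fitting a plane to the path" traversed during the surface-definition gesture. The patent does not specify a fitting method; one simple illustrative approach (an assumption of this sketch, not the claimed algorithm) estimates the plane normal by averaging the cross products of successive displacement vectors along the sampled path:

```python
def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_normal(path):
    """Estimate the unit normal of the plane containing a 3-D path by
    averaging cross products of successive displacement vectors.
    Assumes the path is not a straight line."""
    n = [0.0, 0.0, 0.0]
    for i in range(len(path) - 2):
        v1 = _sub(path[i + 1], path[i])
        v2 = _sub(path[i + 2], path[i + 1])
        c = _cross(v1, v2)
        for k in range(3):
            n[k] += c[k]
    mag = sum(x * x for x in n) ** 0.5
    return tuple(x / mag for x in n)

# A square loop traced in the z = 1 plane yields the +z normal.
square = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1), (0, 0, 1)]
assert plane_normal(square) == (0.0, 0.0, 1.0)
```

A production implementation would more likely use a least-squares fit (e.g. SVD of the mean-centered points), which is robust to sensor noise; the averaged-cross-product version is shown only because it needs no linear-algebra library.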
10,548 | 10,548 | 15,863,398 | 2,689 | An apparatus (100) for operation in a tubular channel (199), the apparatus comprising a first part and a second part connected to the first part, wherein the second part comprises a first electronic device adapted to generate a data signal and a first communications device for wirelessly transmitting the generated data signal via a wireless communications channel, wherein the first part comprises a second communications device for wirelessly receiving the transmitted data signal via said wireless communications channel. | 1. A downhole apparatus configured to move through a well in rock for operation in a drilled bore, the downhole apparatus configured to be installed temporarily or permanently in the drilled bore, the apparatus comprising a first part and a second part connected to the first part, wherein the second part comprises a first electronic device configured to generate a data signal and a first communications device for wirelessly transmitting the generated data signal via a wireless communications channel, wherein the first part comprises a second communications device for wirelessly receiving the transmitted data signal via said wireless communications channel. 2. The apparatus according to claim 1, wherein the data signal is a sensor signal, and wherein the first electronic device is a sensor for generating a sensor signal indicative of a measured property. 3. The apparatus according to claim 1, wherein the first part further comprises a second electronic device configured to process the received data signal. 4. The apparatus according to claim 3, wherein the second electronic device is a control unit for generating a control signal for controlling a controllable function of the apparatus. 5. The apparatus according to claim 4, wherein the controllable function includes a relative movement of the second part relative to the first part. 6. 
The apparatus according to claim 4, wherein the controllable function is a controllable function of the second part, wherein the second communications device is further configured to wirelessly transmit the control signal, wherein the first communications device is further configured to receive the transmitted control signal, and wherein the second part comprises a control unit for controlling the controllable function of the second part. 7. The apparatus according to claim 1, wherein the first and second parts include respective metallic housings and wherein the first and second communications devices are arranged inside the respective metallic housings. 8. The apparatus according to claim 1, wherein the first and second communications devices are configured to communicate with each other via a direct radio-frequency communications link or a communications link only including one or more relay communications devices comprised in the apparatus. 9. The apparatus according to claim 1, wherein the first and second communications devices are configured to communicate with each other via a short-range radio-frequency communications channel. 10. The apparatus according to claim 1, wherein the first and second communications devices are configured to communicate with each other via a radio-frequency communications channel using a protocol according to the IEEE 802.11 or IEEE 802.15 standard. 11. The apparatus according to claim 1, wherein the second part is movably connected to the first part. 12.-14. (canceled) 15. The apparatus according to claim 1, comprising two gripping means fluidly connected via a pump;
wherein a first of the two gripping means comprises a fluid; wherein the first gripping means are attached to the first part, and the second gripping means are attached to the second part; wherein the pump (400) is configured to inflate a second one of the gripping means by pumping the fluid from the first of the two gripping means to the second of the two gripping means; and wherein the gripping means comprise a flexible member contained in a woven member, wherein the flexible member provides fluid-tightness and the woven member provides the shape of the gripping means. 16. The apparatus according to claim 15, wherein the first part comprises a reservoir comprising a fluid and sealed from a pressure chamber comprising a fluid and a piston dividing the pressure chamber into a first and a second piston pressure chamber fluidly coupled via a pump;
wherein the second part is attached to the first part via a hollow tubular member extending from the reservoir through the pressure chamber; and wherein the hollow tubular member is attached to the piston such that translation of the piston via a pressure difference between the first and a second piston pressure chamber established by the pump results in translation of the hollow tubular member and the second part. 17. The apparatus according to claim 15, wherein inflation of the second gripping means attached to the second part is performed by pumping the fluid from the first gripping means via the reservoir and the hollow tubular member to the second gripping means. 18. The apparatus according to claim 15, wherein the apparatus further comprises a pressure relief valve fluidly coupled to the pump to determine a maximal pressure pumped into the gripping means. 19. The apparatus according to claim 1, wherein the first part comprises a reservoir comprising a fluid and sealed from a pressure chamber comprising a fluid and a piston dividing the pressure chamber into a first and a second piston pressure chamber fluidly coupled via a pump;
wherein the second part is attached to the first part via a hollow tubular member extending from the reservoir through the pressure chamber; and wherein the hollow tubular member is attached to the piston such that translation of the piston via a pressure difference between the first and a second piston pressure chamber established by the pump results in translation of the hollow tubular member and the second part. 20. The apparatus according to claim 19, further comprising a first gripping means attached to the first part and a second gripping means attached to the second part and wherein the two gripping means are fluidly coupled via the pump;
wherein a first of the two gripping means comprises a fluid;
wherein the pump is configured to inflate a second of the gripping means by pumping the fluid from the first of the two gripping means to the second of the two gripping means; and
wherein the gripping means comprises a flexible member contained in a woven member, wherein the flexible member provides fluid-tightness and the woven member provides the shape of the gripping means. 21.-38. (canceled) 39. The apparatus according to claim 20, wherein inflation of the second gripping means attached to the second part is performable by pumping the fluid from the first gripping means via the reservoir and the hollow tubular member to the second gripping means. 40. The apparatus according to claim 20, wherein the apparatus further comprises a pressure relief valve fluidly coupled to the pump to determine a maximal pressure pumped into the gripping means. 41. The apparatus according to claim 1, wherein the apparatus further comprises at least one sensor communicatively coupled via the wireless communications channel to a control unit contained in the first part, and wherein the control unit is configured to generate a control signal for controlling the pump based on data from the at least one sensor. 42. The apparatus according to claim 41, wherein the apparatus further comprises an acoustic modem communicatively coupled to the control unit such that the control unit is configured to transmit data received from the at least one sensor to a receiver at an entrance of the drilled bore. 43. The apparatus according to claim 1, further comprising at least one directional means comprising a lever attached at one end to an outer side of the apparatus and activated by an actuator attached at one end to the outer side of the apparatus and at the other end to the lever. 44. The apparatus according to claim 1, comprising a three-way valve, buoyancy means, pressure means, a vent line, at least one sensor and computation means;
wherein the three-way valve is configured to control the fluid flow between the pressure means and the buoyancy means and between the buoyancy means and the vent line; wherein the computation means are communicatively coupled to the at least one sensor and configured to generate a control signal based on data received from the at least one sensor; and wherein the pressure means are fluidly coupled to the buoyancy means via the three-way valve such that a fluid may flow from the pressure means to the buoyancy means or from the buoyancy means to the surroundings of the device via the vent line; and wherein the computation means are communicatively coupled to the three-way valve and control said three-way valve via the control signal; wherein the computation means are communicatively coupled to at least one of the three-way valve and the at least one sensor via the wireless communications channel. 45. The apparatus according to claim 44, wherein the buoyancy means are contained in a first part of the apparatus; the pressure means are contained in a second part of the apparatus; another buoyancy means are contained in a third part of the apparatus; and wherein the first part and the third part are connected via said second part and wherein the second part comprises two hollow pieces joined via a ball joint. 46. The apparatus according to claim 45, wherein a first of the two hollow pieces comprises a spring and a bar, and wherein one end of the bar is connected to the ball joint and another end of the bar is connected to the spring, which spring is configured to keep the two hollow pieces of the second part in a straight line. 47. 
The apparatus according to claim 45, wherein the apparatus further comprises a plurality of flexible arms having one end connected to the circumference of the device and another end extending radially out from the apparatus at a radius larger than the radius of the apparatus and a maximal outer diameter determined by a texture stretched between the flexible arms. 48. The apparatus according to claim 47, wherein the apparatus is configured to contract the other end of the plurality of flexible arms to a radius of approximately the radius of the apparatus when receiving a control signal from the computation means. 49. The apparatus according to claim 45, further comprising a plurality of nozzles fluidly coupled to the pressure means such that a pressure fluid from the pressure means may be ejected via at least one of the plurality of nozzles. 50. The apparatus according to claim 49, wherein the computation means are configured to control the fluid coupling between the pressure means and the plurality of nozzles via the control signal. 51. The apparatus according to claim 45, further comprising communication means communicatively coupled to an external communication unit such as to transmit data from the at least one sensor to the external communication unit. 52. The apparatus according to claim 51, wherein the communication means are further configured to receive the control signal from the external communication unit such as to control the device from the external communication unit. 53. The apparatus according to claim 1, wherein the apparatus is a movable downhole apparatus configured to move or be moved along the drilled bore. 54. The apparatus according to claim 53, wherein the apparatus is a tractor configured to move along the drilled bore. 55. The apparatus according to claim 1, wherein the first part is a movable part and comprises the first electronic device configured to generate a data signal. 56. 
The apparatus according to claim 1, wherein the apparatus further comprises inflatable and deflatable gripping means. 57. The downhole apparatus according to claim 1, wherein the generated data signal is continuously or cyclically transmitted to the first part. 58. A method for communicating data between a first part and a second part of an apparatus operating in the drilled bore, the second part of the apparatus being connected to the first part of the apparatus, the method comprising:
generating a data signal by a first electronic device comprised in the second part; wirelessly transmitting the generated data signal from a first communications device comprised in the second part via a wireless communications channel to a second communications device comprised in the first part. | An apparatus ( 100 ) for operation in a tubular channel ( 199 ), the apparatus comprising a first part and a second part connected to the first part, wherein the second part comprises a first electronic device adapted to generate a data signal and a first communications device for wirelessly transmitting the generated data signal via a wireless communications channel, wherein the first part comprises a second communications device for wirelessly receiving the transmitted data signal via said wireless communications channel. 1. A downhole apparatus configured to move through a well in rock for operation in a drilled bore, the downhole apparatus configured to be installed temporarily or permanently in the drilled bore, the apparatus comprising a first part and a second part connected to the first part, wherein the second part comprises a first electronic device configured to generate a data signal and a first communications device for wirelessly transmitting the generated data signal via a wireless communications channel, wherein the first part comprises a second communications device for wirelessly receiving the transmitted data signal via said wireless communications channel. 2. The apparatus according to claim 1, wherein the data signal is a sensor signal, and wherein the first electronic device is a sensor for generating a sensor signal indicative of a measured property. 3. The apparatus according to claim 1, wherein the first part further comprises a second electronic device configured to process the received data signal. 4. 
The apparatus according to claim 3, wherein the second electronic device is a control unit for generating a control signal for controlling a controllable function of the apparatus. 5. The apparatus according to claim 4, wherein the controllable function includes a relative movement of the second part relative to the first part. 6. The apparatus according to claim 4, wherein the controllable function is a controllable function of the second part, wherein the second communications device is further configured to wirelessly transmit the control signal, wherein the first communications device is further configured to receive the transmitted control signal, and wherein the second part comprises a control unit for controlling the controllable function of the second part. 7. The apparatus according to claim 1, wherein the first and second parts include respective metallic housings and wherein the first and second communications devices are arranged inside the respective metallic housings. 8. The apparatus according to claim 1, wherein the first and second communications devices are configured to communicate with each other via a direct radio-frequency communications link or a communications link only including one or more relay communications devices comprised in the apparatus. 9. The apparatus according to claim 1, wherein the first and second communications devices are configured to communicate with each other via a short-range radio-frequency communications channel. 10. The apparatus according to claim 1, wherein the first and second communications devices are configured to communicate with each other via a radio-frequency communications channel using a protocol according to the IEEE 802.11 or IEEE 802.15 standard. 11. The apparatus according to claim 1, wherein the second part is movably connected to the first part. 12.-14. (canceled) 15. The apparatus according to claim 1, comprising two gripping means fluidly connected via a pump;
wherein a first of the two gripping means comprises a fluid; wherein the first gripping means are attached to the first part, and the second gripping means are attached to the second part; wherein the pump (400) is configured to inflate a second one of the gripping means by pumping the fluid from the first of the two gripping means to the second of the two gripping means; and wherein the gripping means comprise a flexible member contained in a woven member, wherein the flexible member provides fluid-tightness and the woven member provides the shape of the gripping means. 16. The apparatus according to claim 15, wherein the first part comprises a reservoir comprising a fluid and sealed from a pressure chamber comprising a fluid and a piston dividing the pressure chamber into a first and a second piston pressure chamber fluidly coupled via a pump;
wherein the second part is attached to the first part via a hollow tubular member extending from the reservoir through the pressure chamber; and wherein the hollow tubular member is attached to the piston such that translation of the piston via a pressure difference between the first and a second piston pressure chamber established by the pump results in translation of the hollow tubular member and the second part. 17. The apparatus according to claim 15, wherein inflation of the second gripping means attached to the second part is performed by pumping the fluid from the first gripping means via the reservoir and the hollow tubular member to the second gripping means. 18. The apparatus according to claim 15, wherein the apparatus further comprises a pressure relief valve fluidly coupled to the pump to determine a maximal pressure pumped into the gripping means. 19. The apparatus according to claim 1, wherein the first part comprises a reservoir comprising a fluid and sealed from a pressure chamber comprising a fluid and a piston dividing the pressure chamber into a first and a second piston pressure chamber fluidly coupled via a pump;
wherein the second part is attached to the first part via a hollow tubular member extending from the reservoir through the pressure chamber; and wherein the hollow tubular member is attached to the piston such that translation of the piston via a pressure difference between the first and a second piston pressure chamber established by the pump results in translation of the hollow tubular member and the second part. 20. The apparatus according to claim 19, further comprising a first gripping means attached to the first part and a second gripping means attached to the second part, and wherein the two gripping means are fluidly coupled via the pump;
wherein a first of the two gripping means comprises a fluid;
wherein the pump is configured to inflate a second of the gripping means by pumping the fluid from the first of the two gripping means to the second of the two gripping means; and
wherein the gripping means comprises a flexible member contained in a woven member, wherein the flexible member provides fluid-tightness and the woven member provides the shape of the gripping means. 21.-38. (canceled) 39. The apparatus according to claim 20, wherein inflation of the second gripping means attached to the second part is performable by pumping the fluid from the first gripping means via the reservoir and the hollow tubular member to the second gripping means. 40. The apparatus according to claim 20, wherein the apparatus further comprises a pressure relief valve fluidly coupled to the pump to determine a maximal pressure pumped into the gripping means. 41. The apparatus according to claim 1, wherein the apparatus further comprises at least one sensor communicatively coupled via the wireless communications channel to a control unit contained in the first part, and wherein the control unit is configured to generate a control signal for controlling the pump based on data from the at least one sensor. 42. The apparatus according to claim 41, wherein the apparatus further comprises an acoustic modem communicatively coupled to the control unit such that the control unit is configured to transmit data received from the at least one sensor to a receiver at an entrance of the drilled bore. 43. The apparatus according to claim 1, further comprising at least one directional means comprising a lever attached at one end to an outer side of the apparatus and activated by an actuator attached at one end to the outer side of the apparatus and at the other end to the lever. 44. The apparatus according to claim 1, comprising a three-way valve, buoyancy means, pressure means, a vent line, at least one sensor and computation means;
wherein the three-way valve is configured to control the fluid flow between the pressure means and the buoyancy means and between the buoyancy means and the vent line; wherein the computation means are communicatively coupled to the at least one sensor and configured to generate a control signal based on data received from the at least one sensor; and wherein the pressure means are fluidly coupled to the buoyancy means via the three-way valve such that a fluid may flow from the pressure means to the buoyancy means or from the buoyancy means to the surroundings of the device via the vent line; and wherein the computation means are communicatively coupled to the three-way valve and control said three-way valve via the control signal; wherein the computation means are communicatively coupled to at least one of the three-way valve and the at least one sensor via the wireless communications channel. 45. The apparatus according to claim 44, wherein the buoyancy means are contained in a first part of the apparatus; the pressure means are contained in a second part of the apparatus; another buoyancy means are contained in a third part of the apparatus; and wherein the first part and the third part are connected via said second part and wherein the second part comprises two hollow pieces joined via a ball joint. 46. The apparatus according to claim 45, wherein a first of the two hollow pieces comprises a spring and a bar, and wherein one end of the bar is connected to the ball joint and another end of the bar is connected to the spring, which spring is configured to keep the two hollow pieces of the second part in a straight line. 47. 
The apparatus according to claim 45, wherein the apparatus further comprises a plurality of flexible arms having one end connected to the circumference of the device and another end extending radially out from the apparatus at a radius larger than the radius of the apparatus and a maximal outer diameter determined by a texture stretched between the flexible arms. 48. The apparatus according to claim 47, wherein the apparatus is configured to contract the other end of the plurality of flexible arms to a radius of approximately the radius of the apparatus when receiving a control signal from the computation means. 49. The apparatus according to claim 45, further comprising a plurality of nozzles fluidly coupled to the pressure means such that a pressure fluid from the pressure means may be ejected via at least one of the plurality of nozzles. 50. The apparatus according to claim 49, wherein the computation means are configured to control the fluid coupling between the pressure means and the plurality of nozzles via the control signal. 51. The apparatus according to claim 45, further comprising communication means communicatively coupled to an external communication unit such as to transmit data from the at least one sensor to the external communication unit. 52. The apparatus according to claim 51, wherein the communication means are further configured to receive the control signal from the external communication unit such as to control the device from the external communication unit. 53. The apparatus according to claim 1, wherein the apparatus is a movable downhole apparatus configured to move or be moved along the drilled bore. 54. The apparatus according to claim 53, wherein the apparatus is a tractor configured to move along the drilled bore. 55. The apparatus according to claim 1, wherein the first part is a movable part and comprises the first electronic device configured to generate a data signal. 56. 
The apparatus according to claim 1, wherein the apparatus further comprises inflatable and deflatable gripping means. 57. The downhole apparatus according to claim 1, wherein the generated data signal is continuously or cyclically transmitted to the first part. 58. A method for communicating data between a first part and a second part of an apparatus operating in the drilled bore, the second part of the apparatus being connected to the first part of the apparatus, the method comprising:
generating a data signal by a first electronic device comprised in the second part; wirelessly transmitting the generated data signal from a first communications device comprised in the second part via a wireless communications channel to a second communications device comprised in the first part. | 2,600 |
10,549 | 10,549 | 16,263,371 | 2,689 | Systems and methods for controlling a vehicle based on driver engagement are disclosed. In one embodiment, a method of controlling a vehicle includes determining, using driver data from one or more driver sensors, a driver state estimation of a driver of the vehicle, determining, using environment data from one or more environmental sensors, one or more environment anomalies within an environment of the vehicle, and determining an anomaly category for at least one of the one or more environment anomalies. The method further includes, based on the driver state estimation and one or more anomaly categories, selecting a failure mode, based on the failure mode, selecting at least one failsafe action, and determining an operation of the vehicle in accordance with the failure mode. | 1. A method of controlling a vehicle, the method comprising:
determining, using driver data from one or more driver sensors, a driver state estimation of a driver of the vehicle; determining, using environment data from one or more environmental sensors, one or more environment anomalies within an environment of the vehicle; determining an anomaly category for at least one of the one or more environment anomalies; based on the driver state estimation and one or more anomaly categories, selecting a failure mode; based on the failure mode, selecting at least one failsafe action; and determining an operation of the vehicle in accordance with the failure mode. 2. The method of claim 1, wherein determining the driver state estimation further comprises:
determining driver behavior information of the driver corresponding to one or more driver characteristics of the driver; and obtaining driver physiological information of the driver. 3. The method of claim 2, wherein the one or more driver characteristics of the driver are determined by feature extraction and one or more classification algorithms. 4. The method of claim 3, wherein the one or more driver characteristics of the driver comprise at least one of gaze location, head pose, hand position, foot position, and body pose. 5. The method of claim 4, wherein determining one or more characteristics of the driver further comprises:
generating a parametric skeleton figure of the driver; and determining one or more characteristics of the driver comprises determining a body posture from the parametric skeleton figure of the driver. 6. The method of claim 2, wherein:
the driver state estimation is determined by a driver state estimation module that combines the driver behavior information and the driver physiological information; and the driver state estimation module uses a Hidden Markov Model to represent the driver state estimation in real time. 7. The method of claim 6, wherein the driver state estimation module determines an initial observed driver state estimation based on the driver behavior information and determines a transition probability to determine a transfer driver state estimation using a combination of the driver behavior information and the driver physiological information. 8. The method of claim 1, wherein determining the one or more environment anomalies comprises detecting one or more roadway features and one or more surrounding objects, using the one or more environmental sensors. 9. The method of claim 8, wherein:
the one or more environment anomalies are detected by comparing the one or more detected roadway features and the one or more detected surrounding objects to one or more predetermined roadway features and one or more predetermined surrounding objects stored in a localized map database, respectively; and determining the one or more environment anomalies comprises determining a confidence value based on the comparison between the one or more detected roadway features and the one or more detected surrounding objects to the one or more predetermined roadway features and the one or more predetermined surrounding objects, respectively. 10. The method of claim 9, further comprising setting an anomaly flag when the confidence value is below a confidence threshold level. 11. The method of claim 9, wherein the anomaly category is based on the confidence value. 12. (canceled) 13. The method of claim 1, wherein controlling operation of the vehicle comprises:
receiving one or more vehicle threat handling characteristics from a vehicle threat handling characteristics database; receiving one or more driver threat avoidance characteristics from a vehicle threat avoidance characteristics database; and determining whether the driver is capable of controlling the vehicle based on the one or more vehicle threat handling characteristics, the one or more driver threat avoidance characteristics, and the failure mode, wherein: when the driver is capable of controlling the vehicle, driver control of the vehicle is enabled; and when the driver is not capable of controlling the vehicle, computer control of the vehicle is enabled and the vehicle is autonomously controlled in accordance with the at least one failsafe action. 14. A system for controlling a vehicle, the system comprising:
one or more driver sensors for producing driver data; one or more environmental sensors for producing environment data; one or more processors; and non-transitory memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to:
receive the driver data from the one or more driver sensors;
receive the environmental data from the one or more environmental sensors;
determine, using the driver data, a driver state estimation of a driver of the vehicle;
determine, using the environment data, one or more environment anomalies within an environment of the vehicle;
determine an anomaly category for at least one of the one or more environment anomalies;
based on the driver state estimation and one or more anomaly categories, select a failure mode;
based on the failure mode, select at least one failsafe action; and
determine an operation of the vehicle in accordance with the failure mode. 15. The system of claim 14, wherein the driver state estimation is determined by:
determining driver behavior information of the driver corresponding to one or more driver characteristics of the driver; and obtaining driver physiological information of the driver. 16. The system of claim 15, wherein the one or more characteristics are determined by:
generating a parametric skeleton figure of the driver; and determining one or more driver characteristics of the driver comprises determining a body posture from the parametric skeleton figure of the driver. 17. The system of claim 15, wherein:
the driver state estimation is determined by a driver state estimation module that combines the driver behavior information and the driver physiological information; and the driver state estimation module uses a Hidden Markov Model to represent the driver state estimation in real time. 18. The system of claim 17, wherein the driver state estimation module determines an initial observed driver state estimation based on the driver behavior information and determines a transition probability to determine a transfer driver state estimation using a combination of the driver behavior information and the driver physiological information. 19. The system of claim 14, wherein:
the one or more environment anomalies are detected by comparing one or more detected roadway features and one or more detected surrounding objects to one or more predetermined roadway features and one or more predetermined surrounding objects stored in a localized map database, respectively; determining the one or more environment anomalies comprises determining a confidence value based on the comparison between the one or more detected roadway features and the one or more detected surrounding objects to the one or more predetermined roadway features and the one or more predetermined surrounding objects, respectively; and the anomaly category is based on the confidence value. 20. The system of claim 14, wherein controlling operation of the vehicle comprises:
receiving one or more vehicle threat handling characteristics from a vehicle threat handling characteristics database; receiving one or more driver threat avoidance characteristics from a vehicle threat avoidance characteristics database; and determining whether the driver is capable of controlling the vehicle based on the one or more vehicle threat handling characteristics, the one or more driver threat avoidance characteristics, and the failure mode, wherein:
when the driver is capable of controlling the vehicle, driver control of the vehicle is enabled; and
when the driver is not capable of controlling the vehicle, computer control of the vehicle is enabled and the vehicle is autonomously controlled in accordance with the at least one failsafe action. 21. The method of claim 1, wherein determining the one or more environment anomalies further comprises:
detecting one or more roadway features and one or more surrounding objects, using the one or more environmental sensors; determining a confidence value based on a comparison between the one or more detected roadway features and the one or more detected surrounding objects to one or more predetermined roadway features and one or more predetermined surrounding objects, respectively; and setting an anomaly flag when the confidence value is below a confidence threshold level. | Systems and methods for controlling a vehicle based on driver engagement are disclosed. In one embodiment, a method of controlling a vehicle includes determining, using driver data from one or more driver sensors, a driver state estimation of a driver of the vehicle, determining, using environment data from one or more environmental sensors, one or more environment anomalies within an environment of the vehicle, and determining an anomaly category for at least one of the one or more environment anomalies. The method further includes, based on the driver state estimation and one or more anomaly categories, selecting a failure mode, based on the failure mode, selecting at least one failsafe action, and determining an operation of the vehicle in accordance with the failure mode.1. A method of controlling a vehicle, the method comprising:
determining, using driver data from one or more driver sensors, a driver state estimation of a driver of the vehicle; determining, using environment data from one or more environmental sensors, one or more environment anomalies within an environment of the vehicle; determining an anomaly category for at least one of the one or more environment anomalies; based on the driver state estimation and one or more anomaly categories, selecting a failure mode; based on the failure mode, selecting at least one failsafe action; and determining an operation of the vehicle in accordance with the failure mode. 2. The method of claim 1, wherein determining the driver state estimation further comprises:
determining driver behavior information of the driver corresponding to one or more driver characteristics of the driver; and obtaining driver physiological information of the driver. 3. The method of claim 2, wherein the one or more driver characteristics of the driver are determined by feature extraction and one or more classification algorithms. 4. The method of claim 3, wherein the one or more driver characteristics of the driver comprise at least one of gaze location, head pose, hand position, foot position, and body pose. 5. The method of claim 4, wherein determining one or more characteristics of the driver further comprises:
generating a parametric skeleton figure of the driver; and determining one or more characteristics of the driver comprises determining a body posture from the parametric skeleton figure of the driver. 6. The method of claim 2, wherein:
the driver state estimation is determined by a driver state estimation module that combines the driver behavior information and the driver physiological information; and the driver state estimation module uses a Hidden Markov Model to represent the driver state estimation in real time. 7. The method of claim 6, wherein the driver state estimation module determines an initial observed driver state estimation based on the driver behavior information and determines a transition probability to determine a transfer driver state estimation using a combination of the driver behavior information and the driver physiological information. 8. The method of claim 1, wherein determining the one or more environment anomalies comprises detecting one or more roadway features and one or more surrounding objects, using the one or more environmental sensors. 9. The method of claim 8, wherein:
the one or more environment anomalies are detected by comparing the one or more detected roadway features and the one or more detected surrounding objects to one or more predetermined roadway features and one or more predetermined surrounding objects stored in a localized map database, respectively; and determining the one or more environment anomalies comprises determining a confidence value based on the comparison between the one or more detected roadway features and the one or more detected surrounding objects to the one or more predetermined roadway features and the one or more predetermined surrounding objects, respectively. 10. The method of claim 9, further comprising setting an anomaly flag when the confidence value is below a confidence threshold level. 11. The method of claim 9, wherein the anomaly category is based on the confidence value. 12. (canceled) 13. The method of claim 1, wherein controlling operation of the vehicle comprises:
receiving one or more vehicle threat handling characteristics from a vehicle threat handling characteristics database; receiving one or more driver threat avoidance characteristics from a vehicle threat avoidance characteristics database; and determining whether the driver is capable of controlling the vehicle based on the one or more vehicle threat handling characteristics, the one or more driver threat avoidance characteristics, and the failure mode, wherein: when the driver is capable of controlling the vehicle, driver control of the vehicle is enabled; and when the driver is not capable of controlling the vehicle, computer control of the vehicle is enabled and the vehicle is autonomously controlled in accordance with the at least one failsafe action. 14. A system for controlling a vehicle, the system comprising:
one or more driver sensors for producing driver data; one or more environmental sensors for producing environment data; one or more processors; and non-transitory memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to:
receive the driver data from the one or more driver sensors;
receive the environment data from the one or more environmental sensors;
determine, using the driver data, a driver state estimation of a driver of the vehicle;
determine, using the environment data, one or more environment anomalies within an environment of the vehicle;
determine an anomaly category for at least one of the one or more environment anomalies;
based on the driver state estimation and one or more anomaly categories, select a failure mode;
based on the failure mode, select at least one failsafe action; and
determine an operation of the vehicle in accordance with the failure mode. 15. The system of claim 14, wherein the driver state estimation is determined by:
determining driver behavior information of the driver corresponding to one or more driver characteristics of the driver; and obtaining driver physiological information of the driver. 16. The system of claim 15, wherein the one or more characteristics are determined by:
generating a parametric skeleton figure of the driver; and determining one or more driver characteristics of the driver comprises determining a body posture from the parametric skeleton figure of the driver. 17. The system of claim 15, wherein:
the driver state estimation is determined by a driver state estimation module that combines the driver behavior information and the driver physiological information; and the driver state estimation module uses a Hidden Markov Model to represent the driver state estimation in real time. 18. The system of claim 17, wherein the driver state estimation module determines an initial observed driver state estimation based on the driver behavior information and determines a transition probability to determine a transfer driver state estimation using a combination of the driver behavior information and the driver physiological information. 19. The system of claim 14, wherein:
the one or more environment anomalies are detected by comparing one or more detected roadway features and one or more detected surrounding objects to one or more predetermined roadway features and one or more predetermined surrounding objects stored in a localized map database, respectively; determining the one or more environment anomalies comprises determining a confidence value based on the comparison between the one or more detected roadway features and the one or more detected surrounding objects to the one or more predetermined roadway features and the one or more predetermined surrounding objects, respectively; and the anomaly category is based on the confidence value. 20. The system of claim 14, wherein controlling operation of the vehicle comprises:
receiving one or more vehicle threat handling characteristics from a vehicle threat handling characteristics database; receiving one or more driver threat avoidance characteristics from a vehicle threat avoidance characteristics database; and determining whether the driver is capable of controlling the vehicle based on the one or more vehicle threat handling characteristics, the one or more driver threat avoidance characteristics, and the failure mode, wherein:
when the driver is capable of controlling the vehicle, driver control of the vehicle is enabled; and
when the driver is not capable of controlling the vehicle, computer control of the vehicle is enabled and the vehicle is autonomously controlled in accordance with the at least one failsafe action. 21. The method of claim 1, wherein determining the one or more environment anomalies further comprises:
detecting one or more roadway features and one or more surrounding objects, using the one or more environmental sensors; determining a confidence value based on a comparison between the one or more detected roadway features and the one or more detected surrounding objects to one or more predetermined roadway features and one or more predetermined surrounding objects respectively; and setting an anomaly flag when the confidence value is below a confidence threshold level. | 2,600 |
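Claims 9-11 and 21 above describe computing a confidence value by comparing detected roadway features and surrounding objects against a localized map database, setting an anomaly flag when that value falls below a threshold, and basing the anomaly category on the value. A minimal Python sketch of that logic follows; the set-overlap metric, the 0.8 threshold, and the category cut-offs are illustrative assumptions, since the claims do not specify how the confidence value is computed.

```python
# Hypothetical sketch of the confidence/anomaly logic in claims 9-11 and 21.
# The set-overlap metric, 0.8 threshold, and category cut-offs are assumptions.

def confidence_value(detected, predetermined):
    """Fraction of predetermined map entries that were actually detected."""
    if not predetermined:
        return 1.0
    matched = sum(1 for entry in predetermined if entry in detected)
    return matched / len(predetermined)

def check_environment(detected_features, detected_objects,
                      map_features, map_objects, threshold=0.8):
    # Compare detected roadway features / surrounding objects against the
    # localized map database, respectively (claim 9).
    conf = min(confidence_value(detected_features, map_features),
               confidence_value(detected_objects, map_objects))
    anomaly_flag = conf < threshold  # claim 10: flag when below the threshold
    # claim 11: the anomaly category is based on the confidence value
    category = "high" if conf < 0.5 else ("medium" if anomaly_flag else "none")
    return conf, anomaly_flag, category

conf, flag, category = check_environment(
    {"lane_marking", "stop_sign"}, {"vehicle_ahead"},
    {"lane_marking", "stop_sign", "guard_rail"}, {"vehicle_ahead"})
print(conf, flag, category)  # 2/3 of map features seen -> flag set, "medium"
```

A production system would replace the set membership test with a geometric or probabilistic match score per feature, but the flag-and-categorize structure stays the same.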
10,550 | 10,550 | 16,074,748 | 2,632 | Some embodiments may include an apparatus comprising: a first universal key for unlocking, locking, and/or starting a motor vehicle; a server; and a mobile radio terminal. The first universal key is trained to a particular motor vehicle using a first secret. The server stores a further secret for unlocking, locking, and/or starting the motor vehicle. The server is configured to transmit the further secret to a universal key using two interfaces and the mobile radio terminal. | 1. An apparatus comprising:
a first universal key for unlocking, locking, and/or starting a motor vehicle; a server; and a mobile radio terminal; wherein the first universal key is trained to a particular motor vehicle using a first secret; the server stores a further secret for unlocking, locking, and/or starting the motor vehicle; and the server is configured to transmit the further secret to a universal key using two interfaces and the mobile radio terminal. 2. The apparatus as claimed in claim 1, further comprising
a controller for checking the further secret received from any universal key against a further secret stored in the motor vehicle; wherein the controller is programmed, only if the two further secrets match, to allow and/or prompt unlocking, locking, and/or starting the motor vehicle. 3. The apparatus as claimed in claim 1, wherein the first universal key comprises a respective radio interface to communicate with the mobile radio terminal and/or with the motor vehicle. 4. The apparatus as claimed in claim 1, wherein the universal key includes a timer to monitor a temporal validity, predefined for said timer, of a further secret communicated to said timer, for opening, or closing, or starting the motor vehicle. 5. The apparatus as claimed in claim 1, further comprising more than one universal key. 6. A method comprising:
receiving a first secret from a server at a universal key; transmitting a further secret for unlocking, locking, and/or starting a motor vehicle from the motor vehicle to the universal key; transmitting the further secret from the universal key to the server; and storing the further secret in the server. 7. The method as claimed in claim 6, further comprising:
transmitting the first secret from a server to the universal key; and thereafter transmitting a further secret from the motor vehicle to a server using the universal key and an associated mobile radio terminal. 8. The method as claimed in claim 6,
wherein the server uses three interfaces and the universal key to transmit the first secret to the motor vehicle; the motor vehicle uses three interfaces and the universal key to transmit the further secret to the server; and the three interfaces comprise radio interfaces. 9. A method as claimed in claim 6, further comprising transmitting a further secret from a server to the universal key using two interfaces comprising radio interfaces. 10. The method as claimed in claim 9,
wherein at least one of the two interfaces comprises a mobile radio interface or at least one of the two interfaces comprises a Bluetooth radio interface. 11. The method as claimed in claim 9, wherein multiple further secrets are each usable for opening, closing, and/or starting a respective motor vehicle;
further comprising transmitting the multiple further secrets to the universal key. 12. The method as claimed in claim 9, further comprising transmitting a further secret usable for only some of the functions to the universal key. 13. The method as claimed in claim 9, further comprising storing multiple protocols or multiple further secrets in the universal key for one or more motor vehicles at the same time. 14. The method as claimed in claim 9, further comprising training multiple universal keys for the motor vehicle at the same time to allow multiple users to operate, enter, and/or close at the same time. 15. The method as claimed in claim 9, further comprising training more than one universal key. 16. The apparatus as claimed in claim 1, further comprising more than one first secrets and/or further secrets. 17. The method as claimed in claim 9, further comprising using more than one first secrets and/or further secrets. | 2,600 |
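Claims 1-2 of application 16,074,748 describe a server relaying a "further secret" to the universal key over two interfaces via a mobile radio terminal, with a vehicle-side controller allowing unlocking only when the presented secret matches the one stored in the motor vehicle. A minimal sketch follows; the class, method, and variable names are illustrative assumptions, since the claims define no concrete protocol or data format.

```python
# Minimal sketch of the secret relay in claims 1-2; all names are assumptions.

class UniversalKey:
    def __init__(self):
        self.further_secret = None

    def receive(self, secret):
        # The secret arrives over the key's radio interface (claim 3).
        self.further_secret = secret

class Vehicle:
    def __init__(self, stored_secret):
        self._stored_secret = stored_secret

    def try_unlock(self, presented_secret):
        # claim 2: allow unlocking only if the two further secrets match.
        return presented_secret == self._stored_secret

def server_transmit(further_secret, mobile_terminal, key):
    # claim 1: server -> mobile radio terminal -> universal key (two interfaces),
    # modeled here as a simple hand-off through the terminal's buffer.
    mobile_terminal.append(further_secret)  # interface 1: server to terminal
    key.receive(mobile_terminal.pop())      # interface 2: terminal to key

car = Vehicle(stored_secret="s3cr3t")
key = UniversalKey()
server_transmit("s3cr3t", mobile_terminal=[], key=key)
print(car.try_unlock(key.further_secret))  # True
```

A real implementation would of course use authenticated, encrypted channels rather than plain equality of shared strings; the sketch only shows the relay-and-match structure the claims recite.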
10,551 | 10,551 | 15,267,552 | 2,688 | A dynamic wake-up alarm is provided, including a clock, a contactless biometric sensor, a processor, memory, and a speaker. The processor may be configured to receive a wake-up rule based on at least two wake-up criteria including a time from the clock and data from the biometric sensor, and evaluate whether the criteria are met to activate an alarm. | 1. A dynamic wake-up alarm comprising:
a clock; a contactless biometric sensor for determining at least one of a heart rate, a respiratory rate, a presence of a user, or movement of a user; a microphone; a processor; memory; and a speaker; wherein said processor is configured to receive a wake-up rule based on at least two wake-up criteria, at least one of said at least two wake-up criteria being based on a voice activated user command detected at said microphone, and said at least two wake-up criteria including a biometric criteria based on data from the contactless biometric sensor and a time criteria; and evaluate whether said at least two wake-up criteria are met, and activate an alarm based on a determination that said at least two wake-up criteria are met; wherein said wake-up rule allows the alarm to be activated based on said data from the contactless biometric sensor only after a wake-up time specified by the time criteria is passed. 2. The dynamic wake-up alarm of claim 1, wherein the processor is further configured to determine from the data from said contactless biometric sensor a state of the user's sleep, and the biometric criteria is that the user is in a state of light sleep. 3. The dynamic wake-up alarm of claim 1, further comprising network communications hardware for retrieving information regarding additional wake-up criteria via a network. 4. The dynamic wake-up alarm of claim 1, wherein said processor is further configured to evaluate said wake-up rule and to preclude activation of said alarm based on a determination that said at least two wake-up criteria of the wake-up rule are met. 5. The dynamic wake-up alarm of claim 1, wherein said processor is further configured to receive information regarding a default timeframe for waking. 6. The dynamic wake-up alarm of claim 1, wherein said wake-up rule comprises at least one additional wake-up criteria for evaluation, and comprises an alarm type. 7. 
The dynamic wake-up alarm of claim 6, wherein said additional wake-up criteria comprises a traffic condition, a weather condition, a home automation condition, a calendar entry, a biometric indicator of a second user, or a current sleep score. 8. The dynamic wake-up alarm of claim 6, wherein said alarm type specifies that said alarm may be shut off by at least one of a mechanical switch, vacating a sleep space, or waking up. 9. The dynamic wake-up alarm of claim 1, wherein said processor is configured to receive at least one home automation rule based on at least one additional wake-up criteria; and
wherein said processor is further configured to evaluate said at least one home automation rule and transmit a signal for activating at least one home appliance based on a determination that the additional wake-up criteria associated with the home automation rule is met. 10. The dynamic wake-up alarm of claim 1, wherein said data from the contactless biometric sensor comprises a sleep score. 11. A method for providing a wake-up alarm, comprising:
receiving a wake-up rule based on at least two wake-up criteria, at least one of said at least two wake-up criteria being based on a voice activated user command detected at a microphone, and said at least two wake-up criteria including a biometric criteria based on data from a biometric sensor and a time criteria; storing said wake-up rule in an alarm profile; receiving information regarding a current time; retrieving said data from said biometric sensor from the biometric sensor; determining whether said at least two wake-up criteria are met; and if the at least two wake-up criteria are met, activating an alarm. 12. The method of claim 11, wherein the step of retrieving said data from said biometric sensor from the biometric sensor comprises reading said biometric sensor and determining at least one of a heart rate, a respiratory rate, a presence of a user, or movement of a user. 13. The method of claim 11, wherein said wake-up rule is based on at least a third wake-up criteria, the third wake-up criteria including a traffic condition, a weather condition, a home automation condition, a calendar entry, or a biometric indicator of a second user. 14. The method of claim 13, further comprising retrieving information regarding said at least a third wake-up criteria from a remote database via a network. 15. The method of claim 11, further comprising receiving at least one alarm type associated with the wake-up rule. 16. The method of claim 11, further comprising:
receiving at least one home automation rule based on at least a third wake-up criteria; evaluating whether the third wake-up criteria associated with said at least one home automation rule has been met; and transmitting a signal for controlling at least one home appliance based on a determination that the third wake-up criteria associated with said at least one home automation rule has been met. 17. The method of claim 11, wherein said data from a biometric sensor comprises a sleep score. 18. A wake-up alarm system comprising:
a clock; a microphone; a processor; memory; and a speaker; wherein said processor is configured to receive at least one wake-up rule based on at least two wake-up criteria, at least one of said at least two wake-up criteria being based on a voice activated user command detected at the microphone, and the at least two wake-up criteria including a biometric criteria based on data from a biometric sensor and a time criteria; and wherein said processor is further configured to determine whether said at least two wake-up criteria are met, and activate an alarm if the at least two wake-up criteria are met. 19. The wake-up alarm system of claim 18, further comprising a contactless biometric sensor for determining at least one of a heart rate, a respiratory rate, a presence of a user, or movement of a user. 20. The wake-up alarm system of claim 18, wherein said at least one wake-up rule is based on at least a third wake-up criteria, said third wake-up criteria including a traffic condition, a weather condition, a home automation condition, a calendar entry, or a biometric indicator of a second user. 21. The wake-up alarm system of claim 20, further comprising network communications hardware for retrieving information regarding said third wake-up criteria via a network. 22. The wake-up alarm system of claim 18, wherein said processor is further configured to receive an alarm type associated with said at least one wake-up rule. 23. The wake-up alarm system of claim 22, wherein said alarm type specifies that a snooze feature is disabled or a snooze feature is enabled. 24. The wake-up alarm system of claim 22, wherein said alarm type specifies that said alarm may be shut off by at least one of a mechanical switch, vacating a sleep space, or waking up. 25. The wake-up alarm system of claim 19, wherein said processor is configured to receive at least one home automation rule based on at least a third wake-up criteria; and
wherein said processor is further configured to evaluate whether the third wake-up criteria associated with said at least one home automation rule has been met and to transmit a signal for controlling at least one home appliance based on a determination that the third wake-up criteria associated with said at least one home automation rule has been met. 26. The wake-up alarm system of claim 18, wherein said data from said biometric sensor comprises a sleep score. | 2,600 |
10,552 | 10,552 | 14,568,119 | 2,616 | A display system including a display device and a light combining device is provided. The display device includes a first display region and a second display region. The light combining device has a first surface and a second surface opposite to the first surface. At least one part of a first light beam from the first display region is reflected by the first surface to an observing region so as to form a first virtual image. At least one part of a second light beam from the second display region penetrates through the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted to the observing region in sequence, so as to form a second virtual image. | 1. A display system comprising:
a display device comprising a first display region and a second display region; and a light combining device having a first surface and a second surface opposite to the first surface, wherein at least one part of a first light beam from the first display region is reflected by the first surface to an observing region so as to form a first virtual image, and at least one part of a second light beam from the second display region penetrates through the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted to the observing region in sequence, so as to form a second virtual image. 2. The display system according to claim 1, wherein the first surface is between the second surface and the observing region, the second surface is disposed between the first virtual image and the first surface, and the second surface is disposed between the second virtual image and the first surface. 3. The display system according to claim 2, wherein the first virtual image is between the second virtual image and the second surface. 4. The display system according to claim 1, wherein a first part of the first light beam from the first display region is reflected by the first surface to the observing region, and a second part of the first light beam penetrates the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted toward a direction deviating from the observing region in sequence. 5. The display system according to claim 1, wherein a first part of the second beam from the second display region penetrates through the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted to the observing region in sequence, and a second part of the second beam from the second display region is reflected by the first surface toward a direction deviating from the observing region. 6. 
The display system according to claim 1 further comprising a control unit configured to switch the display device to a first state and a second state, wherein when the display device is switched to the first state, the first display region emits the first light beam and the second display region does not emit the second light beam, and wherein when the display device is switched to the second state, the second display region emits the second light beam and the first display region does not emit the first light beam. 7. The display system according to claim 6, wherein the control unit is configured to receive an external signal and determine to switch the display device to the first state or the second state according to the external signal. 8. The display system according to claim 7, wherein the external signal comprises a signal from a radar, a signal from a camera configured to detect at least one eye of a user, a signal from a camera configured to detect a front car, or a combination thereof. 9. The display system according to claim 1 further comprising an optical element disposed on paths of the first light beam and the second light beam between the display device and the light combining device. 10. The display system according to claim 9, wherein the optical element comprises a curved mirror, a lens, a plane mirror, or a combination thereof. 11. The display system according to claim 1, wherein the first surface and the second surface are curved surfaces. 12. The display system according to claim 11, wherein the first surface and the second surface are freeform surfaces or aspheric surfaces. 13. The display system according to claim 11, wherein the first surface is a concave surface, and the second surface is a convex surface. 14. The display system according to claim 1, wherein the light combining device is made of a transparent material. 15. A display system comprising:
a display device comprising a first display region and a second display region; and a light combining device configured to deflect at least one part of a first light beam from the first display region to an observing region so as to form a first virtual image, and configured to deflect at least one part of a second light beam from the second display region to the observing region so as to form a second virtual image. 16. The display system according to claim 15, wherein the first virtual image is between the second virtual image and the light combining device. 17. The display system according to claim 15, wherein a first part of the first light beam from the first display region is deflected by the light combining device to the observing region so as to form the first virtual image, a second part of the first light beam is deflected by the light combining device toward a direction deviating from the observing region, a first part of the second light beam from the second display region is deflected by the light combining device to the observing region so as to form the second virtual image, and a second part of the second light beam from the second display region is deflected by the light combining device toward a direction deviating from the observing region. 18. The display system according to claim 15 further comprising a control unit configured to switch the display device to a first state and a second state, wherein when the display device is switched to the first state, the first display region emits the first light beam and the second display region does not emit the second light beam, and wherein when the display device is switched to the second state, the second display region emits the second light beam and the first display region does not emit the first light beam. 19. 
The display system according to claim 15, wherein the control unit is configured to receive an external signal and determine to switch the display device to the first state or the second state according to the external signal. 20. The display system according to claim 19, wherein the external signal comprises a signal from a radar, a signal from a camera configured to detect at least one eye of a user, a signal from a camera configured to detect a front car, or a combination thereof. | A display system including a display device and a light combining device is provided. The display device includes a first display region and a second display region. The light combining device has a first surface and a second surface opposite to the first surface. At least one part of a first light beam from the first display region is reflected by the first surface to an observing region so as to form a first virtual image. At least one part of a second light beam from the second display region penetrates through the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted to the observing region in sequence, so as to form a second virtual image.1. A display system comprising:
a display device comprising a first display region and a second display region; and a light combining device having a first surface and a second surface opposite to the first surface, wherein at least one part of a first light beam from the first display region is reflected by the first surface to an observing region so as to form a first virtual image, and at least one part of a second light beam from the second display region penetrates through the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted to the observing region in sequence, so as to form a second virtual image. 2. The display system according to claim 1, wherein the first surface is between the second surface and the observing region, the second surface is disposed between the first virtual image and the first surface, and the second surface is disposed between the second virtual image and the first surface. 3. The display system according to claim 2, wherein the first virtual image is between the second virtual image and the second surface. 4. The display system according to claim 1, wherein a first part of the first light beam from the first display region is reflected by the first surface to the observing region, and a second part of the first light beam penetrates the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted toward a direction deviating from the observing region in sequence. 5. The display system according to claim 1, wherein a first part of the second beam from the second display region penetrates through the first surface, is reflected by the second surface, penetrates through the first surface again, and is transmitted to the observing region in sequence, and a second part of the second beam from the second display region is reflected by the first surface toward a direction deviating from the observing region. 6. 
The display system according to claim 1 further comprising a control unit configured to switch the display device to a first state and a second state, wherein when the display device is switched to the first state, the first display region emits the first light beam and the second display region does not emit the second light beam, and wherein when the display device is switched to the second state, the second display region emits the second light beam and the first display region does not emit the first light beam. 7. The display system according to claim 6, wherein the control unit is configured to receive an external signal and determine to switch the display device to the first state or the second state according to the external signal. 8. The display system according to claim 7, wherein the external signal comprises a signal from a radar, a signal from a camera configured to detect at least one eye of a user, a signal from a camera configured to detect a front car, or a combination thereof. 9. The display system according to claim 1 further comprising an optical element disposed on paths of the first light beam and the second light beam between the display device and the light combining device. 10. The display system according to claim 9, wherein the optical element comprises a curved mirror, a lens, a plane mirror, or a combination thereof. 11. The display system according to claim 1, wherein the first surface and the second surface are curved surfaces. 12. The display system according to claim 11, wherein the first surface and the second surface are freeform surfaces or aspheric surfaces. 13. The display system according to claim 11, wherein the first surface is a concave surface, and the second surface is a convex surface. 14. The display system according to claim 1, wherein the light combining device is made of a transparent material. 15. A display system comprising:
a display device comprising a first display region and a second display region; and a light combining device configured to deflect at least one part of a first light beam from the first display region to an observing region so as to form a first virtual image, and configured to deflect at least one part of a second light beam from the second display region to the observing region so as to form a second virtual image. 16. The display system according to claim 15, wherein the first virtual image is between the second virtual image and the light combining device. 17. The display system according to claim 15, wherein a first part of the first light beam from the first display region is deflected by the light combining device to the observing region so as to form the first virtual image, a second part of the first light beam is deflected by the light combining device toward a direction deviating from the observing region, a first part of the second light beam from the second display region is deflected by the light combining device to the observing region so as to form the second virtual image, and a second part of the second light beam from the second display region is deflected by the light combining device toward a direction deviating from the observing region. 18. The display system according to claim 15 further comprising a control unit configured to switch the display device to a first state and a second state, wherein when the display device is switched to the first state, the first display region emits the first light beam and the second display region does not emit the second light beam, and wherein when the display device is switched to the second state, the second display region emits the second light beam and the first display region does not emit the first light beam. 19. 
The display system according to claim 15, wherein the control unit is configured to receive an external signal and determine to switch the display device to the first state or the second state according to the external signal. 20. The display system according to claim 19, wherein the external signal comprises a signal from a radar, a signal from a camera configured to detect at least one eye of a user, a signal from a camera configured to detect a front car, or a combination thereof. | 2,600 |
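Claims 6 to 8 of the display-system record above describe a control unit that switches the display device between a first state (only the first display region emits) and a second state (only the second region emits) according to an external signal from a radar or camera. The sketch below illustrates that switching logic; the class names, signal fields, and the 30 m front-car threshold are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch only; all names and the selection policy are assumptions.

class DisplayDevice:
    """Minimal stand-in for the claimed two-region display device."""
    def __init__(self):
        self.emitting = {"first": False, "second": False}

    def set_region_emitting(self, region: str, on: bool) -> None:
        self.emitting[region] = on


class ControlUnit:
    """Switches the display device between the two claimed states."""
    def __init__(self, display: DisplayDevice):
        self.display = display
        self.state = "first"

    def on_external_signal(self, signal: dict) -> str:
        # Hypothetical policy: a radar or front-car camera reporting a close
        # vehicle selects the first state; otherwise the second state.
        if signal.get("front_car_distance_m", float("inf")) < 30.0:
            self.state = "first"
        else:
            self.state = "second"
        # In the first state only the first region emits, and vice versa,
        # matching the mutually exclusive emission recited in claim 6.
        self.display.set_region_emitting("first", self.state == "first")
        self.display.set_region_emitting("second", self.state == "second")
        return self.state
```

Claim 8's "combination thereof" would correspond to merging several such signal fields before applying the policy.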
10,553 | 10,553 | 14,553,369 | 2,665 | Arrangements described herein relate to electronic communications and, more particularly, to the exchange of information over a telephonic communication channel. For example, during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, a content recognition system can receive media content or a digital signature, transmitted by the first communication device during the telephone call session. The media content or digital signature can be communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session. The digital signature can be processed to authenticate an identity of the user. | 1. A method, comprising:
during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, receiving by a content recognition system, executed by a processor, a digital signature transmitted by the first communication device during the telephone call session, the digital signature communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session; and responsive to receiving by the content recognition system the digital signature, processing the digital signature to authenticate an identity of the user. 2. The method of claim 1, wherein the audio communication between the user and the customer service system over the telephonic communication channel occurs in the telephone call session prior to and after the digital signature being transmitted. 3. The method of claim 1, wherein the audio communication between the user and the customer service system comprises audio communication between the user and an interactive voice response system. 4. The method of claim 1, wherein the audio communication between the user and the customer service system comprises audio communication between the user and a customer service representative. 5. The method of claim 1, wherein the digital signature comprises a credit card number. 6. The method of claim 1, wherein the digital signature comprises at least one type of information selected from a group consisting of a social security number, a maiden name, a date of birth, a name that is different than the user's name and a password. 7. A method, comprising:
during a telephone call session established over a telephonic communication channel between a first user communicating using a first communication device and at least a second entity, receiving by a content recognition system, executed by a processor, media content transmitted by the first communication device over the telephonic communication channel during the telephone call session, wherein audio communication between the first user and the second entity occurs over the telephonic communication channel in the telephone call session prior to and after the media content being transmitted; and responsive to receiving by the content recognition system the media content, initiating at least one event. 8. The method of claim 7, wherein:
the second entity is a second user; and initiating at least one event comprises transmitting the media content to a second communication device used by the second user participating in a conference call with the first user. 9. The method of claim 8, wherein the first communication device communicates in the conference call using a first communication protocol and the second communication device communicates in the conference call using a second communication protocol, wherein the second communication protocol is different than the first communication protocol;
the method further comprising provisioning a cloud storage to provide a transient workspace, the transient workspace configured to store the media content and the transient workspace configured to be accessed by at least the second entity to retrieve the media content from the transient workspace over the telephonic communication channel. 10. The method of claim 7, wherein initiating at least one event comprises:
parsing information from the media content; and processing the information parsed from the media content to authenticate an identity of the first user. 11. The method of claim 10, wherein the information parsed from the media content comprises at least one type of information selected from a group consisting of a social security number, a credit card number, a maiden name, a date of birth, and a name that is different than the user's name. 12. The method of claim 7, wherein initiating at least one event comprises:
parsing a credit card number from the media content; and processing the credit card number parsed from the media content to process payment for an order for a product or service requested by the first user. 13. A system, comprising:
a processor programmed to initiate executable operations comprising: during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, receiving by a content recognition system, executed by the processor, a digital signature transmitted by the first communication device during the telephone call session, the digital signature communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session; and processing the digital signature to authenticate an identity of the user. 14. The system of claim 13, wherein the audio communication between the user and the customer service system over the telephonic communication channel occurs in the telephone call session prior to and after the digital signature being transmitted. 15. The system of claim 13, wherein the audio communication between the user and the customer service system comprises audio communication between the user and an interactive voice response system. 16. The system of claim 13, wherein the audio communication between the user and the customer service system comprises audio communication between the user and a customer service representative. 17. The system of claim 13, wherein the digital signature comprises a credit card number. 18. The system of claim 13, wherein the digital signature comprises at least one type of information selected from a group consisting of a social security number, a maiden name, a date of birth, a name that is different than the user's name and a password. 19. A system, comprising:
a processor programmed to initiate executable operations comprising: during a telephone call session established over a telephonic communication channel between a first user communicating using a first communication device and at least a second entity, receiving by a content recognition system, executed by the processor, media content transmitted by the first communication device over the telephonic communication channel during the telephone call session, wherein audio communication between the first user and the second entity occurs over the telephonic communication channel in the telephone call session prior to and after the media content being transmitted; and responsive to receiving by the content recognition system the media content, initiating at least one event. 20. The system of claim 19, wherein:
the second entity is a second user; and initiating at least one event comprises transmitting the media content to a second communication device used by the second user participating in a conference call with the first user. 21. The system of claim 20, wherein the first communication device communicates in the conference call using a first communication protocol and the second communication device communicates in the conference call using a second communication protocol, wherein the second communication protocol is different than the first communication protocol;
the system further comprising provisioning a cloud storage to provide a transient workspace, the transient workspace configured to store the media content and the transient workspace configured to be accessed by at least the second entity to retrieve the media content from the transient workspace over the telephonic communication channel. 22. The system of claim 19, wherein initiating at least one event comprises:
parsing information from the media content; and processing the information parsed from the media content to authenticate an identity of the first user. 23. The system of claim 22, wherein the information parsed from the media content comprises at least one type of information selected from a group consisting of a social security number, a credit card number, a maiden name, a date of birth, and a name that is different than the user's name. 24. The system of claim 19, wherein initiating at least one event comprises:
parsing a credit card number from the media content; and processing the credit card number parsed from the media content to process payment for an order for a product or service requested by the first user. 25. A computer program product comprising a computer readable storage medium having program code stored thereon, the program code executable by a processor to perform a method comprising:
during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, receiving by a content recognition system, executed by the processor, a digital signature transmitted by the first communication device during the telephone call session, the digital signature communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session; and processing the digital signature to authenticate an identity of the user. | Arrangements described herein relate to electronic communications and, more particularly, to the exchange of information over a telephonic communication channel. For example, during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, a content recognition system can receive media content or a digital signature, transmitted by the first communication device during the telephone call session. The media content or digital signature can be communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session. The digital signature can be processed to authenticate an identity of the user.1. A method, comprising:
during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, receiving by a content recognition system, executed by a processor, a digital signature transmitted by the first communication device during the telephone call session, the digital signature communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session; and responsive to receiving by the content recognition system the digital signature, processing the digital signature to authenticate an identity of the user. 2. The method of claim 1, wherein the audio communication between the user and the customer service system over the telephonic communication channel occurs in the telephone call session prior to and after the digital signature being transmitted. 3. The method of claim 1, wherein the audio communication between the user and the customer service system comprises audio communication between the user and an interactive voice response system. 4. The method of claim 1, wherein the audio communication between the user and the customer service system comprises audio communication between the user and a customer service representative. 5. The method of claim 1, wherein the digital signature comprises a credit card number. 6. The method of claim 1, wherein the digital signature comprises at least one type of information selected from a group consisting of a social security number, a maiden name, a date of birth, a name that is different than the user's name and a password. 7. A method, comprising:
during a telephone call session established over a telephonic communication channel between a first user communicating using a first communication device and at least a second entity, receiving by a content recognition system, executed by a processor, media content transmitted by the first communication device over the telephonic communication channel during the telephone call session, wherein audio communication between the first user and the second entity occurs over the telephonic communication channel in the telephone call session prior to and after the media content being transmitted; and responsive to receiving by the content recognition system the media content, initiating at least one event. 8. The method of claim 7, wherein:
the second entity is a second user; and initiating at least one event comprises transmitting the media content to a second communication device used by the second user participating in a conference call with the first user. 9. The method of claim 8, wherein the first communication device communicates in the conference call using a first communication protocol and the second communication device communicates in the conference call using a second communication protocol, wherein the second communication protocol is different than the first communication protocol;
the method further comprising provisioning a cloud storage to provide a transient workspace, the transient workspace configured to store the media content and the transient workspace configured to be accessed by at least the second entity to retrieve the media content from the transient workspace over the telephonic communication channel. 10. The method of claim 7, wherein initiating at least one event comprises:
parsing information from the media content; and processing the information parsed from the media content to authenticate an identity of the first user. 11. The method of claim 10, wherein the information parsed from the media content comprises at least one type of information selected from a group consisting of a social security number, a credit card number, a maiden name, a date of birth, and a name that is different than the user's name. 12. The method of claim 7, wherein initiating at least one event comprises:
parsing a credit card number from the media content; and processing the credit card number parsed from the media content to process payment for an order for a product or service requested by the first user. 13. A system, comprising:
a processor programmed to initiate executable operations comprising: during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, receiving by a content recognition system, executed by the processor, a digital signature transmitted by the first communication device during the telephone call session, the digital signature communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session; and processing the digital signature to authenticate an identity of the user. 14. The system of claim 13, wherein the audio communication between the user and the customer service system over the telephonic communication channel occurs in the telephone call session prior to and after the digital signature being transmitted. 15. The system of claim 13, wherein the audio communication between the user and the customer service system comprises audio communication between the user and an interactive voice response system. 16. The system of claim 13, wherein the audio communication between the user and the customer service system comprises audio communication between the user and a customer service representative. 17. The system of claim 13, wherein the digital signature comprises a credit card number. 18. The system of claim 13, wherein the digital signature comprises at least one type of information selected from a group consisting of a social security number, a maiden name, a date of birth, a name that is different than the user's name and a password. 19. A system, comprising:
a processor programmed to initiate executable operations comprising: during a telephone call session established over a telephonic communication channel between a first user communicating using a first communication device and at least a second entity, receiving by a content recognition system, executed by the processor, media content transmitted by the first communication device over the telephonic communication channel during the telephone call session, wherein audio communication between the first user and the second entity occurs over the telephonic communication channel in the telephone call session prior to and after the media content being transmitted; and responsive to receiving by the content recognition system the media content, initiating at least one event. 20. The system of claim 19, wherein:
the second entity is a second user; and initiating at least one event comprises transmitting the media content to a second communication device used by the second user participating in a conference call with the first user. 21. The system of claim 20, wherein the first communication device communicates in the conference call using a first communication protocol and the second communication device communicates in the conference call using a second communication protocol, wherein the second communication protocol is different than the first communication protocol;
the system further comprising provisioning a cloud storage to provide a transient workspace, the transient workspace configured to store the media content and the transient workspace configured to be accessed by at least the second entity to retrieve the media content from the transient workspace over the telephonic communication channel. 22. The system of claim 19, wherein initiating at least one event comprises:
parsing information from the media content; and processing the information parsed from the media content to authenticate an identity of the first user. 23. The system of claim 22, wherein the information parsed from the media content comprises at least one type of information selected from a group consisting of a social security number, a credit card number, a maiden name, a date of birth, and a name that is different than the user's name. 24. The system of claim 19, wherein initiating at least one event comprises:
parsing a credit card number from the media content; and processing the credit card number parsed from the media content to process payment for an order for a product or service requested by the first user. 25. A computer program product comprising a computer readable storage medium having program code stored thereon, the program code executable by a processor to perform a method comprising:
during a telephone call session established over a telephonic communication channel between a user communicating using a first communication device and a customer service system, receiving by a content recognition system, executed by the processor, a digital signature transmitted by the first communication device during the telephone call session, the digital signature communicated over the telephonic communication channel used to support audio communication between the first communication device and the customer service system in the telephone call session; and processing the digital signature to authenticate an identity of the user. | 2,600 |
10,554 | 10,554 | 15,276,832 | 2,683 | The disclosure relates to a method for providing an alert to a driver of a vehicle. The method may comprise a) receiving information from a vehicle-to-vehicle communication unit relating to a deceleration of a vehicle-in-front; b) detecting a decrease in distance to a closest vehicle-in-front by a vehicle sensor system in the vehicle; and c) issuing an alert to the driver of the vehicle as a response to step b. | 1. A method for providing an alert to a driver of a vehicle, the method comprising:
a) receiving information from a vehicle-to-vehicle communication unit relating to a deceleration of a vehicle-in-front; b) detecting a decrease in distance to a closest vehicle-in-front by a vehicle sensor system in the vehicle; and c) issuing an alert to the driver of the vehicle as a response to step b. 2. The method according to claim 1 wherein the vehicle sensor system detects a deceleration of the closest vehicle-in-front. 3. The method according to claim 1 wherein in step c) the alert to the driver is to brake. 4. The method according to claim 1 wherein the alert is issued as a visual, tactile and/or audible alert. 5. The method according to claim 1 wherein if in step b) a detected decrease in distance to the closest vehicle-in-front exceeds a threshold value, the issuing of the alert in step c) is made with a higher degree of enforcement than for a detected decrease in distance to the closest vehicle-in-front below the threshold value. 6. The method according to claim 1 wherein the vehicle-to-vehicle communication unit is configured to provide the information over a cell network, cloud and/or an automotive Wi-Fi. 7. The method according to claim 1 wherein the vehicle sensor system comprises a camera, video system, radar system and/or LIDAR system. 8. An alert system for a vehicle, the system comprising:
a vehicle-to-vehicle communication unit configured to receive information relating to deceleration of a vehicle-in-front; a vehicle sensor system configured to detect a decrease in distance to a closest vehicle-in-front; and a managing unit configured to receive information from the vehicle-to-vehicle communication unit relating to deceleration of a vehicle-in-front and, when the vehicle sensor system detects a decrease in distance to the closest vehicle-in-front, to issue an alert to a driver of the vehicle. 9. A vehicle comprising an alert system according to claim 8. 10. An alert system for a vehicle for use with a vehicle-to-vehicle communication unit configured to receive information relating to deceleration of a vehicle-in-front, the system comprising:
a vehicle sensor system configured to detect a decrease in distance to a closest vehicle-in-front; and a managing unit configured to receive information from the vehicle-to-vehicle communication unit relating to deceleration of a vehicle-in-front and, when the vehicle sensor system detects a decrease in distance to the closest vehicle-in-front, to issue an alert to a driver of the vehicle. 11. A vehicle comprising an alert system according to claim 10. | The disclosure relates to a method for providing an alert to a driver of a vehicle. The method may comprise a) receiving information from a vehicle-to-vehicle communication unit relating to a deceleration of a vehicle-in-front; b) detecting a decrease in distance to a closest vehicle-in-front by a vehicle sensor system in the vehicle; and c) issuing an alert to the driver of the vehicle as a response to step b.1. A method for providing an alert to a driver of a vehicle, the method comprising:
a) receiving information from a vehicle-to-vehicle communication unit relating to a deceleration of a vehicle-in-front; b) detecting a decrease in distance to a closest vehicle-in-front by a vehicle sensor system in the vehicle; c) issuing an alert to the driver of the vehicle as a response to step b. 2. The method according to claim 1 wherein the vehicle sensor system detects a deceleration of the closest vehicle-in-front. 3. The method according to claim 1 wherein in step c) the alert to the driver is to brake. 4. The method according to claim 1 wherein the alert is issued as a visual, tactile and/or audible alert. 5. The method according to claim 1 wherein if in step b) a detected decrease in distance to the closest vehicle-in-front exceeds a threshold value, the issuing of the alert in step c) is made with a higher degree of enforcement than for a detected decrease in distance to the closest vehicle-in-front below the threshold value. 6. The method according to claim 1 wherein the vehicle-to-vehicle communication unit is configured to provide the information over a cell network, cloud and/or an automotive Wi-Fi. 7. The method according to claim 1 wherein the vehicle sensor system comprises a camera, video system, radar system and/or LIDAR system. 8. An alert system for a vehicle, the system comprising:
a vehicle-to-vehicle communication unit configured to receive information relating to deceleration of a vehicle-in-front; a vehicle sensor system configured to detect a decrease in distance to a closest vehicle-in-front; and a managing unit configured to receive information from the vehicle-to-vehicle communication unit relating to deceleration of a vehicle-in-front and, when the vehicle sensor system detects a decrease in distance to the closest vehicle-in-front, to issue an alert to a driver of the vehicle. 9. A vehicle comprising an alert system according to claim 8. 10. An alert system for a vehicle for use with a vehicle-to-vehicle communication unit configured to receive information relating to deceleration of a vehicle-in-front, the system comprising:
a vehicle sensor system configured to detect a decrease in distance to a closest vehicle-in-front; and a managing unit configured to receive information from the vehicle-to-vehicle communication unit relating to deceleration of a vehicle-in-front and, when the vehicle sensor system detects a decrease in distance to the closest vehicle-in-front, to issue an alert to a driver of the vehicle. 11. A vehicle comprising an alert system according to claim 10. | 2,600 |
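Claims 1 and 5 of this record reduce to a simple decision rule: alert when the V2V unit reports a vehicle-in-front decelerating and the on-board sensors see the gap closing, escalating the alert when the decrease exceeds a threshold. A minimal Python sketch of that rule, with hypothetical class/method names and an arbitrary threshold (the claims prescribe no implementation):

```python
# Hedged sketch of the claimed alert logic; names and the threshold
# value are illustrative assumptions, not taken from the patent.

class AlertSystem:
    """Alerts the driver when V2V data reports a decelerating
    vehicle-in-front AND the sensed distance to it is decreasing."""

    def __init__(self, decrease_threshold_m=5.0):
        # A gap decrease larger than this triggers the stronger alert
        # of claim 5 ("higher degree of enforcement").
        self.decrease_threshold_m = decrease_threshold_m
        self.prev_distance_m = None

    def update(self, v2v_reports_deceleration, distance_to_front_m):
        """One sensor cycle: returns None, 'alert', or 'urgent_alert'."""
        decrease = None
        if self.prev_distance_m is not None:
            decrease = self.prev_distance_m - distance_to_front_m
        self.prev_distance_m = distance_to_front_m

        # Steps a) and b): V2V deceleration info plus a detected
        # decrease in distance to the closest vehicle-in-front.
        if v2v_reports_deceleration and decrease is not None and decrease > 0:
            # Step c): issue the alert, escalated past the threshold.
            if decrease > self.decrease_threshold_m:
                return "urgent_alert"
            return "alert"
        return None
```

On the first cycle there is no distance baseline, so no alert can be issued; rendering the returned alert level visually, tactilely, or audibly is left to the managing unit.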
10,555 | 10,555 | 14,952,067 | 2,661 | An image is obtained by using a charged particle beam, and a design layout information is generated to select patterns of interest. Grey levels among patterns can be compared with each other to identify abnormalities, or grey levels within one pattern can be compared to a determined threshold grey level to identify abnormalities. | 1. An inspection method, comprising:
scanning a sample by using a charged particle beam to obtain an image; aligning at least one pattern on the image to a design layout information, wherein the at least one pattern is generated according to the design layout information; and determining abnormality of the at least one pattern, by using grey levels of the at least one pattern, according to the design layout information. 2. The method according to claim 1, further comprising a step of determining a threshold grey level for the at least one pattern before the step of determining abnormality. 3. The method according to claim 2, wherein the step of determining abnormality identifies a pixel of the at least one pattern as a defect if a grey level of the pixel is different from the threshold grey level. 4. The method according to claim 1, wherein the aligning step includes identifying a group of patterns with a similar property to the at least one pattern on the image by using the design layout information. 5. The method according to claim 4, wherein the determining step compares the grey levels of the at least one pattern to the grey levels of each of the group of patterns. 6. The method according to claim 5, wherein the determining step identifies the at least one pattern as a defect if the grey levels of the at least one pattern are different from the grey levels of each of the group of patterns. 7. The method according to claim 6, wherein the defect is a voltage contrast defect. 8. The method according to claim 1, wherein the charged particle beam is an electron beam generated by a scanning electron microscope. 9. A method for detecting defects, comprising:
scanning a sample by using a charged particle beam to obtain an image; aligning the image to a design layout information for generating a single pattern or a group of patterns on the image; and determining abnormality of the single pattern according to the design layout information. 10. The method according to claim 9, further comprising a step of determining a threshold grey level for the single pattern before the step of determining abnormality. 11. The method according to claim 10, wherein the step of determining abnormality identifies a pixel of the single pattern as a defect if a grey level of the pixel is different from the threshold grey level. 12. The method according to claim 9, wherein the group of patterns has a similar property to the single pattern by using the design layout information. 13. The method according to claim 12, wherein the determining step compares grey levels of the single pattern to the grey levels of each of the group of patterns. 14. The method according to claim 13, wherein the determining step identifies the single pattern as a defect if the grey levels of the single pattern are different from the grey levels of each of the group of patterns. 15. The method according to claim 14, wherein the defect is a voltage contrast defect. 16. The method according to claim 9, wherein the charged particle beam is an electron beam generated by a scanning electron microscope. 17. A method for inspecting a sample, comprising:
scanning the sample by using an electron beam to obtain an image; aligning a pattern on the image to a design layout information, wherein the pattern is generated from the design layout information; determining a threshold grey level for the pattern according to the design layout information; and identifying whether a pixel is a defect if a scanned grey level of the pixel is different from the threshold grey level. 18. The method according to claim 17, wherein the design layout information is GDS. 19. A method for inspecting a sample, comprising:
scanning a sample by using an electron beam to obtain an image; aligning a group of patterns on the image to a design layout information, wherein the group of patterns is generated from the design layout information and has a similar property according to the design layout information; and comparing grey levels of the group of patterns with each other to identify abnormality. 20. The method according to claim 19, wherein the design layout information is GDS. | An image is obtained by using a charged particle beam, and a design layout information is generated to select patterns of interest. Grey levels among patterns can be compared with each other to identify abnormalities, or grey levels within one pattern can be compared to a determined threshold grey level to identify abnormalities.1. An inspection method, comprising:
scanning a sample by using a charged particle beam to obtain an image; aligning at least one pattern on the image to a design layout information, wherein the at least one pattern is generated according to the design layout information; and determining abnormality of the at least one pattern, by using grey levels of the at least one pattern, according to the design layout information. 2. The method according to claim 1, further comprising a step of determining a threshold grey level for the at least one pattern before the step of determining abnormality. 3. The method according to claim 2, wherein the step of determining abnormality identifies a pixel of the at least one pattern as a defect if a grey level of the pixel is different from the threshold grey level. 4. The method according to claim 1, wherein the aligning step includes identifying a group of patterns with a similar property to the at least one pattern on the image by using the design layout information. 5. The method according to claim 4, wherein the determining step compares the grey levels of the at least one pattern to the grey levels of each of the group of patterns. 6. The method according to claim 5, wherein the determining step identifies the at least one pattern as a defect if the grey levels of the at least one pattern are different from the grey levels of each of the group of patterns. 7. The method according to claim 6, wherein the defect is a voltage contrast defect. 8. The method according to claim 1, wherein the charged particle beam is an electron beam generated by a scanning electron microscope. 9. A method for detecting defects, comprising:
scanning a sample by using a charged particle beam to obtain an image; aligning the image to a design layout information for generating a single pattern or a group of patterns on the image; and determining abnormality of the single pattern according to the design layout information. 10. The method according to claim 9, further comprising a step of determining a threshold grey level for the single pattern before the step of determining abnormality. 11. The method according to claim 10, wherein the step of determining abnormality identifies a pixel of the single pattern as a defect if a grey level of the pixel is different from the threshold grey level. 12. The method according to claim 9, wherein the group of patterns has a similar property to the single pattern by using the design layout information. 13. The method according to claim 12, wherein the determining step compares grey levels of the single pattern to the grey levels of each of the group of patterns. 14. The method according to claim 13, wherein the determining step identifies the single pattern as a defect if the grey levels of the single pattern are different from the grey levels of each of the group of patterns. 15. The method according to claim 14, wherein the defect is a voltage contrast defect. 16. The method according to claim 9, wherein the charged particle beam is an electron beam generated by a scanning electron microscope. 17. A method for inspecting a sample, comprising:
scanning the sample by using an electron beam to obtain an image; aligning a pattern on the image to a design layout information, wherein the pattern is generated from the design layout information; determining a threshold grey level for the pattern according to the design layout information; and identifying whether a pixel is a defect if a scanned grey level of the pixel is different from the threshold grey level. 18. The method according to claim 17, wherein the design layout information is GDS. 19. A method for inspecting a sample, comprising:
scanning a sample by using an electron beam to obtain an image; aligning a group of patterns on the image to a design layout information, wherein the group of patterns is generated from the design layout information and has a similar property according to the design layout information; and comparing grey levels of the group of patterns with each other to identify abnormality. 20. The method according to claim 19, wherein the design layout information is GDS. | 2,600 |
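The two grey-level comparisons this record recites — a per-pixel check against a threshold grey level determined from the design layout, and a cross-comparison among patterns the layout marks as similar — can be sketched as below. The function names and the tolerance parameter are illustrative assumptions, not from the patent:

```python
# Hedged sketch of the two defect checks; names and tolerances are
# assumptions for illustration only.

def threshold_defect_pixels(pattern_pixels, threshold_grey, tol=10):
    """Claims 2-3 / 17: flag indices of pixels whose grey level
    differs from the determined threshold grey level."""
    return [i for i, g in enumerate(pattern_pixels)
            if abs(g - threshold_grey) > tol]

def group_defect_patterns(pattern_mean_greys, tol=10):
    """Claims 4-6 / 19: among patterns the layout says are similar,
    flag those whose mean grey level deviates from the group median
    (e.g. a candidate voltage contrast defect)."""
    ordered = sorted(pattern_mean_greys.values())
    median = ordered[len(ordered) // 2]
    return [name for name, mean in pattern_mean_greys.items()
            if abs(mean - median) > tol]
```

Comparing to the group median rather than the mean is one way to keep a single bright defect from dragging the reference level toward itself.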
10,556 | 10,556 | 14,869,635 | 2,651 | A content management system and/or client device can enable a user to initiate a quick play mode where a content category and content medium are selected for the user. A client device and/or a content management system can select a content medium for a user based on one or more factors, such as the content category. Certain categories of content may preferably be delivered in certain content mediums. In some embodiments, a content management system and/or client device can select a content medium for a user based on contextual data gathered from the user. Contextual data can be data describing the user's current state and/or environment. For example, contextual data can include data such as the time of day, geographic location, etc. | 1. A method comprising:
receiving, by a computer processor, from a client device, a first play mode request to play content items on the client device; in response to receiving the first play mode request, selecting, by the computer processor, a first content category; selecting, by the computer processor, a first content medium in which to play content items from the first content category; and delivering content items selected from the first content category and in the first content medium. 2. The method of claim 1, wherein the first play mode is a quick play mode. 3. The method of claim 1, wherein the first content medium is selected based on contextual data associated with a user of the client device. 4. The method of claim 3, wherein selecting the first content medium comprises:
determining, based on the contextual data associated with the user, a set of one or more content mediums that the user selected in the past under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 5. The method of claim 3, wherein selecting the first content medium comprises:
determining, based on content use data associated with a plurality of users, a set of one or more content mediums that were selected by the plurality of users under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 6. The method of claim 1, wherein the first content medium is one of a group consisting of an album, a radio station, a playlist, a curated playlist and a podcast. 7. The method of claim 1, further comprising:
receiving, from the client device, a second play mode request to play content items on the client device; in response to receiving the second play mode request, selecting a second content category, different than the first content category; selecting a second content medium in which to play content items from the second content category; and delivering content items selected from the second content category and in the second content medium. 8. A system comprising:
a computer processor; and a memory containing instructions that, when executed, cause the computer processor to:
receive, from a client device, a first quick play mode request to initiate quick play mode on the client device;
in response to receiving the first quick play mode request, select a first content medium;
select a first content category from which to deliver content items in the first content medium; and
deliver content items selected from the first content category and in the first content medium. 9. The system of claim 8, wherein the first play mode is a quick play mode. 10. The system of claim 8, wherein the first content medium is selected based on contextual data associated with a user of the client device. 11. The system of claim 10, wherein selecting the first content medium comprises:
determining, based on the contextual data associated with the user, a set of one or more content mediums that the user selected in the past under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 12. The system of claim 10, wherein selecting the first content medium comprises:
determining, based on content use data associated with a plurality of users, a set of one or more content mediums that were selected by the plurality of users under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 13. The system of claim 8, wherein the first content medium is one of a group consisting of an album, a radio station, a playlist, a curated playlist and a podcast. 14. The system of claim 8, wherein the instructions further cause the computer processor to:
receive, from the client device, a second play mode request to play content items on the client device; in response to receiving the second play mode request, select a second content medium, different than the first content medium; select a second content category from which to play content items in the second content medium; and deliver content items selected from the second content category and in the second content medium. 15. A non-transitory computer-readable medium containing instructions that, when executed by a computer processor, cause the computer processor to:
receive, from a client device, a first play mode request to play content items on the client device; in response to receiving the first play mode request, select a first content category; select a first content medium in which to play content items from the first content category; and deliver content items selected from the first content category and in the first content medium. 16. The non-transitory computer-readable medium of claim 15, wherein the first play mode is a quick play mode. 17. The non-transitory computer-readable medium of claim 15, wherein the first content medium is selected based on contextual data associated with a user of the client device. 18. The non-transitory computer-readable medium of claim 17, wherein selecting the first content medium comprises:
determining, based on the contextual data associated with the user, a set of one or more content mediums that the user selected in the past under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 19. The non-transitory computer-readable medium of claim 17, wherein selecting the first content medium comprises:
determining, based on content use data associated with a plurality of users, a set of one or more content mediums that were selected by the plurality of users under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 20. The non-transitory computer-readable medium of claim 15, wherein the first content medium is one of a group consisting of an album, a radio station, a playlist, a curated playlist and a podcast. 21. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the computer processor to:
receive, from the client device, a second play mode request to play content items on the client device; in response to receiving the second play mode request, select a second content category, different than the first content category; select a second content medium in which to play content items from the second content category; and deliver content items selected from the second content category and in the second content medium. | A content management system and/or client device can enable a user to initiate a quick play mode where a content category and content medium are selected for the user. A client device and/or a content management system can select a content medium for a user based on one or more factors, such as the content category. Certain categories of content may preferably be delivered in certain content mediums. In some embodiments, a content management system and/or client device can select a content medium for a user based on contextual data gathered from the user. Contextual data can be data describing the user's current state and/or environment. For example, contextual data can include data such as the time of day, geographic location, etc.1. A method comprising:
receiving, by a computer processor, from a client device, a first play mode request to play content items on the client device; in response to receiving the first play mode request, selecting, by the computer processor, a first content category; selecting, by the computer processor, a first content medium in which to play content items from the first content category; and delivering content items selected from the first content category and in the first content medium. 2. The method of claim 1, wherein the first play mode is a quick play mode. 3. The method of claim 1, wherein the first content medium is selected based on contextual data associated with a user of the client device. 4. The method of claim 3, wherein selecting the first content medium comprises:
determining, based on the contextual data associated with the user, a set of one or more content mediums that the user selected in the past under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 5. The method of claim 3, wherein selecting the first content medium comprises:
determining, based on content use data associated with a plurality of users, a set of one or more content mediums that were selected by the plurality of users under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 6. The method of claim 1, wherein the first content medium is one of a group consisting of an album, a radio station, a playlist, a curated playlist and a podcast. 7. The method of claim 1, further comprising:
receiving, from the client device, a second play mode request to play content items on the client device; in response to receiving the second play mode request, selecting a second content category, different than the first content category; selecting a second content medium in which to play content items from the second content category; and delivering content items selected from the second content category and in the second content medium. 8. A system comprising:
a computer processor; and a memory containing instructions that, when executed, cause the computer processor to:
receive, from a client device, a first quick play mode request to initiate quick play mode on the client device;
in response to receiving the first quick play mode request, select a first content medium;
select a first content category from which to deliver content items in the first content medium; and
deliver content items selected from the first content category and in the first content medium. 9. The system of claim 8, wherein the first play mode is a quick play mode. 10. The system of claim 8, wherein the first content medium is selected based on contextual data associated with a user of the client device. 11. The system of claim 10, wherein selecting the first content medium comprises:
determining, based on the contextual data associated with the user, a set of one or more content mediums that the user selected in the past under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 12. The system of claim 10, wherein selecting the first content medium comprises:
determining, based on content use data associated with a plurality of users, a set of one or more content mediums that were selected by the plurality of users under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 13. The system of claim 8, wherein the first content medium is one of a group consisting of an album, a radio station, a playlist, a curated playlist and a podcast. 14. The system of claim 8, wherein the instructions further cause the computer processor to:
receive, from the client device, a second play mode request to play content items on the client device; in response to receiving the second play mode request, select a second content medium, different than the first content medium; select a second content category from which to play content items in the second content medium; and deliver content items selected from the second content category and in the second content medium. 15. A non-transitory computer-readable medium containing instructions that, when executed by a computer processor, cause the computer processor to:
receive, from a client device, a first play mode request to play content items on the client device; in response to receiving the first play mode request, select a first content category; select a first content medium in which to play content items from the first content category; and deliver content items selected from the first content category and in the first content medium. 16. The non-transitory computer-readable medium of claim 15, wherein the first play mode is a quick play mode. 17. The non-transitory computer-readable medium of claim 15, wherein the first content medium is selected based on contextual data associated with a user of the client device. 18. The non-transitory computer-readable medium of claim 17, wherein selecting the first content medium comprises:
determining, based on the contextual data associated with the user, a set of one or more content mediums that the user selected in the past under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 19. The non-transitory computer-readable medium of claim 17, wherein selecting the first content medium comprises:
determining, based on content use data associated with a plurality of users, a set of one or more content mediums that were selected by the plurality of users under a same or similar contextual circumstance, wherein the first content medium is selected from the set of one or more content mediums. 20. The non-transitory computer-readable medium of claim 15, wherein the first content medium is one of a group consisting of an album, a radio station, a playlist, a curated playlist and a podcast. 21. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the computer processor to:
receive, from the client device, a second play mode request to play content items on the client device; in response to receiving the second play mode request, select a second content category, different than the first content category; select a second content medium in which to play content items from the second content category; and deliver content items selected from the second content category and in the second content medium. | 2,600 |
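Claims 3 through 5 of this record select the content medium from contextual data, preferring mediums chosen under the same or a similar contextual circumstance in the past. A hedged sketch, where the time-of-day bucketing and the history structure are assumptions made for illustration:

```python
# Hedged sketch of context-based medium selection; the context key
# (an hour-of-day bucket) and history layout are assumptions.

def select_medium(context, medium_history, default="playlist"):
    """Return the medium most often chosen under a similar contextual
    circumstance; fall back to a default when no history exists."""
    bucket = "morning" if context["hour"] < 12 else "evening"
    past_choices = medium_history.get(bucket, [])
    if not past_choices:
        return default
    # Most frequently selected medium for this circumstance.
    return max(set(past_choices), key=past_choices.count)
```

The same routine covers claim 5's population-level variant by passing aggregated history for a plurality of users instead of one user's own history.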
10,557 | 10,557 | 15,289,085 | 2,626 | Touch events are detected on a touch sensing surface coupled to a capacitive sense array and one or more force electrodes. During a plurality of scan cycles, a plurality of capacitive sense signals and one or more force signals are obtained from capacitive sense electrodes of the capacitive sense array and the one or more force electrodes, respectively, and applied to determine a temporal sequence of touches on the touch sensing surface. For each touch of the temporal sequence of touches, a touch location on the touch sensing surface is identified based on the capacitive sense signals, and a force value associated with force applied at the touch location is determined based on the force signals. A gesture associated with the temporal sequence of touches is thereby identified based on the touch locations and the force values of the temporal sequence of touches. | 1. A method of detecting touch events on a touch sensing surface coupled to a capacitive sense array, comprising:
at a processing device coupled to a capacitive sense array and one or more force electrodes, wherein the capacitive sense array includes a plurality of sense electrodes:
performing a plurality of scan cycles, wherein each scan cycle includes obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array, and obtaining one or more force signals from the one or more force electrodes; and
determining a temporal sequence of touches on the touch sensing surface based on the plurality of capacitive sense signals and the one or more force signals obtained from the plurality of scan cycles, including:
for each touch of the temporal sequence of touches, identifying a touch location on the touch sensing surface based on the plurality of capacitive sense signals, and determining a force value associated with force applied at the touch location based on the one or more force signals; and
identifying a gesture associated with the temporal sequence of touches based on the touch location and the force value for each touch of the temporal sequence of touches. 2. The method of claim 1, wherein each of the plurality of scan cycles includes a respective capacitive sense scan and a respective force scan that are substantially synchronized, wherein for each touch of the temporal sequence of touches, the touch location is identified during the capacitive sense scan of the respective scan cycle, and the force value is determined during the force scan of the respective scan cycle. 3. The method of claim 1, wherein for each touch of the temporal sequence of touches, the respective location of the touch includes an x-coordinate value and a y-coordinate value. 4. The method of claim 1, wherein for each touch of the sequence of touches, the respective force value is determined based on the respective location of the respective touch. 5. The method of claim 1, wherein identifying the gesture associated with the temporal sequence of touches further comprises:
identifying a first predefined gesture, when the locations of the temporal sequence of touches are consistent with a predefined stroke pattern and the force values are consistent with a set of predetermined force values corresponding to the predefined stroke pattern. 6. The method of claim 5, wherein the predefined stroke pattern and the set of predetermined force values are configured for authenticating a user. 7. The method of claim 1, wherein the one or more force electrodes include a force electrode array that has a plurality of force electrodes arranged in one dimension or two dimensions. 8. The method of claim 7, wherein the force electrode array includes a row of force electrodes each extending along a direction that is parallel with the touch sensing surface and the one or more force signals are obtained from the row of electrodes by monitoring self capacitances of the row of electrodes. 9. The method of claim 7, wherein the force electrode array includes a row of force electrodes and a column of force electrodes, and the one or more force signals are obtained from the row of electrodes and the column of electrodes by monitoring self capacitances or mutual capacitances of the row of electrodes and the column of electrodes. 10. The method of claim 1, wherein the temporal sequence of touches includes two synchronous subsequences of touches, and each synchronous subsequence of touches includes a respective set of consecutive finger touches moving on the touch sensing surface. 11. A processing device, comprising:
a processing core; a capacitance sense circuit; and memory storing one or more programs configured for execution by the processing core, the one or more programs comprising instructions for
performing a plurality of scan cycles, wherein each scan cycle includes obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array, and obtaining one or more force signals from the one or more force electrodes; and
determining a temporal sequence of touches on the touch sensing surface based on the plurality of capacitive sense signals and the one or more force signals obtained from the plurality of scan cycles, including:
for each touch of the temporal sequence of touches, identifying a touch location on the touch sensing surface based on the plurality of capacitive sense signals, and determining a force value associated with force applied at the touch location based on the one or more force signals; and
identifying a gesture associated with the temporal sequence of touches based on the touch location and the force value for each touch of the temporal sequence of touches. 12. The processing device of claim 11, wherein for each touch of the temporal sequence of touches, the respective location of the touch includes an x-coordinate value and a y-coordinate value. 13. The processing device of claim 11, wherein for each touch of the temporal sequence of touches, the respective force value is adjusted based on the respective location of the respective touch. 14. The processing device of claim 11, wherein the processing device is configured to identify the gesture associated with the temporal sequence of touches by:
identifying a first predefined gesture, when the locations of the temporal sequence of touches are consistent with a predefined stroke pattern and each of the force values varies according to a set of predetermined force values corresponding to the predefined stroke pattern. 15. The processing device of claim 14, wherein the predefined stroke pattern and the set of predetermined force values are configured for authenticating a user. 16. An electronic system, comprising:
a capacitive sense array coupled to a touch sensing surface; one or more force electrodes; and a processing device coupled to the capacitive sense array and the one or more force electrodes, wherein the processing device is configured for:
performing a plurality of scan cycles, wherein each scan cycle includes obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array, and obtaining one or more force signals from the one or more force electrodes; and
determining a temporal sequence of touches on the touch sensing surface based on the plurality of capacitive sense signals and the one or more force signals obtained from the plurality of scan cycles, including:
for each touch of the temporal sequence of touches, identifying a touch location on the touch sensing surface based on the plurality of capacitive sense signals, and determining a force value associated with force applied at the touch location based on the one or more force signals; and
identifying a gesture associated with the temporal sequence of touches based on the touch location and the force value for each touch of the temporal sequence of touches. 17. The electronic system of claim 16, wherein for each touch of the temporal sequence of touches, the respective location of the touch includes an x-coordinate value and a y-coordinate value. 18. The electronic system of claim 16, wherein for each touch of the temporal sequence of touches, the respective force value is adjusted based on the respective location of the respective touch. 19. The electronic system of claim 16, wherein the processing device is further configured for identifying the gesture associated with the temporal sequence of touches by:
identifying a first predefined gesture, when the locations of the temporal sequence of touches are consistent with a predefined stroke pattern and each of the force values varies according to a set of predetermined force values corresponding to the predefined stroke pattern. 20. The electronic system of claim 19, wherein the predefined stroke pattern and the set of predetermined force values are configured for authenticating a user. | Touch events are detected on a touch sensing surface coupled to a capacitive sense array and one or more force electrodes. During a plurality of scan cycles, a plurality of capacitive sense signals and one or more force signals are obtained from capacitive sense electrodes of the capacitive sense array and the one or more force electrodes, respectively, and applied to determine a temporal sequence of touches on the touch sensing surface. For each touch of the temporal sequence of touches, a touch location on the touch sensing surface is identified based on the capacitive sense signals, and a force value associated with force applied at the touch location is determined based on the force signals. A gesture associated with the temporal sequence of touches is thereby identified based on the touch locations and the force values of the temporal sequence of touches. | 2,600 |
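Claims 1, 5, 14, and 19 above describe matching a temporal sequence of (location, force) touches against a predefined stroke pattern and a set of predetermined force values. A minimal Python sketch of that matching step follows; the class, function, tolerance parameters, and tuple-based pattern encoding are all hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Touch:
    x: float      # x-coordinate identified during the capacitive sense scan
    y: float      # y-coordinate identified during the capacitive sense scan
    force: float  # force value determined during the force scan of the same cycle

def identify_gesture(touches, stroke_pattern, force_pattern,
                     pos_tol=1.0, force_tol=0.2):
    """Return a gesture label when the touch locations are consistent with
    the predefined stroke pattern AND the force values are consistent with
    the corresponding predetermined force values; otherwise return None."""
    if len(touches) != len(stroke_pattern):
        return None
    for touch, (px, py), pf in zip(touches, stroke_pattern, force_pattern):
        if abs(touch.x - px) > pos_tol or abs(touch.y - py) > pos_tol:
            return None  # stroke shape does not match the pattern
        if abs(touch.force - pf) > force_tol:
            return None  # force profile does not match the pattern
    return "first_predefined_gesture"
```

Since claims 6, 15, and 20 use the stroke-plus-force profile for authenticating a user, a real implementation would presumably tune the force tolerance per enrolled user rather than hard-code it.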
10,558 | 10,558 | 15,558,001 | 2,666 | According to embodiments, crosstalk avoidance in a data transmission system is based on separating lines of the data transmission system at least into a first group, a second group, and a third group. Transmissions on lines of the first group are controlled to occur at different times than transmissions on lines of the second group. Transmissions on lines of the third group are allowed to occur at the same time with transmissions on the lines of the first group or with transmissions on the lines of the second group. | 1-23. (canceled) 24. A method for crosstalk avoidance in a data transmission system, the method comprising:
separating lines of the data transmission system at least into a first group, a second group, and a third group; controlling transmissions on lines of the first group to occur at different times than transmissions on lines of the second group; controlling transmissions on lines of the third group to occur at the same time with transmissions on the lines of the first group; and controlling transmissions on the lines of the third group to occur at the same time with transmissions on the lines of the second group. 25. The method according to claim 24, comprising:
configuring at least a first time interval and a second time interval which does not overlap the first time interval; assigning transmissions on the lines of the first group to the first time interval; assigning transmissions on the lines of the second group to the second time interval; assigning transmissions on the lines of the third group to the first time interval; and assigning transmissions on the lines of the third group to the second time interval. 26. The method according to claim 25, comprising:
discontinuing the lines of the first group in the second time interval; and discontinuing the lines of the second group in the first time interval. 27. The method according to claim 26, comprising:
controlling a transmitter and crosstalk cancelation coefficients of a discontinued line for enhancing received signal power of one or more lines of the third group. 28. The method according to claim 24, comprising:
for each of the lines, determining a line length; and according to the determined line lengths, separating the lines into the first group, the second group, and the third group. 29. The method according to claim 28, comprising:
estimating the line length during a startup sequence for joining the line to the data transmission system; and depending on the estimated line length, assigning the line to the first group, the second group, or the third group. 30. The method according to claim 28,
wherein the lines of the third group have longer line lengths than the lines of the first group and the lines of the second group. 31. The method according to claim 24, comprising:
assigning frequencies to at least some lines of the first group and/or of the second group which are different from frequencies assigned to the lines of the third group. 32. The method according to claim 30, comprising:
assigning frequencies to at least some lines of the first group and/or of the second group which are higher than frequencies assigned to the lines of the third group. 33. The method according to claim 24, comprising:
configuring a first crosstalk cancelation group comprising the lines of the first group and the lines of the third group; configuring a second crosstalk cancelation group comprising the lines of the second group and the lines of the third group; wherein crosstalk cancelation is limited to consideration of lines of the same crosstalk cancelation group. 34. The method according to claim 33, comprising:
in crosstalk cancelation, ignoring mutual crosstalk couplings for some lines of the first group and/or for some lines of the second group. 35. The method according to claim 24, comprising:
separating the lines into the first group, the second group, the third group, and a fourth group, wherein the lines of the fourth group are not subject to crosstalk cancelation. 36. The method according to claim 35, comprising:
for each of the lines, estimating a crosstalk strength indicator; and depending on the crosstalk strength indicator, assigning some of the lines to the fourth group. 37. The method according to claim 24, comprising:
constructing codes for channel estimation which are the same for the lines of the first group and the lines of the second group. 38. The method according to claim 24,
wherein the data transmission system is based on a Vectoring Digital Subscriber Line technology. 39. A device for a data transmission system, the device comprising at least one processor configured to:
separate lines of the data transmission system at least into a first group, a second group, and a third group; control transmissions on lines of the first group to occur at different times than transmissions on lines of the second group; control transmissions on lines of the third group to occur at the same time with transmissions on the lines of the first group; and control transmissions on the lines of the third group to occur at the same time with transmissions on the lines of the second group. 40. The device according to claim 39, wherein the at least one processor is configured to:
for each of the lines, determine a line length; and according to the determined line lengths, separate the lines into the first group, the second group, and the third group. 41. The device according to claim 39, wherein the at least one processor is configured to:
estimate the line length during a startup sequence for joining the line to the data transmission system; and depending on the estimated line length, assign the line to the first group, the second group, or the third group. 42. The device according to claim 39,
wherein the lines of the third group have longer line lengths than the lines of the first group and the lines of the second group. 43. The device according to claim 42, wherein the at least one processor is configured to:
assign frequencies to at least some lines of the first group and/or of the second group which are higher than frequencies assigned to the lines of the third group. 44. A data transmission system, comprising:
a plurality of lines; and at least one device configured to:
separate lines of the data transmission system at least into a first group, a second group, and a third group;
control transmissions on lines of the first group to occur at different times than transmissions on lines of the second group;
control transmissions on lines of the third group to occur at the same time with transmissions on the lines of the first group; and
control transmissions on the lines of the third group to occur at the same time with transmissions on the lines of the second group. 45. The data transmission system according to claim 44,
wherein the at least one device is a distribution point of a fibre to the curb or fibre to the building system based on a Vectoring Digital Subscriber Line technology. 46. The data transmission system according to claim 44,
wherein the at least one device is a processor of a distribution point of a fibre to the curb or fibre to the building system based on a Vectoring Digital Subscriber Line technology. 47. The device according to claim 39,
wherein the device is a processor of a distribution point of a fibre to the curb or fibre to the building system based on a Vectoring Digital Subscriber Line technology. | 2,600 |
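Claims 24 to 30 amount to a scheduling rule: short lines are split between two groups that transmit in non-overlapping time intervals, while long (crosstalk-sensitive) lines form a third group that is active in both intervals. A rough Python sketch under assumed inputs follows; the length threshold, the alternating split, and all names are illustrative only:

```python
def separate_lines(lengths, short_threshold):
    """Assign each line to the first, second, or third group by estimated
    line length (claims 28-30): lines longer than the threshold go to the
    third group; the remaining short lines are split alternately between
    the first and second groups."""
    first, second, third = [], [], []
    short_dest = [first, second]
    for line, length in sorted(lengths.items()):
        if length > short_threshold:
            third.append(line)
        else:
            short_dest[0].append(line)
            short_dest.reverse()  # alternate between the two short groups
    return first, second, third

def schedule(first, second, third):
    """Claim 25: the first interval carries the first and third groups, the
    second (non-overlapping) interval carries the second and third groups."""
    return {1: sorted(first + third), 2: sorted(second + third)}
```

Because the first-group lines are discontinued in the second interval and vice versa (claim 26), each short line only transmits half the time, while the long third-group lines keep transmitting in every interval.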
10,559 | 10,559 | 14,938,704 | 2,683 | Methods, systems and devices are provided for displaying monitored activity data in substantial real-time on a screen of a computing device. One example method includes capturing motion data associated with activity of a user via an activity tracking device. The motion data is quantified into a plurality of metrics associated with the activity of the user. The method includes connecting the activity tracking device with a computing device over a wireless data connection, and sending motion data from the activity tracking device to the computing device for display of one or more of the plurality of metrics on a graphical user interface of the computing device. At least one of the plurality of metrics displayed on the graphical user interface is shown to change in substantial real-time based on the motion data. | 1. A method comprising:
capturing motion data associated with activity of a user via an activity tracking device, the motion data quantified into a plurality of metrics associated with the activity of the user; storing the motion data in storage of the activity tracking device; connecting the activity tracking device with a computing device over a wireless data connection; sending the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device; and detecting the activity tracking device being within a proximity distance from the computing device, the proximity distance being within a low energy wireless communication distance that enables the wireless data connection; the sending of motion data to the computing device being configured to continue while additional motion data is captured and while the activity tracking device is within the proximity distance, the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time for the motion data sent when the activity tracking device is within the proximity distance, the method being executed by a processor. 2. The method of claim 1, wherein the metric of the plurality of metrics is a step count quantified from the motion data. 3. The method of claim 1, wherein the metric is a step count that is shown increasing numerically or graphically on the graphical user interface of the computing device as motion quantified as steps is captured by the activity tracking device and the step count is shown to pause on the graphical user interface when motion captured by the activity tracking device is insufficient or lacking to qualify as steps. 4. A method comprising:
capturing motion data associated with activity of a user via an activity tracking device, the motion data quantified into a plurality of metrics associated with the activity of the user; storing the motion data in storage of the activity tracking device; connecting the activity tracking device with a computing device over a wireless data connection; and sending the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device, the sending of motion data to the computing device being configured to continue while additional motion data is captured, and the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time while the activity tracking device is within a proximity distance that enables wireless communication, the method being executed by a processor, the motion data being transferred from the storage of the activity tracking device to the computing device, and upon a sync operation, the motion data being transferred to an activity management server by the computing device over an Internet connection, and the activity management server updating the plurality of metrics associated with the motion data captured based on the activity of the user. 5. A method comprising:
capturing motion data associated with activity of a user via an activity tracking device, the motion data quantified into a plurality of metrics associated with the activity of the user; storing the motion data in storage of the activity tracking device; connecting the activity tracking device with a computing device over a wireless data connection; sending the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device, the sending of motion data to the computing device being configured to continue while additional motion data is captured, and the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time while the activity tracking device is within a proximity distance that enables wireless communication, the method being executed by a processor; and wherein the computing device is configured to receive setting configurations from an activity management server, the setting configurations being associated with a user account that is associated with the activity tracking device, the activity management server configured to render graphical user interfaces on a website to display the plurality of metrics in one or more illustration configurations, the motion data being transferred via the computing device to the activity management server to synchronize the plurality of metrics on the computing device with the graphical user interfaces of the website. 6. The method of claim 5, wherein the metric of the plurality of metrics corresponds to at least one of a step count metric, a stair count metric, a distance traveled metric, an active time metric, a calories burned metric, and a sleep metric. 7. The method of claim 6, wherein the metric is additionally displayed on a display screen of the activity tracking device. 8. 
The method of claim 5, wherein a data transfer rate is set based on a scaled-down connection interval between the activity tracking device and the computing device while sending the motion data for substantial real-time display on a screen of the computing device. 9. A device configured for capture of activity for a user comprising:
a housing; a sensor disposed in the housing to capture motion data associated with activity of the user via the device, the motion data being captured over time, the motion data quantified to define a plurality of metrics associated with the activity of the user; a memory for storing the captured motion data; and a processor for managing connection of the device with a computing device over a wireless data connection, the processor further managing sending of the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device, the sending of the motion data to the computing device being configured to continue while additional motion data is captured, the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time while the device is within a proximity distance, wherein the processor further: manages pausing the sending of the motion data when the device is beyond the proximity distance; and manages continuing to store the motion data captured by the device in storage of the device. 10. The device of claim 9, wherein the device is defined as a wearable wrist attachable structure that is defined at least partially from a plastic material. 11. The device of claim 10, wherein the processor further:
manages reestablishing the connection between the device and the computing device when within the proximity distance; and manages sending the motion data from the storage to the computing device, the sent motion data acting to increment the metric based on metric data stored in the device while the connection was paused. 12. The device of claim 9, wherein the wireless data connection is facilitated by wireless communication logic that includes one of a wireless processing logic, a low energy wireless processing logic, or a radio processing logic. 13. The device of claim 9, wherein the processor sets a data transfer rate based on a scaled-down connection interval between the device and the computing device while sending the motion data for the substantial real-time display on a screen of the computing device. | Methods, systems and devices are provided for displaying monitored activity data in substantial real-time on a screen of a computing device. One example method includes capturing motion data associated with activity of a user via an activity tracking device. The motion data is quantified into a plurality of metrics associated with the activity of the user. The method includes connecting the activity tracking device with a computing device over a wireless data connection, and sending motion data from the activity tracking device to the computing device for display of one or more of the plurality of metrics on a graphical user interface of the computing device. At least one of the plurality of metrics displayed on the graphical user interface is shown to change in substantial real-time based on the motion data.1. A method comprising:
capturing motion data associated with activity of a user via an activity tracking device, the motion data quantified into a plurality of metrics associated with the activity of the user; storing the motion data in storage of the activity tracking device; connecting the activity tracking device with a computing device over a wireless data connection; sending the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device; and detecting the activity tracking device being within a proximity distance from the computing device, the proximity distance being within a low energy wireless communication distance that enables the wireless data connection; the sending of motion data to the computing device being configured to continue while additional motion data is captured and while the activity tracking device is within the proximity distance, the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time for the motion data sent when the activity tracking device is within the proximity distance, the method being executed by a processor. 2. The method of claim 1, wherein the metric of the plurality of metrics is a step count quantified from the motion data. 3. The method of claim 1, wherein the metric is a step count that is shown increasing numerically or graphically on the graphical user interface of the computing device as motion quantified as steps is captured by the activity tracking device and the step count is shown to pause on the graphical user interface when motion captured by the activity tracking device is insufficient or lacking to qualify as steps. 4. A method comprising:
capturing motion data associated with activity of a user via an activity tracking device, the motion data quantified into a plurality of metrics associated with the activity of the user; storing the motion data in storage of the activity tracking device; connecting the activity tracking device with a computing device over a wireless data connection; and sending the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device, the sending of motion data to the computing device being configured to continue while additional motion data is captured, and the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time while the activity tracking device is within a proximity distance that enables wireless communication, the method being executed by a processor, the motion data being transferred from the storage of the activity tracking device to the computing device, and upon a sync operation, the motion data being transferred to an activity management server by the computing device over an Internet connection, and the activity management server updating the plurality of metrics associated with the motion data captured based on the activity of the user. 5. A method comprising:
capturing motion data associated with activity of a user via an activity tracking device, the motion data quantified into a plurality of metrics associated with the activity of the user; storing the motion data in storage of the activity tracking device; connecting the activity tracking device with a computing device over a wireless data connection; sending the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device, the sending of motion data to the computing device being configured to continue while additional motion data is captured, and the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time while the activity tracking device is within a proximity distance that enables wireless communication, the method being executed by a processor; and wherein the computing device is configured to receive setting configurations from an activity management server, the setting configurations being associated with a user account that is associated with the activity tracking device, the activity management server configured to render graphical user interfaces on a website to display the plurality of metrics in one or more illustration configurations, the motion data being transferred via the computing device to the activity management server to synchronize the plurality of metrics on the computing device with the graphical user interfaces of the website. 6. The method of claim 5, wherein the metric of the plurality of metrics corresponds to at least one of a step count metric, a stair count metric, a distance traveled metric, an active time metric, a calories burned metric, and a sleep metric. 7. The method of claim 6, wherein the metric is additionally displayed on a display screen of the activity tracking device. 8. 
The method of claim 5, wherein a data transfer rate is set based on a scaled-down connection interval between the activity tracking device and the computing device while sending the motion data for substantial real-time display on a screen of the computing device. 9. A device configured for capture of activity for a user comprising:
a housing; a sensor disposed in the housing to capture motion data associated with activity of the user via the device, the motion data being captured over time, the motion data quantified to define a plurality of metrics associated with the activity of the user; a memory for storing the captured motion data; and a processor for managing connection of the device with a computing device over a wireless data connection, the processor further managing sending of the motion data to the computing device for display of a metric, of the plurality of metrics, on a graphical user interface of an activity application of the computing device, the sending of the motion data to the computing device being configured to continue while additional motion data is captured, the metric displayed on the graphical user interface being shown to change in an increasing numerical or graphical form in substantial real-time while the device is within a proximity distance, wherein the processor further: manages pausing the sending of the motion data when the device is beyond the proximity distance; and manages continuing to store the motion data captured by the device in storage of the device. 10. The device of claim 9, wherein the device is defined as a wearable wrist attachable structure that is defined at least partially from a plastic material. 11. The device of claim 10, wherein the processor further:
manages reestablishing the connection between the device and the computing device when within the proximity distance; and manages sending the motion data from the storage to the computing device, the sent motion data acting to increment the metric based on metric data stored in the device while the connection was paused. 12. The device of claim 9, wherein the wireless data connection is facilitated by wireless communication logic that includes one of a wireless processing logic, a low energy wireless processing logic, or a radio processing logic. 13. The device of claim 9, wherein the processor sets a data transfer rate based on a scaled-down connection interval between the device and the computing device while sending the motion data for the substantial real-time display on a screen of the computing device. | 2,600 |
10,560 | 10,560 | 15,373,401 | 2,627 | An apparatus can include a processor; memory accessible by the processor; and a display housing, a keyboard housing and a hinge assembly that rotatably couples the display housing and the keyboard housing where the keyboard housing includes a hinge assembly end, a front end, a left side and a right side; a keyboard that includes a spacebar, an S key, an L key and an S-to-L key distance; and a touchpad disposed between the spacebar and the front end that extends a left side to right side distance greater than the S-to-L key distance. | 1. An apparatus comprising:
a processor; memory accessible by the processor; and a display housing, a keyboard housing and a hinge assembly that rotatably couples the display housing and the keyboard housing wherein the keyboard housing comprises
a hinge assembly end, a front end, a left side and a right side;
a keyboard that comprises a spacebar, an S key, an L key and an S-to-L key distance; and
a touchpad disposed between the spacebar and the front end that extends a left side to right side distance greater than the S-to-L key distance. 2. The apparatus of claim 1 comprising palm rejection circuitry operatively coupled to the touchpad. 3. The apparatus of claim 1 comprising gesture recognition circuitry operatively coupled to the touchpad. 4. The apparatus of claim 3 wherein the gesture recognition circuitry comprises multi-touch gesture recognition circuitry. 5. The apparatus of claim 1 wherein the touchpad comprises a left palm rest portion and a right palm rest portion. 6. The apparatus of claim 3 wherein the gesture recognition circuitry comprises palm-based gesture recognition circuitry. 7. The apparatus of claim 6 wherein the palm-based gesture recognition circuitry comprises a library that comprises a palm-based horizontal slide gesture and a palm-based vertical slide gesture. 8. The apparatus of claim 6 wherein the palm-based gesture recognition circuitry comprises a library that comprises at least one dual palm-based gesture. 9. The apparatus of claim 1 wherein the spacebar comprises a spacebar width and wherein the touchpad comprises a left to right distance greater than the spacebar width. 10. The apparatus of claim 1 wherein the keyboard comprises a left side shift key, a right side shift key and a left side shift key to a right side shift key distance and wherein the touchpad extends a left side to right side distance greater than the left side shift key to a right side shift key distance. 11. A method comprising:
rejecting a static palm signal generated via a touchpad of a keyboard housing that comprises a keyboard wherein the signal is generated from an area of the touchpad which extends beyond an S key of the keyboard if originating from a touchpad side associated with the S key, and wherein the signal is generated from an area of the touchpad which extends beyond an L key of the keyboard if originating from a touchpad side associated with the L key; detecting a dynamic touch signal generated via the touchpad; and responsive to the dynamic touch signal, issuing a command. 12. The method of claim 11 wherein the dynamic touch signal corresponds to a dynamic finger touch signal associated with a finger sized area. 13. The method of claim 11 wherein the dynamic touch signal corresponds to a dynamic palm touch signal associated with a palm sized area. 14. The method of claim 11 wherein the dynamic touch signal corresponds to a dynamic palm gesture signal and wherein the command comprises a scroll command. 15. The method of claim 11 wherein the dynamic touch signal corresponds to a first dynamic palm gesture signal generated via the area of the touchpad side associated with the S key and a second dynamic palm gesture signal generated via the area of the touchpad side associated with the L key. 16. The method of claim 11 wherein the dynamic touch signal corresponds to a thumb touch signal associated with a central portion of the touchpad. 17. The method of claim 16 wherein the dynamic touch signal corresponds to a multiple thumb touch signal associated with the central portion of the touchpad. 18. One or more computer-readable media comprising computer-executable instructions to instruct a computer to:
reject a static palm signal generated via a touchpad of a keyboard housing that comprises a keyboard wherein the signal is generated from an area of the touchpad which extends beyond an S key of the keyboard if originating from a touchpad side associated with the S key, and wherein the signal is generated from an area of the touchpad which extends beyond an L key of the keyboard if originating from a touchpad side associated with the L key; detect a dynamic touch signal generated via the touchpad; and responsive to the dynamic touch signal, issue a command. 19. The one or more computer-readable media of claim 18 wherein the computer-executable instructions comprise instructions to instruct the computer to reject a static palm signal generated via the area of the touchpad side associated with the S key and to detect a dynamic touch signal generated via the area of the touchpad side associated with the L key. 20. The one or more computer-readable media of claim 18 wherein the computer-executable instructions comprise instructions to instruct the computer to reject a static palm signal generated via the area of the touchpad side associated with the L key and to detect a dynamic touch signal generated via the area of the touchpad side associated with the S key. | An apparatus can include a processor; memory accessible by the processor; and a display housing, a keyboard housing and a hinge assembly that rotatably couples the display housing and the keyboard housing where the keyboard housing includes a hinge assembly end, a front end, a left side and a right side; a keyboard that includes a spacebar, an S key, an L key and an S-to-L key distance; and a touchpad disposed between the spacebar and the front end that extends a left side to right side distance greater than the S-to-L key distance.1. An apparatus comprising:
a processor; memory accessible by the processor; and a display housing, a keyboard housing and a hinge assembly that rotatably couples the display housing and the keyboard housing wherein the keyboard housing comprises
a hinge assembly end, a front end, a left side and a right side;
a keyboard that comprises a spacebar, an S key, an L key and an S-to-L key distance; and
a touchpad disposed between the spacebar and the front end that extends a left side to right side distance greater than the S-to-L key distance. 2. The apparatus of claim 1 comprising palm rejection circuitry operatively coupled to the touchpad. 3. The apparatus of claim 1 comprising gesture recognition circuitry operatively coupled to the touchpad. 4. The apparatus of claim 3 wherein the gesture recognition circuitry comprises multi-touch gesture recognition circuitry. 5. The apparatus of claim 1 wherein the touchpad comprises a left palm rest portion and a right palm rest portion. 6. The apparatus of claim 3 wherein the gesture recognition circuitry comprises palm-based gesture recognition circuitry. 7. The apparatus of claim 6 wherein the palm-based gesture recognition circuitry comprises a library that comprises a palm-based horizontal slide gesture and a palm-based vertical slide gesture. 8. The apparatus of claim 6 wherein the palm-based gesture recognition circuitry comprises a library that comprises at least one dual palm-based gesture. 9. The apparatus of claim 1 wherein the spacebar comprises a spacebar width and wherein the touchpad comprises a left to right distance greater than the spacebar width. 10. The apparatus of claim 1 wherein the keyboard comprises a left side shift key, a right side shift key and a left side shift key to a right side shift key distance and wherein the touchpad extends a left side to right side distance greater than the left side shift key to a right side shift key distance. 11. A method comprising:
rejecting a static palm signal generated via a touchpad of a keyboard housing that comprises a keyboard wherein the signal is generated from an area of the touchpad which extends beyond an S key of the keyboard if originating from a touchpad side associated with the S key, and wherein the signal is generated from an area of the touchpad which extends beyond an L key of the keyboard if originating from a touchpad side associated with the L key; detecting a dynamic touch signal generated via the touchpad; and responsive to the dynamic touch signal, issuing a command. 12. The method of claim 11 wherein the dynamic touch signal corresponds to a dynamic finger touch signal associated with a finger sized area. 13. The method of claim 11 wherein the dynamic touch signal corresponds to a dynamic palm touch signal associated with a palm sized area. 14. The method of claim 11 wherein the dynamic touch signal corresponds to a dynamic palm gesture signal and wherein the command comprises a scroll command. 15. The method of claim 11 wherein the dynamic touch signal corresponds to a first dynamic palm gesture signal generated via the area of the touchpad side associated with the S key and a second dynamic palm gesture signal generated via the area of the touchpad side associated with the L key. 16. The method of claim 11 wherein the dynamic touch signal corresponds to a thumb touch signal associated with a central portion of the touchpad. 17. The method of claim 16 wherein the dynamic touch signal corresponds to a multiple thumb touch signal associated with the central portion of the touchpad. 18. One or more computer-readable media comprising computer-executable instructions to instruct a computer to:
reject a static palm signal generated via a touchpad of a keyboard housing that comprises a keyboard wherein the signal is generated from an area of the touchpad which extends beyond an S key of the keyboard if originating from a touchpad side associated with the S key, and wherein the signal is generated from an area of the touchpad which extends beyond an L key of the keyboard if originating from a touchpad side associated with the L key; detect a dynamic touch signal generated via the touchpad; and responsive to the dynamic touch signal, issue a command. 19. The one or more computer-readable media of claim 18 wherein the computer-executable instructions comprise instructions to instruct the computer to reject a static palm signal generated via the area of the touchpad side associated with the S key and to detect a dynamic touch signal generated via the area of the touchpad side associated with the L key. 20. The one or more computer-readable media of claim 18 wherein the computer-executable instructions comprise instructions to instruct the computer to reject a static palm signal generated via the area of the touchpad side associated with the L key and to detect a dynamic touch signal generated via the area of the touchpad side associated with the S key. | 2,600 |
10,561 | 10,561 | 16,017,888 | 2,653 | An earbud may include capacitive sensor electrodes and a capacitance-to-digital converter that is configured to make capacitance measurements with the capacitive sensor electrodes. The earbud may have a housing in which a speaker is mounted. A tubular portion of the housing may have a passageway that is aligned with the speaker. The tubular portion may be received within the ear canal of a user. The tubular portion may include a tubular member on which the capacitive sensor electrodes are formed. The tubular member may be formed from a compressible tubular member that is compressed when the earbud is worn in the ear of the user. Ring-shaped electrodes and other electrodes may be formed on the compressible tubular member. Control circuitry in the earbud may determine whether the earbud is fully or partially within the ear of a user based on the capacitance measurements. | 1. An earbud, comprising:
a housing having a tubular portion with a passageway; a speaker in the housing that is aligned with the passageway and that is configured to provide sound through the passageway; capacitive sensor electrodes on the tubular portion; and a capacitance-to-digital converter configured to gather capacitive sensor measurements from the capacitive sensor electrodes. 2. The earbud defined in claim 1 wherein the tubular portion comprises a compressible tubular member. 3. The earbud defined in claim 2 wherein the capacitive sensor electrodes include a first capacitive sensor electrode on an outer surface of the tubular portion and a second capacitive sensor electrode on an inner surface of the tubular member that runs along the passageway. 4. The earbud defined in claim 1 wherein the capacitive sensor electrodes include a metal ring that surrounds the passageway. 5. The earbud defined in claim 1 wherein the capacitive sensor electrodes include a partial metal ring that partially surrounds the passageway and that has end portions separated by a gap that extends parallel to the passageway. 6. The earbud defined in claim 1 wherein the capacitive sensor electrodes include multiple rings arranged respectively at different positions along the tubular portion. 7. The earbud defined in claim 1 further comprising control circuitry configured to determine from the capacitive sensor measurements whether the tubular portion is in an ear canal. 8. The earbud defined in claim 7 wherein the control circuitry is configured to adjust an audio equalization setting based on the capacitive sensor measurements. 9. An earbud, comprising:
a tubular member that extends along a longitudinal axis, wherein the tubular member has a passageway that passes through the tubular member along the longitudinal axis; a speaker that is aligned with the passageway and that is configured to supply sound through the passageway; and first and second metal rings supported by the tubular member that form respective first and second capacitive sensor electrodes. 10. The earbud defined in claim 9 further comprising:
a capacitance-to-digital converter configured to use the first and second metal rings to gather capacitive sensor measurements; and
control circuitry configured to play audio through the speaker and configured to adjust an equalization setting for the audio based on the capacitive sensor measurements. 11. The earbud defined in claim 9 wherein the tubular member comprises a compressible material. 12. The earbud defined in claim 11 wherein the compressible material comprises foam. 13. The earbud defined in claim 9 wherein the tubular member has a rigid inner tubular portion and a compressible outer tubular portion surrounding the rigid inner tubular portion. 14. The earbud defined in claim 9 wherein the first metal ring is formed within the passageway and wherein the second metal ring is coaxial with the first metal ring. 15. The earbud defined in claim 9 further comprising a flexible printed circuit with metal traces, wherein the first and second capacitive sensor electrodes are formed from the metal traces. 16. The earbud defined in claim 15 wherein the flexible printed circuit has a portion that wraps around the tubular member and the longitudinal axis. 17. An electronic device, comprising:
a compressible member configured to be received within an ear of a user, wherein the compressible member includes a passageway; a speaker configured to provide sound through the passageway; ring-shaped capacitive sensor electrodes on the compressible member; and a capacitance-to-digital converter that is configured to gather capacitance measurements from the capacitive sensor electrodes. 18. The electronic device defined in claim 17 wherein the passageway passes through each of the ring-shaped capacitive sensor electrodes. 19. The electronic device defined in claim 18 wherein the compressible member comprises a foam tube. 20. The electronic device defined in claim 19 further comprising a rigid polymer tube within the foam tube.
a housing having a tubular portion with a passageway; a speaker in the housing that is aligned with the passageway and that is configured to provide sound through the passageway; capacitive sensor electrodes on the tubular portion; and a capacitance-to-digital converter configured to gather capacitive sensor measurements from the capacitive sensor electrodes. 2. The earbud defined in claim 1 wherein the tubular portion comprises a compressible tubular member. 3. The earbud defined in claim 2 wherein the capacitive sensor electrodes include a first capacitive sensor electrode on an outer surface of the tubular portion and a second capacitive sensor electrode on an inner surface of the tubular member that runs along the passageway. 4. The earbud defined in claim 1 wherein the capacitive sensor electrodes include a metal ring that surrounds the passageway. 5. The earbud defined in claim 1 wherein the capacitive sensor electrodes include a partial metal ring that partially surrounds the passageway and that has end portions separated by a gap that extends parallel to the passageway. 6. The earbud defined in claim 1 wherein the capacitive sensor electrodes include multiple rings arranged respectively at different positions along the tubular portion. 7. The earbud defined in claim 1 further comprising control circuitry configured to determine from the capacitive sensor measurements whether the tubular portion is in an ear canal. 8. The earbud defined in claim 7 wherein the control circuitry is configured to adjust an audio equalization setting based on the capacitive sensor measurements. 9. An earbud, comprising:
a tubular member that extends along a longitudinal axis, wherein the tubular member has a passageway that passes through the tubular member along the longitudinal axis; a speaker that is aligned with the passageway and that is configured to supply sound through the passageway; and first and second metal rings supported by the tubular member that form respective first and second capacitive sensor electrodes. 10. The earbud defined in claim 9 further comprising:
a capacitance-to-digital converter configured to use the first and second metal rings to gather capacitive sensor measurements; and
control circuitry configured to play audio through the speaker and configured to adjust an equalization setting for the audio based on the capacitive sensor measurements. 11. The earbud defined in claim 9 wherein the tubular member comprises a compressible material. 12. The earbud defined in claim 11 wherein the compressible material comprises foam. 13. The earbud defined in claim 9 wherein the tubular member has a rigid inner tubular portion and a compressible outer tubular portion surrounding the rigid inner tubular portion. 14. The earbud defined in claim 9 wherein the first metal ring is formed within the passageway and wherein the second metal ring is coaxial with the first metal ring. 15. The earbud defined in claim 9 further comprising a flexible printed circuit with metal traces, wherein the first and second capacitive sensor electrodes are formed from the metal traces. 16. The earbud defined in claim 15 wherein the flexible printed circuit has a portion that wraps around the tubular member and the longitudinal axis. 17. An electronic device, comprising:
a compressible member configured to be received within an ear of a user, wherein the compressible member includes a passageway; a speaker configured to provide sound through the passageway; ring-shaped capacitive sensor electrodes on the compressible member; and a capacitance-to-digital converter that is configured to gather capacitance measurements from the capacitive sensor electrodes. 18. The electronic device defined in claim 17 wherein the passageway passes through each of the ring-shaped capacitive sensor electrodes. 19. The electronic device defined in claim 18 wherein the compressible member comprises a foam tube. 20. The electronic device defined in claim 19 further comprising a rigid polymer tube within the foam tube. | 2,600 |
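Claims 7–8 of the earbud record above describe control circuitry that determines from capacitive sensor measurements whether the tubular portion is in an ear canal and adjusts an audio equalization setting accordingly. A minimal sketch of that decision logic follows; the class name, threshold value, units, and EQ labels are illustrative assumptions, not taken from the patent.

```python
class InEarDetector:
    """Sketch of claims 7-8: decide from capacitive sensor measurements
    whether the tubular portion is in an ear canal, and pick an audio
    equalization setting based on those measurements."""

    def __init__(self, in_ear_threshold=30.0):
        # Hypothetical raw-count threshold; skin contact on the ring
        # electrodes raises the measured capacitance.
        self.in_ear_threshold = in_ear_threshold

    def is_in_ear(self, electrode_counts):
        """Treat the mean reading across the ring electrodes crossing the
        threshold as 'tubular portion is in an ear canal' (claim 7)."""
        mean = sum(electrode_counts) / len(electrode_counts)
        return mean >= self.in_ear_threshold

    def equalization(self, electrode_counts):
        """Adjust an equalization setting based on the capacitive sensor
        measurements (claim 8): a sealed ear canal changes the low-frequency
        response, so a different preset is selected."""
        if self.is_in_ear(electrode_counts):
            return "sealed_ear_eq"
        return "open_air_eq"
```

A real implementation would debounce readings over time and calibrate the threshold per device; the single mean-versus-threshold test here only illustrates the claimed in-ear/out-of-ear decision.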
10,562 | 10,562 | 16,357,123 | 2,696 | The present invention provides a mobile terminal and a method of capturing an image using the same. The mobile terminal controls a camera conveniently and efficiently to capture an image and performs focusing in various manners to capture an image. Accordingly, a user can obtain a desired image easily and conveniently. | 1. A mobile terminal comprising:
a camera configured to capture an image, a touch screen configured to receive a touch input, and a processor configured to:
control the touch screen to display a preview image of an image to be captured by the camera,
control the touch screen to display an auto focus guide,
determine a first touch input received by the touch screen is a drag-and-drop touch,
control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred, and
control the camera to auto focus on an object present at the position where the auto focus guide is displayed. 2. The mobile terminal according to claim 1, wherein the processor is further configured to control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred after the first touch input is sustained at the position where the drop has occurred for a period of time that is equal to or more than a preset period of time. 3. The mobile terminal according to claim 1, wherein the processor is further configured to control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred after the drag-and-drop operation is executed. 4. The mobile terminal according to claim 1, wherein the drag-and-drop touch comprises contacting the auto focus guide displayed on a preview image, and then sliding the contact from the initially contacted position to another position while maintaining the contact with the touch screen during the sliding. 5. The mobile terminal according to claim 1, wherein the processor is further configured to control the camera to capture the image based on recognizing a release of the first touch input. 6. The mobile terminal according to claim 5, wherein the processor is further configured to control the camera to initiate the capture of the image immediately after the first touch input is released. 7. The mobile terminal according to claim 5, wherein the processor is further configured to control the camera to capture the image after a preset period of time has elapsed from the release of the first touch input. 8. The mobile terminal according to claim 1, wherein the processor is further configured to control the touch screen to display the auto focus guide at a central portion of the touch screen and after the first touch input is received to move the auto focus guide to the selected position based on receiving the drag-and-drop touch input. 9. 
The mobile terminal according to claim 1, wherein the processor is further configured to notify the user that auto focusing has been successful. 10. The mobile terminal according to claim 9, wherein the processor is further configured to notify the user that auto focusing has been successful by outputting a sound. 11. The mobile terminal according to claim 9, wherein the processor is further configured to notify the user that auto focusing has been successful by visually altering the auto focus guide. 12. A method for autofocusing, the method comprising:
displaying on a touchscreen a preview image of an image to be captured by a camera, displaying an auto focus guide on the touchscreen, determining a first touch input received by the touch screen is a drag-and-drop touch, moving on the touchscreen the auto focus guide to a position on the preview image where the drop has occurred, and auto focusing the camera on an object present at the position where the auto focus guide is displayed. 13. The method of claim 12, further comprising moving on the touchscreen the auto focus guide to a position on the preview image where the drop has occurred after the first touch input is sustained at the position where the drop has occurred for a period of time that is equal to or more than a preset period of time. 14. The method of claim 12, further comprising moving the auto focus guide to a position on the preview image where the drop has occurred after the drag-and-drop operation is executed. 15. The method of claim 12, wherein the drag-and-drop touch comprises contacting the auto focus guide displayed on a preview image, and then sliding the contact from the initially contacted position to another position while maintaining the contact with the touch screen during the sliding. 16. The method of claim 12, further comprising capturing the image with the camera based on recognizing a release of the first touch input. 17. The method of claim 16, further comprising initiating the capture of the image with the camera immediately after the first touch input is released. 18. The method of claim 16, further comprising capturing the image with the camera after a preset period of time has elapsed from the release of the first touch input. 19. The method of claim 12, further comprising:
displaying on the touch screen the auto focus guide at a central portion of the touch screen, and after the first touch input is received, moving the auto focus guide to the selected position based on receiving the drag-and-drop touch input. 20. The method of claim 12, further comprising notifying the user that auto focusing has been successful. 21. The method of claim 20, wherein notifying the user that auto focusing has been successful comprises outputting a sound. 22. The method of claim 20, wherein notifying the user that auto focusing has been successful comprises visually altering the auto focus guide. 23. A computer readable medium having stored thereon a computer program that when executed causes a mobile terminal to:
control a touch screen to display a preview image of an image to be captured by a camera, control the touch screen to display an auto focus guide, determine a first touch input received by the touch screen is a drag-and-drop touch, control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred, and control the camera to auto focus on an object present at the position where the auto focus guide is displayed. | The present invention provides a mobile terminal and a method of capturing an image using the same. The mobile terminal controls a camera conveniently and efficiently to capture an image and performs focusing in various manners to capture an image. Accordingly, a user can obtain a desired image easily and conveniently.1. A mobile terminal comprising:
a camera configured to capture an image, a touch screen configured to receive a touch input, and a processor configured to:
control the touch screen to display a preview image of an image to be captured by the camera,
control the touch screen to display an auto focus guide,
determine a first touch input received by the touch screen is a drag-and-drop touch,
control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred, and
control the camera to auto focus on an object present at the position where the auto focus guide is displayed. 2. The mobile terminal according to claim 1, wherein the processor is further configured to control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred after the first touch input is sustained at the position where the drop has occurred for a period of time that is equal to or more than a preset period of time. 3. The mobile terminal according to claim 1, wherein the processor is further configured to control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred after the drag-and-drop operation is executed. 4. The mobile terminal according to claim 1, wherein the drag-and-drop touch comprises contacting the auto focus guide displayed on a preview image, and then sliding the contact from the initially contacted position to another position while maintaining the contact with the touch screen during the sliding. 5. The mobile terminal according to claim 1, wherein the processor is further configured to control the camera to capture the image based on recognizing a release of the first touch input. 6. The mobile terminal according to claim 5, wherein the processor is further configured to control the camera to initiate the capture of the image immediately after the first touch input is released. 7. The mobile terminal according to claim 5, wherein the processor is further configured to control the camera to capture the image after a preset period of time has elapsed from the release of the first touch input. 8. The mobile terminal according to claim 1, wherein the processor is further configured to control the touch screen to display the auto focus guide at a central portion of the touch screen and after the first touch input is received to move the auto focus guide to the selected position based on receiving the drag-and-drop touch input. 9. 
The mobile terminal according to claim 1, wherein the processor is further configured to notify the user that auto focusing has been successful. 10. The mobile terminal according to claim 9, wherein the processor is further configured to notify the user that auto focusing has been successful by outputting a sound. 11. The mobile terminal according to claim 9, wherein the processor is further configured to notify the user that auto focusing has been successful by visually altering the auto focus guide. 12. A method for autofocusing, the method comprising:
displaying on a touchscreen a preview image of an image to be captured by a camera, displaying an auto focus guide on the touchscreen, determining a first touch input received by the touch screen is a drag-and-drop touch, moving on the touchscreen the auto focus guide to a position on the preview image where the drop has occurred, and auto focusing the camera on an object present at the position where the auto focus guide is displayed. 13. The method of claim 12, further comprising moving on the touchscreen the auto focus guide to a position on the preview image where the drop has occurred after the first touch input is sustained at the position where the drop has occurred for a period of time that is equal to or more than a preset period of time. 14. The method of claim 12, further comprising moving the auto focus guide to a position on the preview image where the drop has occurred after the drag-and-drop operation is executed. 15. The method of claim 12, wherein the drag-and-drop touch comprises contacting the auto focus guide displayed on a preview image, and then sliding the contact from the initially contacted position to another position while maintaining the contact with the touch screen during the sliding. 16. The method of claim 12, further comprising capturing the image with the camera based on recognizing a release of the first touch input. 17. The method of claim 16, further comprising initiating the capture of the image with the camera immediately after the first touch input is released. 18. The method of claim 16, further comprising capturing the image with the camera after a preset period of time has elapsed from the release of the first touch input. 19. The method of claim 12, further comprising:
displaying on the touch screen the auto focus guide at a central portion of the touch screen, and after the first touch input is received, moving the auto focus guide to the selected position based on receiving the drag-and-drop touch input. 20. The method of claim 12, further comprising notifying the user that auto focusing has been successful. 21. The method of claim 20, wherein notifying the user that auto focusing has been successful comprises outputting a sound. 22. The method of claim 20, wherein notifying the user that auto focusing has been successful comprises visually altering the auto focus guide. 23. A computer readable medium having stored thereon a computer program that when executed causes a mobile terminal to:
control a touch screen to display a preview image of an image to be captured by a camera, control the touch screen to display an auto focus guide, determine a first touch input received by the touch screen is a drag-and-drop touch, control the touch screen to move the auto focus guide to a position on the preview image where the drop has occurred, and control the camera to auto focus on an object present at the position where the auto focus guide is displayed. | 2,600 |
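Claims 1–2 of this record describe moving an auto focus guide to where a drag-and-drop touch came to rest and, once the touch has been sustained there for a preset period, auto focusing at that position. The flow can be sketched as a small state machine; the class, callback names, and dwell value are illustrative assumptions, not taken from the patent.

```python
import time

class AutoFocusGuide:
    """Sketch of the drag-and-drop auto-focus flow in claims 1-2: the
    guide moves to the drop position and, after the touch is sustained
    there for a preset period, the camera is asked to focus there."""

    def __init__(self, focus_camera, dwell_s=0.5, clock=time.monotonic):
        self.focus_camera = focus_camera   # callback: auto focus at (x, y)
        self.dwell_s = dwell_s             # preset hold period (claim 2)
        self.clock = clock                 # injectable for testing
        self.position = None               # where the guide is drawn
        self._drop = None                  # (x, y, time) of the last drop

    def on_drag(self, x, y):
        self._drop = None                  # contact still sliding: no drop yet

    def on_drop(self, x, y):
        self._drop = (x, y, self.clock())  # contact came to rest here

    def tick(self):
        """Poll from the UI loop: once the drop position has been sustained
        for the preset period, move the guide and trigger auto focus."""
        if self._drop is None:
            return False
        x, y, t0 = self._drop
        if self.clock() - t0 < self.dwell_s:
            return False
        self.position = (x, y)
        self.focus_camera(x, y)
        self._drop = None
        return True
```

Injecting the clock keeps the dwell logic deterministic under test; a production version would instead hook these handlers into the platform's touch-event callbacks.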
10,563 | 10,563 | 16,125,363 | 2,697 | Presenting an image of a scene may include capturing an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship, determining a second spatial relationship between a viewpoint and the display of the electronic device, warping the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship, and presenting the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. | 1. A method for presenting an image of a scene, comprising:
capturing an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship; determining a second spatial relationship between a viewpoint and the display of the electronic device; warping the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship; and presenting the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. 2. The method of claim 1, wherein warping the image further comprises:
back-projecting the image based on intrinsics of the camera and a depth in the scene; and re-projecting the image to the display based on the second spatial relationship. 3. The method of claim 2, further comprising:
selecting the depth using at least one selected from a group consisting of an autofocus operation, a stereo estimation, a time of flight operation, and a structured light operation; and backprojecting the image based on the selected depth. 4. The method of claim 2, wherein back-projecting the image comprises:
identifying four image corners, based on the intrinsics of the camera and the depth; and intersecting the four image corners with a plane defined by the display to obtain a 2D image transform. 5. The method of claim 1, wherein warping the image further comprises:
identifying an object in the scene; and warping the image such that a first portion of the object is visible in the scene from the viewpoint and an image of a second portion of the object is visible on the display from the viewpoint, and wherein the first portion and the second portion are substantially aligned. 6. The method of claim 1, further comprising:
detecting a movement of the electronic device; obtaining an updated image of the scene from the electronic device; determining an updated spatial relationship between an updated viewpoint and the display of the electronic device; warping the updated image based, at least in part, on the updated spatial relationship to obtain an updated warped image; and displaying the updated warped image on the display. 7. A computer readable medium comprising computer readable code for presenting an image of a scene, executable by one or more processors to:
capture an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship; determine a second spatial relationship between a viewpoint and the display of the electronic device; warp the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship; and present the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. 8. The computer readable medium of claim 7, wherein the computer readable code to warp the image further comprises computer readable code to:
back-project the image based on intrinsics of the camera and a depth in the scene; and re-project the image to the display based on the second spatial relationship. 9. The computer readable medium of claim 8, further comprising computer readable code to:
select the depth using at least one selected from a group consisting of an autofocus operation, a stereo estimation, a time of flight operation, and a structured light operation; and backproject the image based on the selected depth. 10. The computer readable medium of claim 8, wherein the computer readable code to back-project the image comprises computer readable code to:
identify four image corners, based on the intrinsics of the camera and the depth; and intersect the four image corners with a plane defined by the display to obtain a 2D image transform. 11. The computer readable medium of claim 7, wherein the computer readable code to warp the image further comprises computer readable code to perform a color correction on the image. 12. The computer readable medium of claim 11, wherein the computer readable code to perform the color correction on the image comprises computer readable code to perform a white balance function on the image. 13. The computer readable medium of claim 7, wherein the camera is a back-facing camera, and wherein a front-facing camera captures a second image comprising at least part of a user, and wherein the viewpoint is determined based on the second image. 14. The computer readable medium of claim 7, wherein the computer readable code to warp the image further comprises computer readable code to:
identify an object in the scene; and warp the image such that a first portion of the object is visible in the scene from the viewpoint and an image of a second portion of the object is visible on the display from the viewpoint, and wherein the first portion and the second portion are substantially aligned. 15. The computer readable medium of claim 7, further comprising computer readable code to:
detect a movement of the electronic device; obtain an updated image of the scene from the electronic device; determine an updated spatial relationship between an updated viewpoint and the display of the electronic device; warp the updated image based, at least in part, on the updated spatial relationship to obtain an updated warped image; and display the updated warped image on the display. 16. A system for presenting an image of a scene, comprising:
one or more processors; and one or more memory devices coupled to the one or more processors and comprising computer readable code executable by the one or more processors to:
capture an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship;
determine a second spatial relationship between a viewpoint and the display of the electronic device;
warp the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship; and
present the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. 17. The system of claim 16, wherein the computer readable code to warp the image further comprises computer readable code to perform a color correction on the image. 18. The system of claim 16, wherein the camera is a back-facing camera, and wherein a front-facing camera captures a second image comprising at least part of a user, and wherein the viewpoint is determined based on the second image. 19. The system of claim 16, wherein the computer readable code to warp the image further comprises computer readable code to:
identify an object in the scene; and warp the image such that a first portion of the object is visible in the scene from the viewpoint and an image of a second portion of the object is visible on the display from the viewpoint, and wherein the first portion and the second portion are substantially aligned. 20. The system of claim 16, further comprising computer readable code to:
detect a movement of the electronic device; obtain an updated image of the scene from the electronic device; determine an updated spatial relationship between an updated viewpoint and the display of the electronic device; warp the updated image based, at least in part, on the updated spatial relationship to obtain an updated warped image; and display the updated warped image on the display. | Presenting an image of a scene may include capturing an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship, determining a second spatial relationship between a viewpoint and the display of the electronic device, warping the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship, and presenting the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device.1. A method for presenting an image of a scene, comprising:
capturing an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship; determining a second spatial relationship between a viewpoint and the display of the electronic device; warping the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship; and presenting the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. 2. The method of claim 1, wherein warping the image further comprises:
back-projecting the image based on intrinsics of the camera and a depth in the scene; and re-projecting the image to the display based on the second spatial relationship. 3. The method of claim 2, further comprising:
selecting the depth using at least one selected from a group consisting of an autofocus operation, a stereo estimation, a time of flight operation, and a structured light operation; and backprojecting the image based on the selected depth. 4. The method of claim 2, wherein back-projecting the image comprises:
identifying four image corners, based on the intrinsics of the camera and the depth; and intersecting the four image corners with a plane defined by the display to obtain a 2D image transform. 5. The method of claim 1, wherein warping the image further comprises:
identifying an object in the scene; and warping the image such that a first portion of the object is visible in the scene from the viewpoint and an image of a second portion of the object is visible on the display from the viewpoint, and wherein the first portion and the second portion are substantially aligned. 6. The method of claim 1, further comprising:
detecting a movement of the electronic device; obtaining an updated image of the scene from the electronic device; determining an updated spatial relationship between an updated viewpoint and the display of the electronic device; warping the updated image based, at least in part, on the updated spatial relationship to obtain an updated warped image; and displaying the updated warped image on the display. 7. A computer readable medium comprising computer readable code for presenting an image of a scene, executable by one or more processors to:
capture an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship; determine a second spatial relationship between a viewpoint and the display of the electronic device; warp the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship; and present the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. 8. The computer readable medium of claim 7, wherein the computer readable code to warp the image further comprises computer readable code to:
back-project the image based on intrinsics of the camera and a depth in the scene; and re-project the image to the display based on the second spatial relationship. 9. The computer readable medium of claim 8, further comprising computer readable code to:
select the depth using at least one selected from a group consisting of an autofocus operation, a stereo estimation, a time of flight operation, and a structured light operation; and backproject the image based on the selected depth. 10. The computer readable medium of claim 8, wherein the computer readable code to back-project the image comprises computer readable code to:
identify four image corners, based on the intrinsics of the camera and the depth; and intersect the four image corners with a plane defined by the display to obtain a 2D image transform. 11. The computer readable medium of claim 7, wherein the computer readable code to warp the image further comprises computer readable code to perform a color correction on the image. 12. The computer readable medium of claim 11, wherein the computer readable code to perform the color correction on the image comprises computer readable code to perform a white balance function on the image. 13. The computer readable medium of claim 7, wherein the camera is a back-facing camera, and wherein a front-facing camera captures a second image comprising at least part of a user, and wherein the viewpoint is determined based on the second image. 14. The computer readable medium of claim 7, wherein the computer readable code to warp the image further comprises computer readable code to:
identify an object in the scene; and warp the image such that a first portion of the object is visible in the scene from the viewpoint and an image of a second portion of the object is visible on the display from the viewpoint, and wherein the first portion and the second portion are substantially aligned. 15. The computer readable medium of claim 7, further comprising computer readable code to:
detect a movement of the electronic device; obtain an updated image of the scene from the electronic device; determine an updated spatial relationship between an updated viewpoint and the display of the electronic device; warp the updated image based, at least in part, on the updated spatial relationship to obtain an updated warped image; and display the updated warped image on the display. 16. A system for presenting an image of a scene, comprising:
one or more processors; and one or more memory devices coupled to the one or more processors and comprising computer readable code executable by the one or more processors to:
capture an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship;
determine a second spatial relationship between a viewpoint and the display of the electronic device;
warp the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship; and
present the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device. 17. The system of claim 16, wherein the computer readable code to warp the image further comprises computer readable code to perform a color correction on the image. 18. The system of claim 16, wherein the camera is a back-facing camera, and wherein a front-facing camera captures a second image comprising at least part of a user, and wherein the viewpoint is determined based on the second image. 19. The system of claim 16, wherein the computer readable code to warp the image further comprises computer readable code to:
identify an object in the scene; and warp the image such that a first portion of the object is visible in the scene from the viewpoint and an image of a second portion of the object is visible on the display from the viewpoint, and wherein the first portion and the second portion are substantially aligned. 20. The system of claim 16, further comprising computer readable code to:
detect a movement of the electronic device; obtain an updated image of the scene from the electronic device; determine an updated spatial relationship between an updated viewpoint and the display of the electronic device; warp the updated image based, at least in part, on the updated spatial relationship to obtain an updated warped image; and display the updated warped image on the display. | 2,600 |
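Claims 2–4 of this record describe the warp as back-projecting the four image corners through the camera intrinsics at an estimated scene depth, then intersecting them with the plane defined by the display to obtain a 2D image transform. A minimal numpy sketch follows, assuming a pinhole camera model and illustrative frame conventions; all function names and parameters are hypothetical, not from the patent.

```python
import numpy as np

def backproject_corners(K, width, height, depth):
    """Back-project the four image corners to 3D points at a single scene
    depth, using the pinhole intrinsics K (claims 2 and 4)."""
    corners_px = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    rays = (np.linalg.inv(K) @ np.c_[corners_px, np.ones(4)].T).T
    return corners_px, rays * (depth / rays[:, 2:3])  # scale rays to the depth

def intersect_with_display_plane(points_cam, R, t, eye):
    """Move the 3D corners into the display frame (display plane = z = 0)
    and intersect each eye-to-corner line with that plane (claim 4)."""
    pts = points_cam @ R.T + t               # camera frame -> display frame
    hits = []
    for p in pts:
        d = p - eye                          # ray from the viewpoint
        s = -eye[2] / d[2]                   # line parameter where z reaches 0
        hits.append((eye + s * d)[:2])
    return np.array(hits)

def homography_from_corners(src, dst):
    """Recover the 2D image transform (claim 4) mapping the four image
    corners onto their display-plane intersections via a 4-point DLT."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)
```

Because the corners lie on one depth plane, the image-to-display mapping is an exact homography, so four correspondences fully determine the warp; the viewpoint (`eye`) would come from the front-facing camera per claims 13 and 18.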
10,564 | 10,564 | 15,628,532 | 2,688 | A lone worker system for performing a safety check is provided. The lone worker system includes a mobile device. The mobile device includes a memory storing a safety check application thereon and a processor. The processor is coupled to the memory and executes the safety check application. The processor initiates the safety check application with respect to a service call. The safety check application executes a use-case safety monitoring of the mobile device with respect to the service call to implement the safety check. The use-case safety monitoring detects an emergency situation with respect to a connectivity of the mobile device to a server, an activity of the mobile device, and a motion of the mobile device. | 1. A lone worker system for performing a safety check, the lone worker system comprising:
a mobile device comprising:
a memory storing a safety check application thereon; and
a processor, coupled to the memory, executing the safety check application; and
wherein the processor initiates the safety check application with respect to a service call,
wherein the safety check application executes a use-case safety monitoring of the mobile device to implement the safety check,
wherein the use-case safety monitoring detects an emergency situation with respect to each of a connectivity of the mobile device to a server, an activity of the mobile device, and a motion of the mobile device,
wherein detecting the activity of the mobile device comprises detecting a use inactivity of a predefined time or longer,
wherein the safety check application activates an emergency mode that escalates the service call in response to the detection of the emergency situation by the use-case safety monitoring,
wherein the safety check application executes an emergency alert at a conclusion of a count by an emergency timer of the safety check application under the emergency mode or the server executes an emergency call based on a communication from the safety check application of the mobile device under the emergency mode. 2. (canceled) 3. (Canceled) 4. The lone worker system of claim 1, wherein the emergency alert comprises a notification to at least one of a supervisor, a call center operator, and emergency responder. 5. The lone worker system of claim 4, wherein the notification comprises at least one of an automatic phone call, an automatic email, and an automatic text message. 6. The lone worker system of claim 1, wherein the safety check application detects a location of the mobile device. 7. The lone worker system of claim 1, wherein the lone worker system comprises the server, and
wherein the server is in communication with the mobile device. 8. (canceled) 9. The lone worker system of claim 7, wherein the server executes an emergency call based on the mobile device being out-of-communication at a conclusion of a count by an emergency timer. 10. The lone worker system of claim 1, wherein the safety check application determines the connectivity of the mobile device to the server by determining a location of the mobile device. 11. A processor-implemented method for performing a safety check, the processor-implemented method being implemented by a safety check application stored on a memory of a mobile device, the safety check application being executed by a processor of the mobile device, the processor being coupled to the memory, the processor-implemented method comprising:
initiating, by the processor, the safety check application with respect to a service call; and executing, by the safety check application, a use-case safety monitoring of the mobile device to implement the safety check, wherein the use-case safety monitoring detects an emergency situation with respect to each of a connectivity of the mobile device to a server, an activity of the mobile device, and a motion of the mobile device, wherein detecting the activity of the mobile device comprises detecting a use inactivity of a predefined time or longer, wherein the safety check application activates an emergency mode that escalates the service call in response to the detection of the emergency situation by the use-case safety monitoring, wherein the safety check application executes an emergency alert at a conclusion of a count by an emergency timer of the safety check application under the emergency mode or the server executes an emergency call based on a communication from the safety check application of the mobile device under the emergency mode. 12. (canceled) 13. (canceled) 14. The processor-implemented method of claim 11, wherein the emergency alert comprises a notification to at least one of a supervisor, a call center operator, and emergency responder. 15. The processor-implemented method of claim 14, wherein the notification comprises at least one of an automatic phone call, an automatic email, and an automatic text message. 16. The processor-implemented method of claim 11, wherein the safety check application detects a location of the mobile device. 17. The processor-implemented method of claim 11, wherein the server is in communication with the mobile device. 18. (canceled) 19. The processor-implemented method of claim 17, wherein the server executes an emergency call based on the mobile device being out-of-communication at a conclusion of a count by an emergency timer. 20.
The processor-implemented method of claim 18, wherein the safety check application determines the connectivity of the mobile device to the server by determining a location of the mobile device. | A lone worker system for performing a safety check is provided. The lone worker system includes a mobile device. The mobile device includes a memory storing a safety check application thereon and a processor. The processor is coupled to the memory and executes the safety check application. The processor initiates the safety check application with respect to a service call. The safety check application executes a use-case safety monitoring of the mobile device with respect to the service call to implement the safety check. The use-case safety monitoring detects an emergency situation with respect to a connectivity of the mobile device to a server, an activity of the mobile device, and a motion of the mobile device. 1. A lone worker system for performing a safety check, the lone worker system comprising:
a mobile device comprising:
a memory storing a safety check application thereon; and
a processor, coupled to the memory, executing the safety check application; and
wherein the processor initiates the safety check application with respect to a service call,
wherein the safety check application executes a use-case safety monitoring of the mobile device to implement the safety check,
wherein the use-case safety monitoring detects an emergency situation with respect to each of a connectivity of the mobile device to a server, an activity of the mobile device, and a motion of the mobile device,
wherein detecting the activity of the mobile device comprises detecting a use inactivity of a predefined time or longer,
wherein the safety check application activates an emergency mode that escalates the service call in response to the detection of the emergency situation by the use-case safety monitoring,
wherein the safety check application executes an emergency alert at a conclusion of a count by an emergency timer of the safety check application under the emergency mode or the server executes an emergency call based on a communication from the safety check application of the mobile device under the emergency mode. 2. (canceled) 3. (Canceled) 4. The lone worker system of claim 1, wherein the emergency alert comprises a notification to at least one of a supervisor, a call center operator, and emergency responder. 5. The lone worker system of claim 4, wherein the notification comprises at least one of an automatic phone call, an automatic email, and an automatic text message. 6. The lone worker system of claim 1, wherein the safety check application detects a location of the mobile device. 7. The lone worker system of claim 1, wherein the lone worker system comprises the server, and
wherein the server is in communication with the mobile device. 8. (canceled) 9. The lone worker system of claim 7, wherein the server executes an emergency call based on the mobile device being out-of-communication at a conclusion of a count by an emergency timer. 10. The lone worker system of claim 1, wherein the safety check application determines the connectivity of the mobile device to the server by determining a location of the mobile device. 11. A processor-implemented method for performing a safety check, the processor-implemented method being implemented by a safety check application stored on a memory of a mobile device, the safety check application being executed by a processor of the mobile device, the processor being coupled to the memory, the processor-implemented method comprising:
initiating, by the processor, the safety check application with respect to a service call; and executing, by the safety check application, a use-case safety monitoring of the mobile device to implement the safety check, wherein the use-case safety monitoring detects an emergency situation with respect to each of a connectivity of the mobile device to a server, an activity of the mobile device, and a motion of the mobile device, wherein detecting the activity of the mobile device comprises detecting a use inactivity of a predefined time or longer, wherein the safety check application activates an emergency mode that escalates the service call in response to the detection of the emergency situation by the use-case safety monitoring, wherein the safety check application executes an emergency alert at a conclusion of a count by an emergency timer of the safety check application under the emergency mode or the server executes an emergency call based on a communication from the safety check application of the mobile device under the emergency mode. 12. (canceled) 13. (canceled) 14. The processor-implemented method of claim 11, wherein the emergency alert comprises a notification to at least one of a supervisor, a call center operator, and emergency responder. 15. The processor-implemented method of claim 14, wherein the notification comprises at least one of an automatic phone call, an automatic email, and an automatic text message. 16. The processor-implemented method of claim 11, wherein the safety check application detects a location of the mobile device. 17. The processor-implemented method of claim 11, wherein the server is in communication with the mobile device. 18. (canceled) 19. The processor-implemented method of claim 17, wherein the server executes an emergency call based on the mobile device being out-of-communication at a conclusion of a count by an emergency timer. 20.
The processor-implemented method of claim 18, wherein the safety check application determines the connectivity of the mobile device to the server by determining a location of the mobile device. | 2,600 |
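The use-case safety monitoring recited in claims 1 and 11 above (connectivity to a server, use inactivity of a predefined time or longer, motion, then an emergency-timer countdown before the alert) can be sketched as a small state machine. This is a minimal illustration with assumed thresholds and method names, not the application's implementation:

```python
import time


class SafetyCheck:
    """Sketch of the lone-worker safety check: enter emergency mode when
    connectivity is lost, the user is inactive too long, or abnormal motion
    is detected; escalate to an alert when the emergency timer runs out."""

    def __init__(self, inactivity_limit_s=300.0, alert_countdown_s=60.0,
                 clock=time.monotonic):
        self.inactivity_limit_s = inactivity_limit_s
        self.alert_countdown_s = alert_countdown_s
        self._clock = clock                 # injectable for testing
        self._last_activity = clock()
        self._emergency_since = None        # set when emergency mode activates

    def record_activity(self):
        """User interaction: reset inactivity and cancel any escalation."""
        self._last_activity = self._clock()
        self._emergency_since = None

    def check(self, server_reachable, abnormal_motion):
        """Run one monitoring pass; returns OK, EMERGENCY_MODE, or EMERGENCY_ALERT."""
        now = self._clock()
        inactive = (now - self._last_activity) >= self.inactivity_limit_s
        if (not server_reachable) or abnormal_motion or inactive:
            if self._emergency_since is None:
                self._emergency_since = now  # enter emergency mode, start timer
        if self._emergency_since is not None:
            if now - self._emergency_since >= self.alert_countdown_s:
                return "EMERGENCY_ALERT"     # e.g. notify supervisor/call center
            return "EMERGENCY_MODE"
        return "OK"
```

The injectable clock is only a testing convenience; a deployment would also need the server-side out-of-communication timer that claims 9 and 19 recite.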
10,565 | 10,565 | 15,407,550 | 2,643 | Techniques for providing notification messages to a user are provided. An example method of sending an alert message according to the disclosure includes receiving a notification message for a user, determining a location of the user, determining one or more notification devices based on the location of the user, generating the alert message based on one or more notification preferences associated with the user and the one or more notification devices, and sending the alert message to at least one of the one or more notification devices based on the notification preferences. | 1. A method of sending an alert message, comprising:
receiving, at a communication device, a notification message for a user; determining, by the communication device, a location of the user; determining, by the communication device, one or more notification devices based on the location of the user; generating, by the communication device, the alert message based on one or more notification preferences associated with the user and the one or more notification devices; and sending, by the communication device, the alert message to at least one of the one or more notification devices based on the one or more notification preferences. 2. The method of claim 1 wherein the alert message includes an indication of the one or more notification preferences. 3. The method of claim 1 wherein the communication device is a central controller. 4. The method of claim 3 wherein determining the location of the user includes providing a user identification associated with the user to the central controller and receiving an indication of the location of the user from the central controller. 5. The method of claim 1 wherein the at least one of the one or more notification devices is configured to receive the alert message from a second notification device. 6. The method of claim 1 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 7. The method of claim 1 wherein determining the location of the user includes executing a user search function. 8. The method of claim 1 wherein determining the location of the user includes obtaining an image with a camera on at least one of the one or more notification devices. 9. A device for providing an alert message to a user, comprising:
at least one processor configured to:
receive a notification for the user;
determine a location of the user;
determine one or more notification devices based on the location of the user;
determine one or more notification preferences associated with the user and the one or more notification devices;
generate the alert message for at least one of the one or more notification devices based on the one or more notification preferences; and
a transceiver, communicatively coupled to the at least one processor, configured to transmit the alert message wirelessly from the device. 10. The device of claim 9 wherein the one or more notification preferences includes a privacy preference indicating conditions in which a receiving notification device that receives the alert message may present the alert message. 11. The device of claim 9 wherein the one or more notification preferences includes a display area preference indicating an area on a display in which a receiving notification device that receives the alert message will present the alert message. 12. The device of claim 9 wherein the one or more notification preferences includes a volume level preference indicating a volume in which a receiving notification device that receives the alert message will present the alert message. 13. The device of claim 9 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 14. The device of claim 9 wherein the at least one processor is configured to receive information from at least one presence sensor and determine the location of the user based at least in part on the information received from the at least one presence sensor. 15. The device of claim 9 wherein the at least one processor is configured to receive an indication of the location of the user from the one or more notification devices. 16. An apparatus for sending an alert message, comprising:
means for receiving a notification message for a user; means for determining a location of the user; means for determining one or more notification devices based on the location of the user; means for generating the alert message based on one or more notification preferences associated with the user and the one or more notification devices; and means for sending the alert message to at least one of the one or more notification devices based on the one or more notification preferences. 17. The apparatus of claim 16 wherein the alert message includes an indication of the one or more notification preferences. 18. The apparatus of claim 16 further comprising a central control means for receiving the notification message and sending the alert message. 19. The apparatus of claim 16 wherein the means for determining the location of the user includes a means for sending a user identification associated with the user to a central controller and a means for receiving an indication of the location of the user from the central controller. 20. The apparatus of claim 16 wherein the at least one of the one or more notification devices is configured to receive the alert message from a second notification device. 21. The apparatus of claim 16 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 22. The apparatus of claim 16 wherein the means determining the location of the user includes a means for receiving an input from at least one presence sensor. 23. The apparatus of claim 16 wherein the means for determining the location of the user includes a means for obtaining an image with a camera on at least one of the one or more notification devices. 24. A non-transitory processor-readable storage medium comprising processor-readable instructions configured to cause one or more processors to send an alert message, comprising:
code for receiving a notification message for a user; code for determining a location of the user; code for determining one or more notification devices based on the location of the user; code for generating the alert message based on one or more notification preferences associated with the user and the one or more notification devices; and code for sending the alert message to at least one of the one or more notification devices based on the one or more notification preferences. 25. The storage medium of claim 24 wherein the alert message includes an indication of the one or more notification preferences. 26. The storage medium of claim 24 further comprising code for sending the notification message to a central controller, and code for sending the alert message from the central controller. 27. The storage medium of claim 24 wherein the code for determining the location of the user includes code for providing a user identification associated with the user to a central controller and code for receiving an indication of the location of the user from the central controller. 28. The storage medium of claim 24 wherein the at least one of the one or more notification devices includes code for receiving the alert message from a second notification device. 29. The storage medium of claim 24 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 30. The storage medium of claim 24 wherein the code for determining the location of the user includes code for receiving an input from at least one presence sensor. | Techniques for providing notification messages to a user are provided.
An example method of sending an alert message according to the disclosure includes receiving a notification message for a user, determining a location of the user, determining one or more notification devices based on the location of the user, generating the alert message based on one or more notification preferences associated with the user and the one or more notification devices, and sending the alert message to at least one of the one or more notification devices based on the notification preferences. 1. A method of sending an alert message, comprising:
receiving, at a communication device, a notification message for a user; determining, by the communication device, a location of the user; determining, by the communication device, one or more notification devices based on the location of the user; generating, by the communication device, the alert message based on one or more notification preferences associated with the user and the one or more notification devices; and sending, by the communication device, the alert message to at least one of the one or more notification devices based on the one or more notification preferences. 2. The method of claim 1 wherein the alert message includes an indication of the one or more notification preferences. 3. The method of claim 1 wherein the communication device is a central controller. 4. The method of claim 3 wherein determining the location of the user includes providing a user identification associated with the user to the central controller and receiving an indication of the location of the user from the central controller. 5. The method of claim 1 wherein the at least one of the one or more notification devices is configured to receive the alert message from a second notification device. 6. The method of claim 1 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 7. The method of claim 1 wherein determining the location of the user includes executing a user search function. 8. The method of claim 1 wherein determining the location of the user includes obtaining an image with a camera on at least one of the one or more notification devices. 9. A device for providing an alert message to a user, comprising:
at least one processor configured to:
receive a notification for the user;
determine a location of the user;
determine one or more notification devices based on the location of the user;
determine one or more notification preferences associated with the user and the one or more notification devices;
generate the alert message for at least one of the one or more notification devices based on the one or more notification preferences; and
a transceiver, communicatively coupled to the at least one processor, configured to transmit the alert message wirelessly from the device. 10. The device of claim 9 wherein the one or more notification preferences includes a privacy preference indicating conditions in which a receiving notification device that receives the alert message may present the alert message. 11. The device of claim 9 wherein the one or more notification preferences includes a display area preference indicating an area on a display in which a receiving notification device that receives the alert message will present the alert message. 12. The device of claim 9 wherein the one or more notification preferences includes a volume level preference indicating a volume in which a receiving notification device that receives the alert message will present the alert message. 13. The device of claim 9 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 14. The device of claim 9 wherein the at least one processor is configured to receive information from at least one presence sensor and determine the location of the user based at least in part on the information received from the at least one presence sensor. 15. The device of claim 9 wherein the at least one processor is configured to receive an indication of the location of the user from the one or more notification devices. 16. An apparatus for sending an alert message, comprising:
means for receiving a notification message for a user; means for determining a location of the user; means for determining one or more notification devices based on the location of the user; means for generating the alert message based on one or more notification preferences associated with the user and the one or more notification devices; and means for sending the alert message to at least one of the one or more notification devices based on the one or more notification preferences. 17. The apparatus of claim 16 wherein the alert message includes an indication of the one or more notification preferences. 18. The apparatus of claim 16 further comprising a central control means for receiving the notification message and sending the alert message. 19. The apparatus of claim 16 wherein the means for determining the location of the user includes a means for sending a user identification associated with the user to a central controller and a means for receiving an indication of the location of the user from the central controller. 20. The apparatus of claim 16 wherein the at least one of the one or more notification devices is configured to receive the alert message from a second notification device. 21. The apparatus of claim 16 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 22. The apparatus of claim 16 wherein the means for determining the location of the user includes a means for receiving an input from at least one presence sensor. 23. The apparatus of claim 16 wherein the means for determining the location of the user includes a means for obtaining an image with a camera on at least one of the one or more notification devices. 24. A non-transitory processor-readable storage medium comprising processor-readable instructions configured to cause one or more processors to send an alert message, comprising:
code for receiving a notification message for a user; code for determining a location of the user; code for determining one or more notification devices based on the location of the user; code for generating the alert message based on one or more notification preferences associated with the user and the one or more notification devices; and code for sending the alert message to at least one of the one or more notification devices based on the one or more notification preferences. 25. The storage medium of claim 24 wherein the alert message includes an indication of the one or more notification preferences. 26. The storage medium of claim 24 further comprising code for sending the notification message to a central controller, and code for sending the alert message from the central controller. 27. The storage medium of claim 24 wherein the code for determining the location of the user includes code for providing a user identification associated with the user to a central controller and code for receiving an indication of the location of the user from the central controller. 28. The storage medium of claim 24 wherein the at least one of the one or more notification devices includes code for receiving the alert message from a second notification device. 29. The storage medium of claim 24 wherein the alert message is a pending-notification message including information configured to enable the at least one of the one or more notification devices to display an icon associated with the user. 30. The storage medium of claim 24 wherein the code for determining the location of the user includes code for receiving an input from at least one presence sensor. | 2,600 |
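The flow recited in claim 1 of the record above (locate the user, pick the notification devices at that location, build each alert from the notification preferences, then send) can be sketched as a small routing function. The device fields and preference keys here are illustrative assumptions, not terms from the application:

```python
from dataclasses import dataclass, field


@dataclass
class NotificationDevice:
    name: str
    room: str                      # stand-in for "location of the user"
    prefs: dict = field(default_factory=dict)  # e.g. {"private": True, "volume": 0.2}


def route_alert(message, user_room, devices):
    """Build one alert per notification device co-located with the user,
    honoring per-device notification preferences (privacy, volume)."""
    alerts = []
    for dev in devices:
        if dev.room != user_room:          # device selection based on user location
            continue
        # Privacy preference: suppress content on shared/public displays.
        text = "New notification" if dev.prefs.get("private") else message
        alerts.append({"device": dev.name,
                       "text": text,
                       "volume": dev.prefs.get("volume", 0.5)})
    return alerts
```

A usage example: routing a call notification to a living-room TV (marked private) and a speaker, while skipping a kitchen photo frame, yields two alerts with per-device text and volume.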
10,566 | 10,566 | 15,855,826 | 2,689 | Various systems and methods for implementing a reverse-facing anti-collision mechanism are described herein. An anti-collision system for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle includes a vehicle controller subsystem to receive, from a sensor array interface, sensor data from a rear-facing sensor incorporated into the lead vehicle; determine, using a processor, from the sensor data that the trailing vehicle is a collision risk; and initiate, via a light controller, a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. | 1. An anti-collision system for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, the system comprising:
a vehicle controller subsystem to:
receive, from a sensor array interface, sensor data from a rear-facing sensor incorporated into the lead vehicle;
determine, using a processor, from the sensor data that the trailing vehicle is a collision risk; and
initiate, via a light controller, a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. 2. The system of claim 1, wherein the rear-facing sensor is a radar sensor. 3. The system of claim 1, wherein the rear-facing sensor is a LIDAR sensor. 4. The system of claim 1, wherein the rear-facing sensor is a camera sensor. 5. The system of claim 1, wherein to determine that the trailing vehicle is the collision risk, the vehicle controller subsystem is to:
determine a distance between the lead vehicle and the trailing vehicle; determine a relative velocity of the trailing vehicle with respect to the lead vehicle; and determine that the collision risk exists when the distance is not far enough for the trailing vehicle to safely maneuver in view of the relative velocity. 6. The system of claim 1, wherein to initiate the visual alert, the light controller is to initiate an illumination pattern on a taillight cluster of the lead vehicle. 7. The system of claim 6, wherein the illumination pattern comprises a series of flashing lights in the taillight cluster. 8. The system of claim 7, wherein the series of flashing lights comprises at least two of: a brake light, a turn signal, or a reverse light of the taillight cluster. 9. The system of claim 1, wherein to initiate the visual alert, the light controller is to initiate an illumination intensity change on a taillight cluster of the lead vehicle. 10. The system of claim 9, wherein the illumination intensity change comprises increasing an illumination intensity of at least one of: a brake light, a turn signal, or a reverse light of the taillight cluster. 11. The system of claim 1, wherein to initiate the visual alert, the light controller is to alternatively flash a light in a left taillight cluster and a light in a right taillight cluster of the lead vehicle. 12. The system of claim 1, wherein to initiate the visual alert, the light controller is to project a visual warning on a rear window of the lead vehicle. 13. The system of claim 12, wherein the visual warning is one of: a textual message, an icon, a symbol, or a light pattern. 14. The system of claim 1, wherein to initiate the visual alert, the light controller is to project a visual warning on a roadway surface behind the lead vehicle. 15. The system of claim 14, wherein the visual warning is one of: a textual message, an icon, a symbol, or a light pattern. 16. 
The system of claim 1, wherein to initiate the visual alert, the light controller is to illuminate a light that is not a part of a taillight and not a part of a third-brake light assembly of the lead vehicle. 17. A method for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, the method comprising:
receiving sensor data from a rear-facing sensor incorporated into the lead vehicle; determining from the sensor data that the trailing vehicle is a collision risk; and initiating a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. 18. The method of claim 17, wherein the rear-facing sensor is a radar sensor. 19. The method of claim 17, wherein the rear-facing sensor is a LIDAR sensor. 20. The method of claim 17, wherein the rear-facing sensor is a camera sensor. 21. The method of claim 17, wherein determining that the trailing vehicle is the collision risk comprises:
determining a distance between the lead vehicle and the trailing vehicle; determining a relative velocity of the trailing vehicle with respect to the lead vehicle; and determining that the collision risk exists when the distance is not far enough for the trailing vehicle to safely maneuver in view of the relative velocity. 22. At least one machine-readable medium including instructions for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, the instructions, when executed by a machine, cause the machine to perform the operations comprising:
receiving sensor data from a rear-facing sensor incorporated into the lead vehicle; determining from the sensor data that the trailing vehicle is a collision risk; and initiating a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. 23. The machine-readable medium of claim 22, wherein determining that the trailing vehicle is the collision risk comprises:
determining a distance between the lead vehicle and the trailing vehicle; determining a relative velocity of the trailing vehicle with respect to the lead vehicle; and determining that the collision risk exists when the distance is not far enough for the trailing vehicle to safely maneuver in view of the relative velocity. 24. The machine-readable medium of claim 22, wherein initiating the visual alert comprises initiating an illumination pattern on a taillight cluster of the lead vehicle. 25. The machine-readable medium of claim 24, wherein the illumination pattern comprises a series of flashing lights in the taillight cluster. | Various systems and methods for implementing a reverse-facing anti-collision mechanism are described herein. An anti-collision system for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, includes a vehicle controller subsystem to receive from a sensor array interface, sensor data from a rear-facing sensor incorporated into the lead vehicle; determine, using a processor, from the sensor data that the trailing vehicle is a collision risk; and initiate, via a light controller, a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle.1. An anti-collision system for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, the system comprising:
a vehicle controller subsystem to:
receive from a sensor array interface, sensor data from a rear-facing sensor incorporated into the lead vehicle;
determine, using a processor, from the sensor data that the trailing vehicle is a collision risk; and
initiate, via a light controller, a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. 2. The system of claim 1, wherein the rear-facing sensor is a radar sensor. 3. The system of claim 1, wherein the rear-facing sensor is a LIDAR sensor. 4. The system of claim 1, wherein the rear-facing sensor is a camera sensor. 5. The system of claim 1, wherein to determine that the trailing vehicle is the collision risk, the vehicle controller subsystem is to:
determine a distance between the lead vehicle and the trailing vehicle; determine a relative velocity of the trailing vehicle with respect to the lead vehicle; and determine that the collision risk exists when the distance is not far enough for the trailing vehicle to safely maneuver in view of the relative velocity. 6. The system of claim 1, wherein to initiate the visual alert, the light controller is to initiate an illumination pattern on a taillight cluster of the lead vehicle. 7. The system of claim 6, wherein the illumination pattern comprises a series of flashing lights in the taillight cluster. 8. The system of claim 7, wherein the series of flashing lights comprises at least two of: a brake light, a turn signal, or a reverse light of the taillight cluster. 9. The system of claim 1, wherein to initiate the visual alert, the light controller is to initiate an illumination intensity change on a taillight cluster of the lead vehicle. 10. The system of claim 9, wherein the illumination intensity change comprises increasing an illumination intensity of at least one of: a brake light, a turn signal, or a reverse light of the taillight cluster. 11. The system of claim 1, wherein to initiate the visual alert, the light controller is to alternately flash a light in a left taillight cluster and a light in a right taillight cluster of the lead vehicle. 12. The system of claim 1, wherein to initiate the visual alert, the light controller is to project a visual warning on a rear window of the lead vehicle. 13. The system of claim 12, wherein the visual warning is one of: a textual message, an icon, a symbol, or a light pattern. 14. The system of claim 1, wherein to initiate the visual alert, the light controller is to project a visual warning on a roadway surface behind the lead vehicle. 15. The system of claim 14, wherein the visual warning is one of: a textual message, an icon, a symbol, or a light pattern. 16. 
The system of claim 1, wherein to initiate the visual alert, the light controller is to illuminate a light that is not a part of a taillight and not a part of a third-brake light assembly of the lead vehicle. 17. A method for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, the method comprising:
receiving sensor data from a rear-facing sensor incorporated into the lead vehicle; determining from the sensor data that the trailing vehicle is a collision risk; and initiating a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. 18. The method of claim 17, wherein the rear-facing sensor is a radar sensor. 19. The method of claim 17, wherein the rear-facing sensor is a LIDAR sensor. 20. The method of claim 17, wherein the rear-facing sensor is a camera sensor. 21. The method of claim 17, wherein determining that the trailing vehicle is the collision risk comprises:
determining a distance between the lead vehicle and the trailing vehicle; determining a relative velocity of the trailing vehicle with respect to the lead vehicle; and determining that the collision risk exists when the distance is not far enough for the trailing vehicle to safely maneuver in view of the relative velocity. 22. At least one machine-readable medium including instructions for a lead vehicle to provide an alert to a trailing vehicle behind the lead vehicle, the instructions, when executed by a machine, cause the machine to perform the operations comprising:
receiving sensor data from a rear-facing sensor incorporated into the lead vehicle; determining from the sensor data that the trailing vehicle is a collision risk; and initiating a visual alert to the trailing vehicle, the visual alert in addition to or in place of brake lights on the lead vehicle. 23. The machine-readable medium of claim 22, wherein determining that the trailing vehicle is the collision risk comprises:
determining a distance between the lead vehicle and the trailing vehicle; determining a relative velocity of the trailing vehicle with respect to the lead vehicle; and determining that the collision risk exists when the distance is not far enough for the trailing vehicle to safely maneuver in view of the relative velocity. 24. The machine-readable medium of claim 22, wherein initiating the visual alert comprises initiating an illumination pattern on a taillight cluster of the lead vehicle. 25. The machine-readable medium of claim 24, wherein the illumination pattern comprises a series of flashing lights in the taillight cluster. | 2,600 |
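The collision-risk test recited in claims 5, 21, and 23 above (distance plus relative velocity compared against a safe-maneuver threshold) can be sketched in a few lines. This is an illustrative reading of the claim language, not the patented implementation; the reaction time and deceleration limit are assumed calibration values.

```python
def collision_risk(distance_m, closing_speed_mps,
                   reaction_time_s=1.5, max_decel_mps2=6.0):
    """Flag the trailing vehicle as a collision risk when the gap is
    smaller than the distance needed to react and brake.

    reaction_time_s and max_decel_mps2 are illustrative assumptions,
    not values taken from the claims.
    """
    if closing_speed_mps <= 0:  # trailing vehicle is not closing in
        return False
    # distance covered during reaction, plus braking distance v^2 / (2a)
    needed_m = (closing_speed_mps * reaction_time_s
                + closing_speed_mps ** 2 / (2 * max_decel_mps2))
    return distance_m < needed_m
```

A true result would then drive the light controller to initiate the visual alert (flashing pattern, intensity change, or projected warning) described in the dependent claims.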
10,567 | 10,567 | 16,012,802 | 2,622 | A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user ( 22 ) of a computerized system, and receiving a two dimensional (2D) image of the user, the image including an eye ( 34 ) of the user. 3D coordinates of a head ( 32 ) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye. | 1. A method, comprising:
receiving a sequence of three-dimensional (3D) maps of at least a part of a body of a user of a computerized system; processing the 3D maps in order to identify a gesture performed by the user; identifying a direction of a gaze of the user by analyzing light reflected off an element of the eye; and controlling a function of the computerized system responsively to the gesture and the direction of the gaze. 2. The method according to claim 1, wherein the element is selected from a list comprising a pupil, an iris and a cornea. 3. The method according to claim 1, wherein processing the 3D maps comprises identifying a position of a finger in the 3D maps. 4. The method according to claim 3, wherein identifying the position of the finger comprises detecting a point-select gesture with respect to an interactive item, in which the finger moves toward the interactive item, and then moves along a plane from the interactive item. 5. The method according to claim 3, wherein identifying the position of the finger comprises detecting a point-touch gesture with respect to an interactive item, in which the finger makes an initial movement toward the interactive item, and then moves in a direction perpendicular to the initial movement. 6. The method according to claim 3, wherein identifying the position of the finger comprises detecting respective locations of a tip and a knuckle of the finger, and identifying a direction in which the finger is pointing responsively to the detected locations. 7. The method according to claim 1, wherein controlling the function of the computerized system comprises identifying an interactive item presented on a display coupled to the computerized system and in the direction of the gaze, and changing a state of the interactive item. 8. The method according to claim 7, wherein the state of the interactive item is changed responsively to the gaze. 9. 
The method according to claim 7, wherein the state of the interactive item is changed responsively to a vocal command received from the user. 10. The method according to claim 7, wherein the state of the interactive item is changed responsively to the gesture. 11. The method according to claim 7, wherein identifying the position of the finger comprises detecting a pointing motion of a finger, and wherein changing the state comprises moving the identified interactive item on the display in a direction pointed to by the finger. 12. The method according to claim 1, wherein controlling the function of the computerized system comprises performing an action associated with an interactive item presented in the direction of the gaze on a display coupled to the computerized system, upon detecting that the gesture comprises a motion of a limb toward the display, followed by a deceleration of the motion of the limb toward the display, and then followed by a motion of the limb away from the display. 13. The method according to claim 1, wherein controlling the function of the computerized system comprises activating, by the computerized system, a power saving technique upon the user directing the gaze away from the display. 14. The method according to claim 1, wherein controlling the function of the computerized system comprises identifying a device coupled to the computerized system and in the direction of the gaze, and changing a state of the device. 15. An apparatus, comprising:
at least one sensing device configured to produce a sequence of three dimensional (3D) maps of at least a part of a body of a user of a computerized system and to detect light reflected off an element of an eye of the user; and a processor coupled to the at least one sensing device and configured to process the 3D maps in order to identify a gesture performed by the user and to identify a direction of a gaze of the user based on the light reflected off the element of the eye, and to control a function of the computerized system responsively to the gesture and the direction of the gaze. 16. The apparatus according to claim 15, wherein the element is selected from a list comprising a pupil, an iris and a cornea. 17. The apparatus according to claim 15, wherein the processor is configured to identify the gesture responsively to a position of a finger in the 3D maps. 18. The apparatus according to claim 15, wherein controlling the function of the computerized system comprises identifying an interactive item presented on a display coupled to the computerized system and in the direction of the gaze, and changing a state of the interactive item. 19. The apparatus according to claim 15, wherein controlling the function of the computerized system comprises identifying a device coupled to the computerized system and in the direction of the gaze, and changing a state of the device. 20. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a sequence of three-dimensional (3D) maps of at least a part of a body of a user of the computer, and to receive an input indicative of light reflected off an element of an eye of the user,
wherein the instructions cause the computer to process the 3D maps in order to identify a gesture performed by the user and to identify a direction of a gaze of the user based on the light reflected off the element of the eye, and to control a function of the computerized system responsively to the gesture and the direction of the gaze. | A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user ( 22 ) of a computerized system, and receiving a two dimensional (2D) image of the user, the image including an eye ( 34 ) of the user. 3D coordinates of a head ( 32 ) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye.1. A method, comprising:
receiving a sequence of three-dimensional (3D) maps of at least a part of a body of a user of a computerized system; processing the 3D maps in order to identify a gesture performed by the user; identifying a direction of a gaze of the user by analyzing light reflected off an element of the eye; and controlling a function of the computerized system responsively to the gesture and the direction of the gaze. 2. The method according to claim 1, wherein the element is selected from a list comprising a pupil, an iris and a cornea. 3. The method according to claim 1, wherein processing the 3D maps comprises identifying a position of a finger in the 3D maps. 4. The method according to claim 3, wherein identifying the position of the finger comprises detecting a point-select gesture with respect to an interactive item, in which the finger moves toward the interactive item, and then moves along a plane from the interactive item. 5. The method according to claim 3, wherein identifying the position of the finger comprises detecting a point-touch gesture with respect to an interactive item, in which the finger makes an initial movement toward the interactive item, and then moves in a direction perpendicular to the initial movement. 6. The method according to claim 3, wherein identifying the position of the finger comprises detecting respective locations of a tip and a knuckle of the finger, and identifying a direction in which the finger is pointing responsively to the detected locations. 7. The method according to claim 1, wherein controlling the function of the computerized system comprises identifying an interactive item presented on a display coupled to the computerized system and in the direction of the gaze, and changing a state of the interactive item. 8. The method according to claim 7, wherein the state of the interactive item is changed responsively to the gaze. 9. 
The method according to claim 7, wherein the state of the interactive item is changed responsively to a vocal command received from the user. 10. The method according to claim 7, wherein the state of the interactive item is changed responsively to the gesture. 11. The method according to claim 7, wherein identifying the position of the finger comprises detecting a pointing motion of a finger, and wherein changing the state comprises moving the identified interactive item on the display in a direction pointed to by the finger. 12. The method according to claim 1, wherein controlling the function of the computerized system comprises performing an action associated with an interactive item presented in the direction of the gaze on a display coupled to the computerized system, upon detecting that the gesture comprises a motion of a limb toward the display, followed by a deceleration of the motion of the limb toward the display, and then followed by a motion of the limb away from the display. 13. The method according to claim 1, wherein controlling the function of the computerized system comprises activating, by the computerized system, a power saving technique upon the user directing the gaze away from the display. 14. The method according to claim 1, wherein controlling the function of the computerized system comprises identifying a device coupled to the computerized system and in the direction of the gaze, and changing a state of the device. 15. An apparatus, comprising:
at least one sensing device configured to produce a sequence of three dimensional (3D) maps of at least a part of a body of a user of a computerized system and to detect light reflected off an element of an eye of the user; and a processor coupled to the at least one sensing device and configured to process the 3D maps in order to identify a gesture performed by the user and to identify a direction of a gaze of the user based on the light reflected off the element of the eye, and to control a function of the computerized system responsively to the gesture and the direction of the gaze. 16. The apparatus according to claim 15, wherein the element is selected from a list comprising a pupil, an iris and a cornea. 17. The apparatus according to claim 15, wherein the processor is configured to identify the gesture responsively to a position of a finger in the 3D maps. 18. The apparatus according to claim 15, wherein controlling the function of the computerized system comprises identifying an interactive item presented on a display coupled to the computerized system and in the direction of the gaze, and changing a state of the interactive item. 19. The apparatus according to claim 15, wherein controlling the function of the computerized system comprises identifying a device coupled to the computerized system and in the direction of the gaze, and changing a state of the device. 20. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a sequence of three-dimensional (3D) maps of at least a part of a body of a user of the computer, and to receive an input indicative of light reflected off an element of an eye of the user,
wherein the instructions cause the computer to process the 3D maps in order to identify a gesture performed by the user and to identify a direction of a gaze of the user based on the light reflected off the element of the eye, and to control a function of the computerized system responsively to the gesture and the direction of the gaze. | 2,600 |
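The abstract of this record describes combining 3D head coordinates with a 2D eye image to identify gaze direction. A minimal sketch of that fusion, assuming head pose is expressed as yaw/pitch angles and the pupil's offset from the eye center is measured in image pixels, might look like the following; the gain constant is a hypothetical per-user calibration, not a value from the patent:

```python
import math

def gaze_direction(head_yaw_deg, head_pitch_deg,
                   pupil_dx_px, pupil_dy_px, gain_deg_per_px=0.1):
    """Combine head pose (from the 3D map) with the pupil offset
    (from the 2D eye image) into a gaze unit vector.

    gain_deg_per_px is an assumed calibration constant mapping pupil
    displacement in pixels to degrees of eye rotation.
    """
    yaw = head_yaw_deg + pupil_dx_px * gain_deg_per_px
    pitch = head_pitch_deg + pupil_dy_px * gain_deg_per_px
    # unit vector: x to the right, y up, z straight ahead
    cy, sy = math.cos(math.radians(yaw)), math.sin(math.radians(yaw))
    cp, sp = math.cos(math.radians(pitch)), math.sin(math.radians(pitch))
    return (sy * cp, sp, cy * cp)
```

Intersecting this vector with the display plane would yield the on-screen interactive item "in the direction of the gaze" that the claims act on.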
10,568 | 10,568 | 13,674,773 | 2,600 | A system which performs social interaction analysis for a plurality of participants includes a processor. The processor is configured to determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs. The processor is configured to determine the social interaction between the participants based on the similarities between the first spatially filtered output and each of the second spatially filtered outputs and display an output that is representative of the social interaction between the participants. The first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. | 1. A system which performs social interaction analysis for a plurality of participants, comprising:
a processor configured to:
determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs,
determine a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs, and
display an output representative of the social interaction between the participants;
wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 2. The system of claim 1, wherein the output is displayed in real-time as the participants are interacting with each other. 3. The system of claim 1, wherein the output comprises an interaction graph comprising:
a plurality of identifiers, each identifier corresponding to a respective participant; and a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants. 4. The system of claim 3, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted. 5. The system of claim 3, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants. 6. The system of claim 3, wherein each of the participants is a speaker. 7. The system of claim 3, wherein the interaction graph is used to assess group dynamics or topic dynamics. 8. The system of claim 3, wherein the interaction graph indicates social interaction information among the participants. 9. The system of claim 8, wherein the social interaction information is accumulated over a period of time. 10. The system of claim 3, wherein the interaction graph is displayed on a smartphone. 11. The system of claim 3, wherein the interaction graph is displayed on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook. 12. The system of claim 3, wherein each indicator represents active participant location and energy. 13. The system of claim 12, further comprising an additional indicator that represents a refined active participant location and energy. 14. The system of claim 12, wherein the indicators comprise beam patterns. 15. 
The system of claim 1, wherein the processor is further configured to perform real-time meeting analysis of a meeting the participants are participating in. 16. The system of claim 1, wherein the processor is further configured to generate a personal time line for a participant that shows an interaction history of the participant with respect to the other participants, a meeting topic, or a subject matter. 17. The system of claim 1, wherein the processor is further configured to generate participant interaction statistics over time. 18. The system of claim 1, wherein the processor is further configured to generate an evolution of interaction between participants over time. 19. The system of claim 1, wherein the processor is further configured to generate an interaction graph among the participants. 20. The system of claim 1, further comprising a user interface that is configured for collaboratively zooming into one of the participants in real-time. 21. A method for performing social interaction analysis for a plurality of participants, comprising:
determining a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs; determining a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and displaying an output representative of the social interaction between the participants; wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 22. The method of claim 21, further comprising displaying the output in real-time as the participants are interacting with each other. 23. The method of claim 21, wherein the output comprises an interaction graph comprising:
a plurality of identifiers, each identifier corresponding to a respective participant; and a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants. 24. The method of claim 23, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted. 25. The method of claim 23, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants. 26. The method of claim 23, wherein each of the participants is a speaker. 27. The method of claim 23, further comprising using the interaction graph to assess group dynamics or topic dynamics. 28. The method of claim 23, wherein the interaction graph indicates social interaction information among the participants. 29. The method of claim 28, further comprising accumulating the social interaction information over a period of time. 30. The method of claim 23, further comprising displaying the interaction graph on a smartphone. 31. The method of claim 23, further comprising displaying the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook. 32. The method of claim 23, wherein each indicator represents active participant location and energy. 33. The method of claim 23, further comprising an additional indicator that represents a refined active participant location and energy. 34. The method of claim 23, wherein the indicators comprise beam patterns. 35. 
The method of claim 21, further comprising performing real-time meeting analysis of a meeting the participants are participating in. 36. The method of claim 21, further comprising generating a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter. 37. The method of claim 21, further comprising generating participant interaction statistics over time. 38. The method of claim 21, further comprising generating an evolution of interaction between participants over time. 39. The method of claim 21, further comprising generating an interaction graph among the participants. 40. The method of claim 21, further comprising collaboratively zooming into one of the participants in real-time. 41. An apparatus for performing social interaction analysis for a plurality of participants, comprising:
means for determining a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs; means for determining a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and means for displaying an output representative of the social interaction between the participants; wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 42. The apparatus of claim 41, further comprising means for displaying the output in real-time as the participants are interacting with each other. 43. The apparatus of claim 41, wherein the output comprises an interaction graph comprising:
a plurality of identifiers, each identifier corresponding to a respective participant; and a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants. 44. The apparatus of claim 43, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted. 45. The apparatus of claim 43, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants. 46. The apparatus of claim 43, wherein each of the participants is a speaker. 47. The apparatus of claim 43, further comprising means for using the interaction graph to assess group dynamics or topic dynamics. 48. The apparatus of claim 43, wherein the interaction graph indicates social interaction information among the participants. 49. The apparatus of claim 48, further comprising means for accumulating the social interaction information over a period of time. 50. The apparatus of claim 43, further comprising means for displaying the interaction graph on a smartphone. 51. The apparatus of claim 43, further comprising means for displaying the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook. 52. The apparatus of claim 43, wherein each indicator represents active participant location and energy. 53. The apparatus of claim 52, further comprising an additional indicator that represents a refined active participant location and energy. 54. The apparatus of claim 52, wherein the indicators comprise beam patterns. 55. 
The apparatus of claim 41, further comprising means for performing real-time meeting analysis of a meeting the participants are participating in. 56. The apparatus of claim 41, further comprising means for generating a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter. 57. The apparatus of claim 41, further comprising means for generating participant interaction statistics over time. 58. The apparatus of claim 41, further comprising means for generating an evolution of interaction between participants over time. 59. The apparatus of claim 41, further comprising means for generating an interaction graph among the participants. 60. The apparatus of claim 41, further comprising means for collaboratively zooming into one of the participants in real-time. 61. A non-transitory computer-readable medium comprising computer-readable instructions for causing a processor to:
determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs; determine a social interaction between a plurality of participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and display an output representative of the social interaction between the plurality of participants; wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 62. The computer-readable medium of claim 61, further comprising instructions for causing the processor to display the output in real-time as the participants are interacting with each other. 63. The computer-readable medium of claim 61, wherein the output comprises an interaction graph comprising:
a plurality of identifiers, each identifier corresponding to a respective participant; and a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants. 64. The computer-readable medium of claim 63, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted. 65. The computer-readable medium of claim 63, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants. 66. The computer-readable medium of claim 63, wherein each of the participants is a speaker. 67. The computer-readable medium of claim 63, further comprising instructions for causing the processor to use the interaction graph to assess group dynamics or topic dynamics. 68. The computer-readable medium of claim 63, wherein the interaction graph indicates social interaction information among the participants. 69. The computer-readable medium of claim 68, further comprising instructions for causing the processor to accumulate the social interaction information over a period of time. 70. The computer-readable medium of claim 63, further comprising instructions for causing the processor to display the interaction graph on a smartphone. 71. The computer-readable medium of claim 63, further comprising instructions for causing the processor to display the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook. 72. 
The computer-readable medium of claim 63, wherein each indicator represents active participant location and energy. 73. The computer-readable medium of claim 72, further comprising an additional indicator that represents a refined active participant location and energy. 74. The computer-readable medium of claim 72, wherein the indicators comprise beam patterns. 75. The computer-readable medium of claim 61, further comprising instructions for causing the processor to perform real-time meeting analysis of a meeting the participants are participating in. 76. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter. 77. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate participant interaction statistics over time. 78. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate an evolution of interaction between participants over time. 79. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate an interaction graph among the participants. 80. The computer-readable medium of claim 61, further comprising instructions for causing the processor to collaboratively zoom into one of the participants in real-time. | A system which performs social interaction analysis for a plurality of participants includes a processor. The processor is configured to determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs. 
The processor is configured to determine the social interaction between the participants based on the similarities between the first spatially filtered output and each of the second spatially filtered outputs and display an output that is representative of the social interaction between the participants. The first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 1. A system which performs social interaction analysis for a plurality of participants, comprising:
a processor configured to:
determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs,
determine a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs, and
display an output representative of the social interaction between the participants;
wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 2. The system of claim 1, wherein the output is displayed in real-time as the participants are interacting with each other. 3. The system of claim 1, wherein the output comprises an interaction graph comprising:
a plurality of identifiers, each identifier corresponding to a respective participant; and a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants. 4. The system of claim 3, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted. 5. The system of claim 3, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants. 6. The system of claim 3, wherein each of the participants is a speaker. 7. The system of claim 3, wherein the interaction graph is used to assess group dynamics or topic dynamics. 8. The system of claim 3, wherein the interaction graph indicates social interaction information among the participants. 9. The system of claim 8, wherein the social interaction information is accumulated over a period of time. 10. The system of claim 3, wherein the interaction graph is displayed on a smartphone. 11. The system of claim 3, wherein the interaction graph is displayed on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook. 12. The system of claim 3, wherein each indicator represents active participant location and energy. 13. The system of claim 12, further comprising an additional indicator that represents a refined active participant location and energy. 14. The system of claim 12, wherein the indicators comprise beam patterns. 15. 
The system of claim 1, wherein the processor is further configured to perform real-time meeting analysis of a meeting the participants are participating in. 16. The system of claim 1, wherein the processor is further configured to generate a personal time line for a participant that shows an interaction history of the participant with respect to the other participants, a meeting topic, or a subject matter. 17. The system of claim 1, wherein the processor is further configured to generate participant interaction statistics over time. 18. The system of claim 1, wherein the processor is further configured to generate an evolution of interaction between participants over time. 19. The system of claim 1, wherein the processor is further configured to generate an interaction graph among the participants. 20. The system of claim 1, further comprising a user interface that is configured for collaboratively zooming into one of the participants in real-time. 21. A method for performing social interaction analysis for a plurality of participants, comprising:
determining a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs; determining a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and displaying an output representative of the social interaction between the participants; wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 22. The method of claim 21, further comprising displaying the output in real-time as the participants are interacting with each other. 23. The method of claim 21, wherein the output comprises an interaction graph comprising:
a plurality of identifiers, each identifier corresponding to a respective participant; and a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants. 24. The method of claim 23, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted. 25. The method of claim 23, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants. 26. The method of claim 23, wherein each of the participants is a speaker. 27. The method of claim 23, further comprising using the interaction graph to assess group dynamics or topic dynamics. 28. The method of claim 23, wherein the interaction graph indicates social interaction information among the participants. 29. The method of claim 28, further comprising accumulating the social interaction information over a period of time. 30. The method of claim 23, further comprising displaying the interaction graph on a smartphone. 31. The method of claim 23, further comprising displaying the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook. 32. The method of claim 23, wherein each indicator represents active participant location and energy. 33. The method of claim 23, further comprising an additional indicator that represents a refined active participant location and energy. 34. The method of claim 23, wherein the indicators comprise beam patterns. 35. 
The method of claim 21, further comprising performing real-time meeting analysis of a meeting the participants are participating in. 36. The method of claim 21, further comprising generating a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter. 37. The method of claim 21, further comprising generating participant interaction statistics over time. 38. The method of claim 21, further comprising generating an evolution of interaction between participants over time. 39. The method of claim 21, further comprising generating an interaction graph among the participants. 40. The method of claim 21, further comprising collaboratively zooming into one of the participants in real-time. 41. An apparatus for performing social interaction analysis for a plurality of participants, comprising:
means for determining a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs; means for determining a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and means for displaying an output representative of the social interaction between the participants; wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. 42. The apparatus of claim 41, further comprising means for displaying the output in real-time as the participants are interacting with each other. | 2,600
10,569 | 10,569 | 15,721,647 | 2,683 | A detection and response device for a surveillance system detects events, responds to events, or both. The detection and response device may be used with or provided by a variety of surveillance systems, including peer to peer surveillance architectures. The device may utilize one or more defined geospaces. If an event occurs in a geospace a predefined response may then be provided. The predefined response may include automatically targeting one or more cameras to areas relevant to the event and presenting one or more predefined views optimized for viewing the event. If an event does not occur within a geospace, the detection and response device may provide one or more default responses. | 1. A detection and response device for automatically selecting and presenting surveillance information collected by one or more capture devices comprising:
one or more transceivers configured to receive data indicating that an event has occurred; one or more storage devices configured to store, for each of the one or more capture devices, an associated range and viewable area, wherein the range is represented as a radius around the capture device, and wherein the viewable area is represented as at least one shape that defines an area that can be seen by the capture device; and a response system configured to:
select at least one of the one or more capture devices to target the event based on whether the event occurred within the range and/or viewable area of the one or more capture devices; and
control the selected capture device to target the event. 2. The detection and response device of claim 1 further comprising a routing system configured to share the surveillance information from the one or more capture devices with one or more mobile video displays. 3. The detection and response device of claim 1,
wherein the one or more storage devices are further configured to store one or more geospaces each comprising data identifying one or more physical areas, one or more instructions associated with each of the one or more geospaces, one or more event types, and one or more user interface settings associated with each of the one or more geospaces, and wherein the response system is further configured to transmit one or more commands to the one or more capture devices through the one or more transceivers, the one or more commands defined by the one or more instructions. 4. (canceled) 5. The detection and response device of claim 2, wherein the response system is further configured to present on a display a predefined view comprising surveillance information from at least one of the one or more capture devices according to the one or more user interface settings associated with the event geospace, wherein the one or more user interface settings include data identifying at least one of the one or more capture devices, the one or more storage devices being configured to store the data along with the one or more user interface settings. 6. The detection and response device of claim 5, wherein the response system is configured to establish the predefined view such that the predefined view only displays surveillance information from the at least one capture device identified in the one or more user interface settings. 7. The detection and response device of claim 5, wherein the one or more transceivers are configured to receive location information identifying the location of one or more mobile units, and wherein the response system is configured to establish the predefined view such that the predefined view presents the location of the one or more mobile units on the display. 8. A surveillance system for providing automated responses to one or more events comprising:
one or more cameras; one or more storage devices configured to store, for each of the one or more cameras, an associated range and viewable area, wherein the range is represented as a radius around the camera, and wherein the viewable area is represented as at least one shape that defines an area that can be seen by the camera; and a detection and response device in communication with the one or more cameras, wherein the detection and response device is configured to:
detect the occurrence of an event;
select at least one of the one or more cameras to target the event based on whether the event occurred within the range and/or viewable area of the one or more cameras; and
control the selected at least one of the one or more cameras to target the event. 9. The surveillance system of claim 8, wherein the detection and response device is further configured to:
select one of the one or more cameras to target the event based on the camera range and the viewable area of each of the one or more cameras, wherein the event is not within the viewable area of the selected camera, wherein the event is within the range of the selected camera; and control the selected camera to target the event. 10. The surveillance system of claim 9, wherein the at least one of the one or more cameras targeting the event is nearest the event relative to the other cameras. 11-15. (canceled) 16. A method for responding to an event with a surveillance system having one or more cameras comprising:
receiving and storing on one or more storage devices, for each of the one or more cameras, an associated range and viewable area, wherein the range is represented as a radius around the camera, and wherein the viewable area is represented as at least one shape that defines an area that can be seen by the camera; detecting the occurrence of the event with one or more sensors; selecting at least one of the one or more cameras to target the event based on whether the event occurred within the range and/or viewable area of the one or more cameras; and controlling the selected at least one of the one or more cameras to target the event. 17. The method of claim 16 further comprising receiving and presenting the location of one or more mobile units on a display. 18. The method of claim 16 further comprising:
presenting one or more video streams from at least one of the one or more cameras on a display;
receiving and presenting the location of a mobile unit on the display; and
transmitting at least one of the one or more video streams to the mobile unit. 19. The method of claim 16 further comprising selecting at least one of the one or more cameras to target the event based on the camera range and the viewable area of the one or more cameras, wherein the event is not within the viewable area of the selected camera, wherein the event is within the range of the selected camera. 20. The method of claim 16 further comprising selecting at least one of the one or more cameras to target the event based on the camera range and the viewable area of the one or more cameras, wherein the event is both within the viewable area and the range of the selected camera. 21. The method of claim 16, further comprising transmitting information to an output device near the occurred event, including controlling a speaker to emit an alarm sound, a distracting sound, or a painful sound. 22. The method of claim 16, wherein selecting the at least one camera includes selecting one or more cameras that are closest to the event. 23. The method of claim 16, further comprising:
storing a mapping between zoom levels of a camera and viewable locations provided by the zoom levels; and using the mapping to target the camera to provide a view of the event. 24. The method of claim 16, wherein the selected camera is a mobile camera. 25. The device of claim 1, wherein the selected camera is a mobile camera. 26. The system of claim 8, wherein the selected camera is a mobile camera. 27. The system of claim 8, wherein the viewable area associated with at least one of the cameras comprises one or more separate areas. | A detection and response device for a surveillance system detects events, responds to events, or both. The detection and response device may be used with or provided by a variety of surveillance systems, including peer to peer surveillance architectures. The device may utilize one or more defined geospaces. If an event occurs in a geospace a predefined response may then be provided. The predefined response may include automatically targeting one or more cameras to areas relevant to the event and presenting one or more predefined views optimized for viewing the event. If an event does not occur within a geospace, the detection and response device may provide one or more default responses.1. A detection and response device for automatically selecting and presenting surveillance information collected by one or more capture devices comprising:
one or more transceivers configured to receive data indicating that an event has occurred; one or more storage devices configured to store, for each of the one or more capture devices, an associated range and viewable area, wherein the range is represented as a radius around the capture device, and wherein the viewable area is represented as at least one shape that defines an area that can be seen by the capture device; and a response system configured to:
select at least one of the one or more capture devices to target the event based on whether the event occurred within the range and/or viewable area of the one or more capture devices; and
control the selected capture device to target the event. 2. The detection and response device of claim 1 further comprising a routing system configured to share the surveillance information from the one or more capture devices with one or more mobile video displays. 3. The detection and response device of claim 1,
wherein the one or more storage devices are further configured to store one or more geospaces each comprising data identifying one or more physical areas, one or more instructions associated with each of the one or more geospaces, one or more event types, and one or more user interface settings associated with each of the one or more geospaces, and wherein the response system is further configured to transmit one or more commands to the one or more capture devices through the one or more transceivers, the one or more commands defined by the one or more instructions. 4. (canceled) 5. The detection and response device of claim 2, wherein the response system is further configured to present on a display a predefined view comprising surveillance information from at least one of the one or more capture devices according to the one or more user interface settings associated with the event geospacer, wherein the one or more user interface settings include data identifying at least one of the one or more capture devices, the one or more storage devices being configured to store the data along with the one or more user interface settings. 6. The detection and response device of claim 5, wherein the response system is configured to establish the predefined view such that the predefined view only displays surveillance information from the at least one capture device identified in the one or more user interface settings. 7. The detection and response device of claim 5, wherein the one or more transceivers are configured to receive location information identifying the location of one or more mobile units, and wherein the response system is configured to establish the predefined view such that the predefined view presents the location of the one or more mobile units on the display. 8. A surveillance system for providing automated responses to one or more events comprising:
one or more cameras; one or more storage devices configured to store, for each of the one or more cameras, an associated range and viewable area, wherein the range is represented as a radius around the camera, and wherein the viewable area is represented as at least one shape that defines an area that can be seen by the camera; and a detection and response device in communication with the one or more cameras, wherein the detection and response device is configured to:
detect the occurrence of an event;
select at least one of the one or more cameras to target the event based on whether the event occurred within the range and/or viewable area of the one or more cameras; and
control the selected at least one of the one or more cameras to target the event. 9. The surveillance system of claim 8, wherein the detection and response device is further configured to:
select one of the one or more cameras to target the event based on the camera range and the viewable area of each of the one or more cameras, wherein the event is not within the viewable area of the selected camera, wherein the event is within the range of the selected camera; and control the selected camera to target the event. 10. The surveillance system of claim 9, wherein the at least one of the one or more cameras targeting the event is nearest the event relative to the other cameras. 11-15. (canceled) 16. A method for responding to an event with a surveillance system having one or more cameras comprising:
receiving and storing on one or more storage devices, for each of the one or more cameras, an associated range and viewable area, wherein the range is represented as a radius around the camera, and wherein the viewable area is represented as at least one shape that defines an area that can be seen by the camera; detecting the occurrence of the event with one or more sensors; selecting at least one of the one or more cameras to target the event based on whether the event occurred within the range and/or viewable area of the one or more cameras; and controlling the selected at least one of the one or more cameras to target the event. 17. The method of claim 16 further comprising receiving and presenting the location of one or more mobile units on a display. 18. The method of claim 16 further comprising:
presenting one or more video streams from at least one of the one or more cameras on a display;
receiving and presenting the location of a mobile unit on the display; and
transmitting at least one of the one or more video streams to the mobile unit. 19. The method of claim 16 further comprising selecting at least one of the one or more cameras to target the event based on the camera range and the viewable area of the one or more cameras, wherein the event is not within the viewable area of the selected camera, wherein the event is within the range of the selected camera. 20. The method of claim 16 further comprising selecting at least one of the one or more cameras to target the event based on the camera range and the viewable area of the one or more cameras, wherein the event is both within the viewable area and the range of the selected camera. 21. The method of claim 16, further comprising transmitting information to an output device near the occurred event, including controlling a speaker to emit an alarm sound, a distracting sound, or a painful sound. 22. The method of claim 16, wherein selecting the at least one camera includes selecting one or more cameras that are closest to the event. 23. The method of claim 16, further comprising:
storing a mapping between zoom levels of a camera and viewable locations provided by the zoom levels; and using the mapping to target the camera to provide a view of the event. 24. The method of claim 16, wherein the selected camera is a mobile camera. 25. The device of claim 1, wherein the selected camera is a mobile camera. 26. The system of claim 8, wherein the selected camera is a mobile camera. 27. The system of claim 8, wherein the viewable area associated with at least one of the cameras comprises one or more separate areas. | 2,600 |
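As an illustrative sketch (not part of the application text), the camera-selection logic of claims 8-10 and 22 can be modeled as a two-tier search: prefer cameras whose viewable area contains the event, fall back to cameras whose range radius merely covers it, and pick the nearest camera in each tier. All names here (`select_camera`, the camera dictionaries, the polygon representation of a viewable area) are assumptions for illustration, not terms from the claims.

```python
import math

def point_in_polygon(pt, poly):
    # Standard ray-casting test; the claims only say the viewable area is
    # "at least one shape", so a polygon is one possible representation.
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def select_camera(event, cameras):
    """Prefer cameras whose viewable area contains the event (claim 20);
    fall back to cameras whose range covers it (claims 9 and 19); pick the
    nearest camera in the chosen tier (claims 10 and 22)."""
    in_view, in_range = [], []
    for cam in cameras:
        d = math.dist(event, cam["pos"])
        if d <= cam["range"] and point_in_polygon(event, cam["viewable"]):
            in_view.append((d, cam))
        elif d <= cam["range"]:
            in_range.append((d, cam))
    pool = in_view or in_range
    return min(pool, key=lambda t: t[0])[1] if pool else None
```

An event outside every camera's range yields `None`, which would correspond to no camera being controlled to target the event.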
10,570 | 10,570 | 15,278,814 | 2,643 | In an access control system, an ancillary user device is used in conjunction with a mobile computing device to broadcast user information for authentication. The mobile computing device and ancillary user device are paired, and user information is transmitted from the mobile computing device to the ancillary user device. The user information can be stored and/or hashed by the ancillary user device, and an origin flag can be set on the user information, before the user information is transmitted to the positioning unit of the access control system. An attachment mechanism attaches the ancillary user device to the user's body. | 1. An ancillary user device for interacting with access control systems, the device comprising:
a wireless interface for transmitting user information to the access control systems; and a controller for storing the user information, which was received from a mobile computing device. 2. The device according to claim 1, wherein the wireless interface for transmitting user information to the access control systems is a Bluetooth transceiver. 3. The device according to claim 1, wherein the ancillary user device is paired with the mobile computing device. 4. The device according to claim 3, wherein the ancillary user device verifies that the user information originates from the mobile computing device that was previously paired with the ancillary user device. 5. The device according to claim 1, wherein the user information is a token hash. 6. The device according to claim 1, wherein the user information is stored before being transmitted to the access control systems. 7. The device according to claim 6, wherein updated user information is received from the mobile computing device when the user information stored on the ancillary user device becomes stale. 8. The device according to claim 1, wherein an origin flag is set on the user information by the ancillary user device before the user information is transmitted to the access control systems. 9. The device according to claim 1, wherein the user information is a token. 10. The device according to claim 9, wherein the user information is hashed by the ancillary user device before it is transmitted to the access control systems. 11. The device according to claim 1, wherein the ancillary user device is worn by a user via an attachment mechanism. 12. A method for providing user information to access control systems, comprising:
a mobile computing device passing user information of a user to an ancillary user device; and the ancillary user device transmitting the user information to the access control systems. 13. The method according to claim 12, wherein the user information is transmitted via a Bluetooth transceiver. 14. The method according to claim 12, wherein the ancillary user device is paired with the mobile computing device. 15. The method according to claim 14, wherein the ancillary user device verifies that the user information received from the mobile computing device originates from the mobile computing device that was previously paired with the ancillary user device. 16. The method according to claim 12, wherein the user information is a token hash. 17. The method according to claim 12, wherein the user information is stored before being transmitted to the access control systems. 18. The method according to claim 17, wherein updated user information is received from the mobile computing device when the user information stored on the ancillary user device becomes stale. 19. The method according to claim 12, wherein an origin flag is set on the user information by the ancillary user device before the user information is transmitted to the access control systems. 20. (canceled) 21. The method according to claim 12, wherein the user information is hashed by the ancillary user device before it is transmitted to the access control systems. 22. (canceled) 23. An ancillary user device for interacting with access control systems, the device comprising:
a wireless interface for transmitting user information including a token hash to the access control systems; a controller for storing the user information including the token hash, which was received from a mobile computing device; wherein updated user information is received from the mobile computing device when a current token hash stored on the ancillary user device becomes stale; and an attachment mechanism enabling the ancillary user device to be worn by a user. 24. A method for providing user information to an access control system and controlling an access point, comprising:
a mobile computing device passing user information of a user to an ancillary user device; the ancillary user device receiving the user information and verifying that the user information originated from the mobile computing device to which the ancillary user device is paired; the ancillary user device then transmitting the user information to the access control system; a positioning unit located near an access point detecting the user information broadcast by the ancillary user device and determining whether a user is in a predetermined threshold area of the access point; a verification system determining if the user is an authorized user for the access point based on the user information broadcast by the ancillary user device; and if the user is determined to be an authorized user and the user was also determined to be within the threshold area, then a door controller is signaled to enable access through the access point. | In an access control system, an ancillary user device is used in conjunction with a mobile computing device to broadcast user information for authentication. The mobile computing device and ancillary user device are paired, and user information is transmitted from the mobile computing device to the ancillary user device. The user information can be stored and/or hashed by the ancillary user device, and an origin flag can be set on the user information, before the user information is transmitted to the positioning unit of the access control system. An attachment mechanism attaches the ancillary user device to the user's body.1. An ancillary user device for interacting with access control systems, the device comprising:
a wireless interface for transmitting user information to the access control systems; and a controller for storing the user information, which was received from a mobile computing device. 2. The device according to claim 1, wherein the wireless interface for transmitting user information to the access control systems is a Bluetooth transceiver. 3. The device according to claim 1, wherein the ancillary user device is paired with the mobile computing device. 4. The device according to claim 3, wherein the ancillary user device verifies that the user information originates from the mobile computing device that was previously paired with the ancillary user device. 5. The device according to claim 1, wherein the user information is a token hash. 6. The device according to claim 1, wherein the user information is stored before being transmitted to the access control systems. 7. The device according to claim 6, wherein updated user information is received from the mobile computing device when the user information stored on the ancillary user device becomes stale. 8. The device according to claim 1, wherein an origin flag is set on the user information by the ancillary user device before the user information is transmitted to the access control systems. 9. The device according to claim 1, wherein the user information is a token. 10. The device according to claim 9, wherein the user information is hashed by the ancillary user device before it is transmitted to the access control systems. 11. The device according to claim 1, wherein the ancillary user device is worn by a user via an attachment mechanism. 12. A method for providing user information to access control systems, comprising:
a mobile computing device passing user information of a user to an ancillary user device; and the ancillary user device transmitting the user information to the access control systems. 13. The method according to claim 12, wherein the user information is transmitted via a Bluetooth transceiver. 14. The method according to claim 12, wherein the ancillary user device is paired with the mobile computing device. 15. The method according to claim 14, wherein the ancillary user device verifies that the user information received from the mobile computing device originates from the mobile computing device that was previously paired with the ancillary user device. 16. The method according to claim 12, wherein the user information is a token hash. 17. The method according to claim 12, wherein the user information is stored before being transmitted to the access control systems. 18. The method according to claim 17, wherein updated user information is received from the mobile computing device when the user information stored on the ancillary user device becomes stale. 19. The method according to claim 12, wherein an origin flag is set on the user information by the ancillary user device before the user information is transmitted to the access control systems. 20. (canceled) 21. The method according to claim 12, wherein the user information is hashed by the ancillary user device before it is transmitted to the access control systems. 22. (canceled) 23. An ancillary user device for interacting with access control systems, the device comprising:
a wireless interface for transmitting user information including a token hash to the access control systems; a controller for storing the user information including the token hash, which was received from a mobile computing device; wherein updated user information is received from the mobile computing device when a current token hash stored on the ancillary user device becomes stale; and an attachment mechanism enabling the ancillary user device to be worn by a user. 24. A method for providing user information to an access control system and controlling an access point, comprising:
a mobile computing device passing user information of a user to an ancillary user device; the ancillary user device receiving the user information and verifying that the user information originated from the mobile computing device to which the ancillary user device is paired; the ancillary user device then transmitting the user information to the access control system; a positioning unit located near an access point detecting the user information broadcast by the ancillary user device and determining whether a user is in a predetermined threshold area of the access point; a verification system determining if the user is an authorized user for the access point based on the user information broadcast by the ancillary user device; and if the user is determined to be an authorized user and the user was also determined to be within the threshold area, then a door controller is signaled to enable access through the access point. | 2,600 |
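As an illustrative sketch (not part of the application text), the ancillary-device behavior recited in claims 4, 7, 8, and 10 — accept tokens only from the previously paired mobile device, refresh when stale, hash before transmission, and set an origin flag — can be modeled as follows. The class name, the 60-second freshness window, and the choice of SHA-256 are assumptions for illustration; the claims do not specify a hash algorithm or staleness criterion.

```python
import hashlib
import time

STALE_AFTER = 60.0  # seconds; hypothetical freshness window (claims 7 and 18)

class AncillaryDevice:
    """Hypothetical model of the claimed ancillary user device."""

    def __init__(self, paired_device_id):
        self.paired_device_id = paired_device_id  # set during pairing (claim 3)
        self.token = None
        self.received_at = None

    def receive_token(self, sender_id, token):
        # Claim 4: accept user information only from the previously
        # paired mobile computing device.
        if sender_id != self.paired_device_id:
            return False
        self.token = token
        self.received_at = time.monotonic()
        return True

    def is_stale(self):
        # Claim 7: stored user information eventually becomes stale and
        # must be refreshed by the mobile computing device.
        return self.received_at is None or time.monotonic() - self.received_at > STALE_AFTER

    def broadcast_payload(self):
        if self.token is None or self.is_stale():
            return None  # needs a refresh before broadcasting
        # Claim 10: hash the token before transmission; claim 8: set an
        # origin flag on the transmitted user information.
        digest = hashlib.sha256(self.token.encode()).hexdigest()
        return {"origin": "ancillary", "token_hash": digest}
```

On the access-control side, the positioning unit of claim 24 would consume this payload together with a proximity check before signaling the door controller.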
10,571 | 10,571 | 15,443,215 | 2,627 | A dielectrophoretic display has a substrate having walls defining a cavity, the cavity having a viewing surface and a side wall inclined to the viewing surface. A fluid is contained within the cavity; and a plurality of particles are present in the fluid. There is applied to the substrate an electric field effective to cause dielectrophoretic movement of the particles so that the particles occupy only a minor proportion of the viewing surface. | 1. A method for operating a display, the method comprising:
providing a substrate having walls defining at least one cavity, the cavity having a viewing surface; a fluid contained within the cavity; and a plurality of at least one type of particle within the fluid; and applying to the substrate an alternating electric field effective to cause movement of the particles so that the particles are visible at only a minor proportion of the viewing surface; and applying to the substrate a direct current electric field effective to cause movement of the particles such that they occupy substantially the entire viewing surface, thereby rendering the display substantially opaque. 2. A method according to claim 1 wherein the fluid is light-transmissive. 3. A method according to claim 1 wherein at least some of the at least one type of particle are electrically charged. 4. A method according to claim 1 wherein the plurality of at least one type of particle comprises a first type of particle having a first optical characteristic and a first electrophoretic mobility, and a second type of particle having a second optical characteristic different from the first optical characteristic and a second electrophoretic mobility different from the first electrophoretic mobility. 5. A method according to claim 4 wherein the first and second electrophoretic mobilities differ in sign, so that the first and second types of particles move in opposed directions in an electric field. 6. A method according to claim 5 further comprising:
applying an electric field of a first polarity to the cavity, thereby causing the first type of particles to approach the viewing surface and the cavity to display the first optical characteristic at the viewing surface; and
applying an electric field of a polarity opposite to the first polarity to the cavity, thereby causing the second type of particles to approach the viewing surface and the cavity to display the second optical characteristic at the viewing surface. 7. A method according to claim 6 further comprising providing a backing member disposed on the opposed side of the cavity from the viewing surface, at least part of the backing member having a third optical characteristic different from the first and second optical characteristics. 8. A method according to claim 7 wherein the backing member comprises areas having third and fourth optical characteristics different from each other and from the first and second optical characteristics. 9. A method according to claim 1 wherein the at least one type of particle is formed from an electrically conductive material. 10. A method according to claim 9 wherein the at least one type of particle is formed from a metal or carbon black. 11. A method according to claim 1 wherein the substrate comprises at least one capsule wall so that the display comprises at least one capsule. 12. A method according to claim 11 wherein the substrate comprises a plurality of capsules, the capsules being arranged in a single layer. 13. A method according to claim 1 wherein the substrate comprises a continuous phase surrounding a plurality of discrete droplets of the fluid having the at least one type of particle therein. 14. A method according to claim 1 wherein the substrate comprises a substantially rigid material having the at least one cavity formed therein, the substrate further comprising at least one cover member closing the at least one cavity. 15. A method for operating a display, the method comprising:
providing a substrate having walls defining at least one cavity, the cavity having a viewing surface; a fluid contained within the cavity; and a plurality of at least one type of particle within the fluid; and applying to the substrate an alternating electric field effective to cause movement of the particles laterally across the viewing surface so that the particles are visible at only a minor proportion of the viewing surface; and applying to the substrate a direct current electric field effective to cause movement of the particles such that they occupy substantially the entire viewing surface, thereby rendering the display substantially opaque. 16. A method according to claim 15 wherein the fluid is light-transmissive. 17. A method according to claim 15 wherein at least some of the at least one type of particle are electrically charged. 18. A method according to claim 17 wherein the plurality of at least one type of particle comprises a first type of particle having a first optical characteristic and a first electrophoretic mobility, and a second type of particle having a second optical characteristic different from the first optical characteristic and a second electrophoretic mobility different from the first electrophoretic mobility. 19. A method according to claim 18 wherein the first and second electrophoretic mobilities differ in sign, so that the first and second types of particles move in opposed directions in an electric field. 20. A method according to claim 19 further comprising:
applying an electric field of a first polarity to the cavity, thereby causing the first type of particles to approach the viewing surface and the cavity to display the first optical characteristic at the viewing surface; and
applying an electric field of a polarity opposite to the first polarity to the cavity, thereby causing the second type of particles to approach the viewing surface and the cavity to display the second optical characteristic at the viewing surface. 21. A method according to claim 20 further comprising providing a backing member disposed on the opposed side of the cavity from the viewing surface, at least part of the backing member having a third optical characteristic different from the first and second optical characteristics. 22. A method according to claim 21 wherein the backing member comprises areas having third and fourth optical characteristics different from each other and from the first and second optical characteristics. 23. A method according to claim 20 wherein the at least one type of particle is formed from an electrically conductive material. 24. A method according to claim 23 wherein the at least one type of particle is formed from a metal or carbon black. 25. A method according to claim 20 wherein the substrate comprises at least one capsule wall so that the display comprises at least one capsule. 26. A method according to claim 25 wherein the substrate comprises a plurality of capsules, the capsules being arranged in a single layer. 27. A method according to claim 20 wherein the substrate comprises a continuous phase surrounding a plurality of discrete droplets of the fluid having the at least one type of particle therein. 28. A method according to claim 20 wherein the substrate comprises a substantially rigid material having the at least one cavity formed therein, the substrate further comprising at least one cover member closing the at least one cavity. | A dielectrophoretic display has a substrate having walls defining a cavity, the cavity having a viewing surface and a side wall inclined to the viewing surface. A fluid is contained within the cavity; and a plurality of particles are present in the fluid. 
There is applied to the substrate an electric field effective to cause dielectrophoretic movement of the particles so that the particles occupy only a minor proportion of the viewing surface.1. A method for operating a display, the method comprising:
providing a substrate having walls defining at least one cavity, the cavity having a viewing surface; a fluid contained within the cavity; and a plurality of at least one type of particle within the fluid; and applying to the substrate an alternating electric field effective to cause movement of the particles so that the particles are visible at only a minor proportion of the viewing surface; and applying to the substrate a direct current electric field effective to cause movement of the particles such that they occupy substantially the entire viewing surface, thereby rendering the display substantially opaque. 2. A method according to claim 1 wherein the fluid is light-transmissive. 3. A method according to claim 1 wherein at least some of the at least one type of particle are electrically charged. 4. A method according to claim 1 wherein the plurality of at least one type of particle comprises a first type of particle having a first optical characteristic and a first electrophoretic mobility, and a second type of particle having a second optical characteristic different from the first optical characteristic and a second electrophoretic mobility different from the first electrophoretic mobility. 5. A method according to claim 4 wherein the first and second electrophoretic mobilities differ in sign, so that the first and second types of particles move in opposed directions in an electric field. 6. A method according to claim 5 further comprising:
applying an electric field of a first polarity to the cavity, thereby causing the first type of particles to approach the viewing surface and the cavity to display the first optical characteristic at the viewing surface; and
applying an electric field of a polarity opposite to the first polarity to the cavity, thereby causing the second type of particles to approach the viewing surface and the cavity to display the second optical characteristic at the viewing surface. 7. A method according to claim 6 further comprising providing a backing member disposed on the opposed side of the cavity from the viewing surface, at least part of the backing member having a third optical characteristic different from the first and second optical characteristics. 8. A method according to claim 7 wherein the backing member comprises areas having third and fourth optical characteristics different from each other and from the first and second optical characteristics. 9. A method according to claim 1 wherein the at least one type of particle is formed from an electrically conductive material. 10. A method according to claim 9 wherein the at least one type of particle is formed from a metal or carbon black. 11. A method according to claim 1 wherein the substrate comprises at least one capsule wall so that the display comprises at least one capsule. 12. A method according to claim 11 wherein the substrate comprises a plurality of capsules, the capsules being arranged in a single layer. 13. A method according to claim 1 wherein the substrate comprises a continuous phase surrounding a plurality of discrete droplets of the fluid having the at least one type of particle therein. 14. A method according to claim 1 wherein the substrate comprises a substantially rigid material having the at least one cavity formed therein, the substrate further comprising at least one cover member closing the at least one cavity. 15. A method for operating a display, the method comprising:
providing a substrate having walls defining at least one cavity, the cavity having a viewing surface; a fluid contained within the cavity; and a plurality of at least one type of particle within the fluid; and applying to the substrate an alternating electric field effective to cause movement of the particles laterally across the viewing surface so that the particles are visible at only a minor proportion of the viewing surface; and applying to the substrate a direct current electric field effective to cause movement of the particles such that they occupy substantially the entire viewing surface, thereby rendering the display substantially opaque. 16. A method according to claim 15 wherein the fluid is light-transmissive. 17. A method according to claim 15 wherein at least some of the at least one type of particle are electrically charged. 18. A method according to claim 17 wherein the plurality of at least one type of particle comprises a first type of particle having a first optical characteristic and a first electrophoretic mobility, and a second type of particle having a second optical characteristic different from the first optical characteristic and a second electrophoretic mobility different from the first electrophoretic mobility. 19. A method according to claim 18 wherein the first and second electrophoretic mobilities differ in sign, so that the first and second types of particles move in opposed directions in an electric field. 20. A method according to claim 19 further comprising:
applying an electric field of a first polarity to the cavity, thereby causing the first type of particles to approach the viewing surface and the cavity to display the first optical characteristic at the viewing surface; and
applying an electric field of a polarity opposite to the first polarity to the cavity, thereby causing the second type of particles to approach the viewing surface and the cavity to display the second optical characteristic at the viewing surface. 21. A method according to claim 20 further comprising providing a backing member disposed on the opposed side of the cavity from the viewing surface, at least part of the backing member having a third optical characteristic different from the first and second optical characteristics. 22. A method according to claim 21 wherein the backing member comprises areas having third and fourth optical characteristics different from each other and from the first and second optical characteristics. 23. A method according to claim 20 wherein the at least one type of particle is formed from an electrically conductive material. 24. A method according to claim 23 wherein the at least one type of particle is formed from a metal or carbon black. 25. A method according to claim 20 wherein the substrate comprises at least one capsule wall so that the display comprises at least one capsule. 26. A method according to claim 25 wherein the substrate comprises a plurality of capsules, the capsules being arranged in a single layer. 27. A method according to claim 20 wherein the substrate comprises a continuous phase surrounding a plurality of discrete droplets of the fluid having the at least one type of particle therein. 28. A method according to claim 20 wherein the substrate comprises a substantially rigid material having the at least one cavity formed therein, the substrate further comprising at least one cover member closing the at least one cavity. | 2,600 |
10,572 | 10,572 | 15,985,164 | 2,684 | A system includes a track having conductive rails, a signal generating circuit coupled to the conductive rails, and an electrical power source coupled to the conductive rails via the signal generating circuit. The signal generating circuit includes a power supply for generating trigger signals. The electrical power source provides an electrical signal to the conductive rails via the signal generating circuit. The signal generating circuit generates a first trigger signal within the electrical signal at a first time interval and generates a second trigger signal within the electrical signal at a second time interval. The first trigger signal corresponds to a beginning of a communication signal and the second trigger signal corresponds to an end of the communication signal. The communication signal is transmitted over a predetermined number of cycles of the electrical signal provided by the electrical power source. The predetermined number of cycles correspond to a coded communication. | 1. A system comprising:
a length of track having one or more conductive rails; a signal generating circuit electrically coupled to the one or more conductive rails of the length of track; and an electrical power source electrically coupled to the one or more conductive rails of the length of track via the signal generating circuit, wherein:
the signal generating circuit includes a power supply for generating a plurality of trigger signals,
the electrical power source provides an alternating current electrical signal to the one or more conductive rails of the length of track via the signal generating circuit,
the signal generating circuit generates a first trigger signal within the alternating current electrical signal at a first time interval and generates a second trigger signal within the alternating current electrical signal at a second time interval,
the first trigger signal corresponds to a beginning of a communication signal and the second trigger signal corresponds to an end of the communication signal,
the communication signal is transmitted over a predetermined number of cycles of the alternating current electrical signal provided by the electrical power source, and
the predetermined number of cycles correspond to a coded communication. 2. The system of claim 1, further comprising a cart comprising:
a wheel supported on the length of track and electrically coupled to the one or more conductive rails of the length of track; and a cart-computing device communicatively coupled to the wheel, the cart-computing device comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
detect the first trigger signal transmitted over the one or more conductive rails of the length of track and the wheel of the cart by the signal generating circuit,
detect the second trigger signal transmitted over the one or more conductive rails of the length of track and the wheel of the cart by the signal generating circuit,
determine the predetermined number of cycles of the alternating current electrical signal provided by the electrical power source that occur between the first trigger signal and the second trigger signal, and
determine the coded communication from the predetermined number of cycles. 3. The system of claim 2, wherein the machine-readable instruction set, when executed, further causes the processor to: determine the predetermined number of cycles of the alternating current electrical signal based on a number of zero-crossings of the alternating current electrical signal that occur between the first trigger signal and the second trigger signal. 4. The system of claim 2, wherein:
the cart further comprises a drive motor rotatably coupled to the wheel such that an output of the drive motor propels the cart along the length of track and the drive motor electrically couples to the one or more conductive rails of the length of track via the wheel; the cart-computing device is communicatively coupled to the drive motor, and the machine-readable instruction set, when executed, further causes the processor to generate and transmit a control signal to the drive motor to cause the drive motor to operate in response to the coded communication. 5. The system of claim 1, further comprising a master controller communicatively coupled to the signal generating circuit, the master controller comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
determine an action, encode an instruction to complete the action in the coded communication, and direct the signal generating circuit to generate the first trigger signal and the second trigger signal such that the coded communication contains the instruction. 6. The system of claim 1, wherein the first trigger signal comprises a first voltage pulse generated by the signal generating circuit during a first zero-crossing of the alternating current electrical signal and the second trigger signal comprises a second voltage pulse generated by the signal generating circuit during a subsequent zero-crossing of the alternating current electrical signal. 7. The system of claim 1, wherein the alternating current electrical signal comprises a positive peak voltage and a negative peak voltage, and the signal generating circuit generates the first trigger signal by reducing an amplitude of the positive peak voltage, the negative peak voltage, or both the positive peak voltage and the negative peak voltage to a trigger voltage level for one cycle of the alternating current electrical signal. 8. The system of claim 7, wherein the signal generating circuit generates the second trigger signal by reducing the amplitude of the positive peak voltage, the negative peak voltage, or both the positive peak voltage and the negative peak voltage to the trigger voltage level for one cycle of the alternating current electrical signal subsequent to the first trigger signal. 9. The system of claim 8, wherein the trigger voltage level of the first trigger signal is greater than an operating voltage of a cart receiving the alternating current electrical signal such that operation of the cart continues uninterrupted. 10. 
The system of claim 1, wherein the alternating current electrical signal comprises a positive peak voltage and a negative peak voltage, the signal generating circuit generates the first trigger signal by reducing an amplitude of the positive peak voltage, the negative peak voltage, or both the positive peak voltage and the negative peak voltage to a trigger voltage level for each cycle of the alternating current electrical signal of the communication signal until the signal generating circuit generates the second trigger signal by returning the alternating current electrical signal to the positive peak voltage and the negative peak voltage output by the electrical power source. 11. The system of claim 10, wherein the trigger voltage level of the first trigger signal is greater than an operating voltage of a cart receiving the alternating current electrical signal such that operation of the cart continues uninterrupted. 12. The system of claim 1, further comprising an assembly line grow pod having a plurality of carts for growing a plurality of plants, wherein the length of track is part of the assembly line grow pod and the plurality of carts are supported on the length of track. 13. A system comprising:
a length of track having one or more conductive rails; an electrical power source electrically coupled to the one or more conductive rails of the length of track; and a cart comprising:
one or more first wheels supported on the length of track and electrically coupled to the one or more conductive rails of the length of track,
a cart-computing device communicatively coupled to the one or more first wheels, and
a signal generating circuit electrically coupled to the cart-computing device and the one or more first wheels, wherein:
the signal generating circuit includes a power supply for generating a plurality of trigger signals,
the electrical power source provides an alternating current electrical signal to the one or more conductive rails of the length of track,
the signal generating circuit generates a first trigger signal within the alternating current electrical signal at a first time interval and generates a second trigger signal within the alternating current electrical signal at a second time interval,
the first trigger signal corresponds to a beginning of a communication signal and the second trigger signal corresponds to an end of the communication signal,
the communication signal is transmitted over a predetermined number of cycles of the alternating current electrical signal provided by the electrical power source, and
the predetermined number of cycles correspond to a coded communication. 14. The system of claim 13, wherein the first trigger signal comprises a first voltage pulse generated by the signal generating circuit during a first zero-crossing of the alternating current electrical signal and the second trigger signal comprises a second voltage pulse generated by the signal generating circuit during a subsequent zero-crossing of the alternating current electrical signal. 15. The system of claim 13, further comprising a master controller communicatively coupled to the one or more conductive rails of the length of track, the master controller comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
detect the first trigger signal transmitted over the one or more conductive rails of the length of track and the one or more first wheels of the cart by the signal generating circuit, detect the second trigger signal transmitted over the one or more conductive rails of the length of track and the one or more first wheels of the cart by the signal generating circuit, determine the predetermined number of cycles of the alternating current electrical signal provided by the electrical power source that occur between the first trigger signal and the second trigger signal, and determine the coded communication from the predetermined number of cycles. 16. The system of claim 15, wherein the communication signal corresponds to a status information of the cart. 17. The system of claim 13, further comprising a second cart comprising:
one or more second wheels supported on the length of track and electrically coupled to the one or more conductive rails of the length of track, a second cart-computing device communicatively coupled to the one or more second wheels, the cart-computing device of the second cart comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
detect the first trigger signal transmitted over the one or more conductive rails of the length of track and the one or more second wheels of the cart by the signal generating circuit,
detect the second trigger signal transmitted over the one or more conductive rails of the length of track and the one or more second wheels of the cart by the signal generating circuit,
determine the predetermined number of cycles of the alternating current electrical signal provided by the electrical power source occurring between the first trigger signal and the second trigger signal, and
determine the coded communication corresponding to the predetermined number of cycles. 18. The system of claim 17, wherein the communication signal corresponds to a control signal for controlling an operation of a drive motor of the second cart supported on the length of track. 19. A method for communicating via an alternating current electrical signal from a master controller to a cart supported on a length of track in an assembly line grow pod, the method comprising:
determining, by the master controller, an action to be completed by the cart; generating one or more coded communications for the action; generating a first trigger signal within the alternating current electrical signal from an electrical power source; determining when a predetermined number of cycles of the alternating current electrical signal corresponding to a coded communication of the one or more coded communications have propagated from the electrical power source following the first trigger signal; generating a second trigger signal within the alternating current electrical signal when the predetermined number of cycles of the alternating current electrical signal corresponding to the coded communication have propagated following the first trigger signal. 20. The method of claim 19, wherein the one or more coded communications for the cart to complete the action include a first coded communication for powering on a drive motor and a second coded communication for communicating a predefined period of time to power on the drive motor to cause the cart to advance along the length of track. | A system includes a track having conductive rails, a signal generating circuit coupled to the conductive rails, and an electrical power source coupled to the conductive rails via the signal generating circuit. The signal generating circuit includes a power supply for generating trigger signals. The electrical power source provides an electrical signal to the conductive rails via the signal generating circuit. The signal generating circuit generates a first trigger signal within the electrical signal at a first time interval and generates a second trigger signal within the electrical signal at a second time interval. The first trigger signal corresponds to a beginning of a communication signal and the second trigger signal corresponds to an end of the communication signal. 
The communication signal is transmitted over a predetermined number of cycles of the electrical signal provided by the electrical power source. The predetermined number of cycles correspond to a coded communication.1. A system comprising:
a length of track having one or more conductive rails; a signal generating circuit electrically coupled to the one or more conductive rails of the length of track; and an electrical power source electrically coupled to the one or more conductive rails of the length of track via the signal generating circuit, wherein:
the signal generating circuit includes a power supply for generating a plurality of trigger signals,
the electrical power source provides an alternating current electrical signal to the one or more conductive rails of the length of track via the signal generating circuit,
the signal generating circuit generates a first trigger signal within the alternating current electrical signal at a first time interval and generates a second trigger signal within the alternating current electrical signal at a second time interval,
the first trigger signal corresponds to a beginning of a communication signal and the second trigger signal corresponds to an end of the communication signal,
the communication signal is transmitted over a predetermined number of cycles of the alternating current electrical signal provided by the electrical power source, and
the predetermined number of cycles correspond to a coded communication. 2. The system of claim 1, further comprising a cart comprising:
a wheel supported on the length of track and electrically coupled to the one or more conductive rails of the length of track; and a cart-computing device communicatively coupled to the wheel, the cart-computing device comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
detect the first trigger signal transmitted over the one or more conductive rails of the length of track and the wheel of the cart by the signal generating circuit,
detect the second trigger signal transmitted over the one or more conductive rails of the length of track and the wheel of the cart by the signal generating circuit,
determine the predetermined number of cycles of the alternating current electrical signal provided by the electrical power source that occur between the first trigger signal and the second trigger signal, and
determine the coded communication from the predetermined number of cycles. 3. The system of claim 2, wherein the machine-readable instruction set, when executed, further causes the processor to: determine the predetermined number of cycles of the alternating current electrical signal based on a number of zero-crossings of the alternating current electrical signal that occur between the first trigger signal and the second trigger signal. 4. The system of claim 2, wherein:
the cart further comprises a drive motor rotatably coupled to the wheel such that an output of the drive motor propels the cart along the length of track and the drive motor electrically couples to the one or more conductive rails of the length of track via the wheel; the cart-computing device is communicatively coupled to the drive motor, and the machine-readable instruction set, when executed, further causes the processor to generate and transmit a control signal to the drive motor to cause the drive motor to operate in response to the coded communication. 5. The system of claim 1, further comprising a master controller communicatively coupled to the signal generating circuit, the master controller comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
determine an action, encode an instruction to complete the action in the coded communication, and direct the signal generating circuit to generate the first trigger signal and the second trigger signal such that the coded communication contains the instruction. 6. The system of claim 1, wherein the first trigger signal comprises a first voltage pulse generated by the signal generating circuit during a first zero-crossing of the alternating current electrical signal and the second trigger signal comprises a second voltage pulse generated by the signal generating circuit during a subsequent zero-crossing of the alternating current electrical signal. 7. The system of claim 1, wherein the alternating current electrical signal comprises a positive peak voltage and a negative peak voltage, and the signal generating circuit generates the first trigger signal by reducing an amplitude of the positive peak voltage, the negative peak voltage, or both the positive peak voltage and the negative peak voltage to a trigger voltage level for one cycle of the alternating current electrical signal. 8. The system of claim 7, wherein the signal generating circuit generates the second trigger signal by reducing the amplitude of the positive peak voltage, the negative peak voltage, or both the positive peak voltage and the negative peak voltage to the trigger voltage level for one cycle of the alternating current electrical signal subsequent to the first trigger signal. 9. The system of claim 8, wherein the trigger voltage level of the first trigger signal is greater than an operating voltage of a cart receiving the alternating current electrical signal such that operation of the cart continues uninterrupted. 10. 
The system of claim 1, wherein the alternating current electrical signal comprises a positive peak voltage and a negative peak voltage, the signal generating circuit generates the first trigger signal by reducing an amplitude of the positive peak voltage, the negative peak voltage, or both the positive peak voltage and the negative peak voltage to a trigger voltage level for each cycle of the alternating current electrical signal of the communication signal until the signal generating circuit generates the second trigger signal by returning the alternating current electrical signal to the positive peak voltage and the negative peak voltage output by the electrical power source. 11. The system of claim 10, wherein the trigger voltage level of the first trigger signal is greater than an operating voltage of a cart receiving the alternating current electrical signal such that operation of the cart continues uninterrupted. 12. The system of claim 1, further comprising an assembly line grow pod having a plurality of carts for growing a plurality of plants, wherein the length of track is part of the assembly line grow pod and the plurality of carts are supported on the length of track. 13. A system comprising:
a length of track having one or more conductive rails; an electrical power source electrically coupled to the one or more conductive rails of the length of track; and a cart comprising:
one or more first wheels supported on the length of track and electrically coupled to the one or more conductive rails of the length of track,
a cart-computing device communicatively coupled to the one or more first wheels, and
a signal generating circuit electrically coupled to the cart-computing device and the one or more first wheels, wherein:
the signal generating circuit includes a power supply for generating a plurality of trigger signals,
the electrical power source provides an alternating current electrical signal to the one or more conductive rails of the length of track,
the signal generating circuit generates a first trigger signal within the alternating current electrical signal at a first time interval and generates a second trigger signal within the alternating current electrical signal at a second time interval,
the first trigger signal corresponds to a beginning of a communication signal and the second trigger signal corresponds to an end of the communication signal,
the communication signal is transmitted over a predetermined number of cycles of the alternating current electrical signal provided by the electrical power source, and
the predetermined number of cycles correspond to a coded communication. 14. The system of claim 13, wherein the first trigger signal comprises a first voltage pulse generated by the signal generating circuit during a first zero-crossing of the alternating current electrical signal and the second trigger signal comprises a second voltage pulse generated by the signal generating circuit during a subsequent zero-crossing of the alternating current electrical signal. 15. The system of claim 13, further comprising a master controller communicatively coupled to the one or more conductive rails of the length of track, the master controller comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
detect the first trigger signal transmitted over the one or more conductive rails of the length of track and the one or more first wheels of the cart by the signal generating circuit, detect the second trigger signal transmitted over the one or more conductive rails of the length of track and the one or more first wheels of the cart by the signal generating circuit, determine the predetermined number of cycles of the alternating current electrical signal provided by the electrical power source that occur between the first trigger signal and the second trigger signal, and determine the coded communication from the predetermined number of cycles. 16. The system of claim 15, wherein the communication signal corresponds to a status information of the cart. 17. The system of claim 13, further comprising a second cart comprising:
one or more second wheels supported on the length of track and electrically coupled to the one or more conductive rails of the length of track, a second cart-computing device communicatively coupled to the one or more second wheels, the cart-computing device of the second cart comprising a processor and a non-transitory computer-readable memory, wherein the non-transitory computer-readable memory comprises a machine-readable instruction set that, when executed, causes the processor to:
detect the first trigger signal transmitted over the one or more conductive rails of the length of track and the one or more second wheels of the cart by the signal generating circuit,
detect the second trigger signal transmitted over the one or more conductive rails of the length of track and the one or more second wheels of the cart by the signal generating circuit,
determine the predetermined number of cycles of the alternating current electrical signal provided by the electrical power source occurring between the first trigger signal and the second trigger signal, and
determine the coded communication corresponding to the predetermined number of cycles. 18. The system of claim 17, wherein the communication signal corresponds to a control signal for controlling an operation of a drive motor of the second cart supported on the length of track. 19. A method for communicating via an alternating current electrical signal from a master controller to a cart supported on a length of track in an assembly line grow pod, the method comprising:
determining, by the master controller, an action to be completed by the cart; generating one or more coded communications for the action; generating a first trigger signal within the alternating current electrical signal from an electrical power source; determining when a predetermined number of cycles of the alternating current electrical signal corresponding to a coded communication of the one or more coded communications have propagated from the electrical power source following the first trigger signal; generating a second trigger signal within the alternating current electrical signal when the predetermined number of cycles of the alternating current electrical signal corresponding to the coded communication have propagated following the first trigger signal. 20. The method of claim 19, wherein the one or more coded communications for the cart to complete the action include a first coded communication for powering on a drive motor and a second coded communication for communicating a predefined period of time to power on the drive motor to cause the cart to advance along the length of track. | 2,600 |
10,573 | 10,573 | 15,379,821 | 2,626 | The present invention provides a tethered active stylus, including: a conductive tip; and a driving-signal line coupled to the conductive tip, wherein the driving-signal line is configured to connect with a driving circuit of a touch controller, and the driving circuit is configured to provide driving signals to multiple electrodes of a touch screen controlled by the touch controller and the driving-signal line. | 1. A tethered active stylus comprising:
a conductive tip; and a driving-signal line electrically coupled to the conductive tip, wherein the driving-signal line is connected to a driving circuit of a touch controller, and the driving circuit sequentially provides a driving signal to a plurality of electrodes on a touch screen connected with the touch controller and the driving-signal line in a time-division multiplexing manner. 2. The tethered active stylus of claim 1, wherein the driving signals provided to the plurality of electrodes and the driving-signal line are the same. 3. The tethered active stylus of claim 1, further comprising a ground line electrically coupled to a ground potential of the touch controller. 4. The tethered active stylus of claim 1, further comprising:
a conductive core electrically coupled between the conductive tip and the driving-signal line; a core insulating material surrounding the conductive core; and a core shielding element surrounding the core insulating material, the core shielding element being conductive and electrically coupled to the ground line. 5. The tethered active stylus of claim 4, wherein a portion of the core insulating material near the conductive tip is not covered by the core shielding element. 6. The tethered active stylus of claim 4, wherein a portion of the core insulating material near the conductive tip protrudes from the body of the tethered active stylus. 7. The tethered active stylus of claim 3, further comprising i switches, each switch being located between the ground line and a switch line of the touch controller, wherein i is a positive integer. 8. The tethered active stylus of claim 1, further comprising a pressure sensor for sensing a force experienced at the conductive tip, and transmitting a force value experienced at the conductive tip back to the touch controller via a wire. 9. The tethered active stylus of claim 8, wherein the pressure sensor further includes:
a first element having a first impedance that changes with the force experienced for receiving a first signal including a first frequency group; a second element having a second impedance that does not change with the force experienced for receiving a second signal including a second frequency group; and a sensing line for receiving output signals from the first element and the second element. 10. The tethered active stylus of claim 9, wherein a force value returned by the sensing line is represented by a ratio of the signal strength M1 of the first frequency group and the signal strength M2 of the second frequency group. | The present invention provides a tethered active stylus, including: a conductive tip; and a driving-signal line coupled to the conductive tip, wherein the driving-signal line is configured to connect with a driving circuit of a touch controller, and the driving circuit is configured to provide driving signals to multiple electrodes of a touch screen controlled by the touch controller and the driving-signal line.1. A tethered active stylus comprising:
a conductive tip; and a driving-signal line electrically coupled to the conductive tip, wherein the driving-signal line is connected to a driving circuit of a touch controller, and the driving circuit sequentially provides a driving signal to a plurality of electrodes on a touch screen connected with the touch controller and the driving-signal line in a time-division multiplexing manner. 2. The tethered active stylus of claim 1, wherein the driving signals provided to the plurality of electrodes and the driving-signal line are the same. 3. The tethered active stylus of claim 1, further comprising a ground line electrically coupled to a ground potential of the touch controller. 4. The tethered active stylus of claim 1, further comprising:
a conductive core electrically coupled between the conductive tip and the driving-signal line; a core insulating material surrounding the conductive core; and a core shielding element surrounding the core insulating material, the core shielding element being conductive and electrically coupled to the ground line. 5. The tethered active stylus of claim 4, wherein a portion of the core insulating material near the conductive tip is not covered by the core shielding element. 6. The tethered active stylus of claim 4, wherein a portion of the core insulating material near the conductive tip protrudes from the body of the tethered active stylus. 7. The tethered active stylus of claim 3, further comprising i switches, each switch being located between the ground line and a switch line of the touch controller, wherein i is a positive integer. 8. The tethered active stylus of claim 1, further comprising a pressure sensor for sensing a force experienced at the conductive tip, and transmitting a force value experienced at the conductive tip back to the touch controller via a wire. 9. The tethered active stylus of claim 8, wherein the pressure sensor further includes:
a first element having a first impedance that changes with the force experienced for receiving a first signal including a first frequency group; a second element having a second impedance that does not change with the force experienced for receiving a second signal including a second frequency group; and a sensing line for receiving output signals from the first element and the second element. 10. The tethered active stylus of claim 9, wherein a force value returned by the sensing line is represented by a ratio of the signal strength M1 of the first frequency group and the signal strength M2 of the second frequency group. | 2,600 |
10,574 | 10,574 | 15,222,255 | 2,612 | Ground-penetrating radar (GPR) technology enables the detection of hidden objects that are underground or behind walls or other such surfaces. Embodiments of the present invention provide a realistic visualization of the hidden objects through so-called augmented reality techniques. Thanks to such visualization, interaction with hidden objects that are hazardous or delicate is easier and less prone to errors. Also, GPR-based data collection can be performed in non-real time, with object visualization occurring at a later time based on stored data that can also comprise annotations. This capability provides greater flexibility for scheduling activities related to the hidden objects. | 1. A system based on ground-penetrating radar (GPR) that visually depicts objects hidden by a surface, the system comprising:
a GPR unit that is moved along a path on the surface, wherein the GPR unit transmits a first radio signal and receives a second reflected radio signal, wherein the second reflected radio signal comprises one or more reflections of the first radio signal caused by one or more of the objects hidden by the surface; a first processor that receives a representation of the second reflected radio signal and, based on the representation, generates a description of at least one of the hidden objects, wherein the description comprises an indication of a position of the at least one hidden object; a second processor that receives the description of the at least one hidden object and generates a visual specification of the object relative to an environment, wherein the visual specification comprises a specification of the position of the object relative to the environment; and a display subsystem that receives the visual specification and generates a composite image that combines an image of the environment and an image of the object, wherein the image of the object is placed, relative to the image of the environment, in accordance with the specification of the position of the object provided by the visual specification. 2. The system of claim 1 wherein the display subsystem comprises a wearable display unit wherein the part of the composite image that is the image of the environment is an actual image of a surrounding environment viewed through a material that is transparent, at least partially. 3. (canceled) 4. The system of claim 1 wherein the display subsystem comprises an electronic display unit that generates the entire composite image electronically, wherein the image of the environment is based on an image captured by a camera. 5. The system of claim 4 wherein the camera is part of the display subsystem and the image of the environment is a live image. 6. 
The system of claim 1 further comprising a wireless communication link that interconnects any two of the GPR unit, the first processor, the second processor, or the display subsystem. 7. (canceled) 8. The system of claim 6 wherein the display subsystem is located remotely relative to the GPR unit. 9. The system of claim 1 further comprising a non-real-time communication link that interconnects any two of the GPR unit, the first processor, the second processor, or the display subsystem. 10. (canceled) 11. The system of claim 1 further comprising a data-storage device that stores data;
wherein the stored data comprises, at least in part, one or more of (i) the representation of the second reflected radio signal, (ii) the description of the at least one hidden object, (iii) the visual specification of the object relative to the environment. 12. The system of claim 11 wherein the display subsystem generates the composite image based, at least in part, on data stored in the storage device. 13. The system of claim 12 wherein the display subsystem generates the composite image based, at least in part, on data stored in the storage device. 14. (canceled) 15. The system of claim 4 wherein the image of the environment is derived from a stored image retrieved from a storage medium. 16. The system of claim 1 further comprising a localization subsystem that generates an estimate of a position of the GPR unit;
wherein the indication of the position of the at least one hidden object is based on the estimate of the position of the GPR unit; and
wherein the estimate of the position of the GPR unit is relative to a reference frame. 17. The system of claim 16 wherein the reference frame comprises one or more of (a) a satellite that transmits a radio signal, (b) a global navigation satellite system (GNSS) satellite, (c) a global positioning system (GPS) satellite, (d) a transmitter of a radio-signal, (e) a source of a sound signal, (f) a source of an ultrasonic signal, (g) a visual marker, (h) an augmented-reality (AR) marker, (i) a visible marker placed by an operator of the system, (j) a visible pattern on the surface, (k) a grid on the surface, (l) a detectable feature of the surface, (m) one or more objects in the environment. 18. The system of claim 16 wherein the localization subsystem comprises a camera adapted to capture an image of the GPR unit while it is moved along the path on the surface, and wherein the estimate of the position of the GPR unit is based, at least in part, on the image of the GPR unit. 19-21. (canceled) 22. The system of claim 1 wherein the composite image further comprises an image of the path;
wherein the image of the path comprises at least one of (a) an image of a portion of the path that the GPR unit has followed in the past, and (b) an image of a portion of the path that the GPR unit is expected to follow in the future. 23-30. (canceled) 31. A method for visually depicting objects hidden by a surface and detected via ground-penetrating radar (GPR), wherein the GPR unit is moved along a path on the surface, the method comprising:
transmitting, by the GPR unit, a first radio signal; receiving, by the GPR unit, a second reflected radio signal, wherein the second reflected radio signal comprises one or more reflections of the first radio signal caused by one or more of the objects hidden by the surface; receiving, by a first processor, a representation of the second reflected radio signal; generating, by the first processor, a description of at least one of the hidden objects, wherein the description comprises an indication of a position of the at least one hidden object, and wherein the description is based on the representation of the second reflected radio signal; receiving, by a second processor, the description of the at least one hidden object; generating, by the second processor, a visual specification of the at least one hidden object relative to an environment, wherein the visual specification comprises a specification of the position of the object relative to the environment; receiving, by a display system, the visual specification; and generating, by the display system, a composite image that combines an image of the environment and an image of the object, wherein the image of the object is placed, relative to the image of the environment, in accordance with the specification of the position of the object provided by the visual specification. 32. The method of claim 31 wherein the first processor and the second processor are the same processor. 33. The method of claim 31 wherein at least one of the first processor and the second processor is part of at least one of the GPR unit and the display system. 34. The method of claim 31 wherein the display system comprises a wearable display unit wherein the part of the composite image that is the image of the environment is an actual image of a surrounding environment viewed through a medium that is transparent, at least partially. 35. (canceled) 36. The method of claim 31 wherein generating the composite image comprises:
combining, electronically, the image of the environment and the image of the object; and
generating, by an electronic display unit, the entire composite image;
wherein the image of the environment is based on an image captured by a camera. 37. The method of claim 36 wherein the camera is part of the display system and the image of the environment is a live image. 38-39. (canceled) 40. The method of claim 31 further comprising retrieving the image of the environment from a storage medium. 41. The method of claim 31 further comprising generating, by a localization system, an estimate of a position of the GPR unit;
wherein the indication of the position of the at least one hidden object is based on the estimate of the position of the GPR unit; and
wherein the estimate of the position of the GPR unit is relative to a reference frame. 42. The method of claim 41 wherein the reference frame comprises one or more of (a) a satellite that transmits a radio signal, (b) a global navigation satellite system (GNSS) satellite, (c) a global positioning system (GPS) satellite, (d) a transmitter of a radio-signal, (e) a source of a sound signal, (f) a source of an ultrasonic signal, (g) a visual marker, (h) an augmented-reality (AR) marker, (i) a visible marker placed by an operator of the system, (j) a visible pattern on the surface, (k) a grid on the surface, (l) a detectable feature of the surface, (m) one or more objects in the environment. 43. The method of claim 41 wherein generating, by the localization system, the estimate of the position of the GPR unit further comprises capturing, by a camera, an image of the GPR unit while it is moved along the path on the surface, and wherein the estimate of the position of the GPR unit is based, at least in part, on the image of the GPR unit. 44-46. (canceled) 47. The method of claim 31 wherein generating, by the display system, the composite image, comprises generating an image of the path;
wherein the image of the path appears in the composite image; and
wherein the image of the path comprises at least one of (a) an image of a portion of the path that the GPR unit has followed in the past, and (b) an image of a portion of the path that the GPR unit is expected to follow in the future. 48-54. (canceled) | Ground-penetrating radar (GPR) technology enables the detection of hidden objects that are underground or behind walls or other such surfaces. Embodiments of the present invention provide a realistic visualization of the hidden objects through so-called augmented reality techniques. Thanks to such visualization, interaction with hidden objects that are hazardous or delicate is easier and less prone to errors. Also, GPR-based data collection can be performed in non-real time, with object visualization occurring at a later time based on stored data that can also comprise annotations. This capability provides greater flexibility for scheduling activities related to the hidden objects.1. A system based on ground-penetrating radar (GPR) that visually depicts objects hidden by a surface, the system comprising:
a GPR unit that is moved along a path on the surface, wherein the GPR unit transmits a first radio signal and receives a second reflected radio signal, wherein the second reflected radio signal comprises one or more reflections of the first radio signal caused by one or more of the objects hidden by the surface; a first processor that receives a representation of the second reflected radio signal and, based on the representation, generates a description of at least one of the hidden objects, wherein the description comprises an indication of a position of the at least one hidden object; a second processor that receives the description of the at least one hidden object and generates a visual specification of the object relative to an environment, wherein the visual specification comprises a specification of the position of the object relative to the environment; and a display subsystem that receives the visual specification and generates a composite image that combines an image of the environment and an image of the object, wherein the image of the object is placed, relative to the image of the environment, in accordance with the specification of the position of the object provided by the visual specification. 2. The system of claim 1 wherein the display subsystem comprises a wearable display unit wherein the part of the composite image that is the image of the environment is an actual image of a surrounding environment viewed through a material that is transparent, at least partially. 3. (canceled) 4. The system of claim 1 wherein the display subsystem comprises an electronic display unit that generates the entire composite image electronically, wherein the image of the environment is based on an image captured by a camera. 5. The system of claim 4 wherein the camera is part of the display subsystem and the image of the environment is a live image. 6. 
The system of claim 1 further comprising a wireless communication link that interconnects any two of the GPR unit, the first processor, the second processor, or the display subsystem. 7. (canceled) 8. The system of claim 6 wherein the display subsystem is located remotely relative to the GPR unit. 9. The system of claim 1 further comprising a non-real-time communication link that interconnects any two of the GPR unit, the first processor, the second processor, or the display subsystem. 10. (canceled) 11. The system of claim 1 further comprising a data-storage device that stores data;
wherein the stored data comprises, at least in part, one or more of (i) the representation of the second reflected radio signal, (ii) the description of the at least one hidden object, (iii) the visual specification of the object relative to the environment. 12. The system of claim 11 wherein the display subsystem generates the composite image based, at least in part, on data stored in the storage device. 13. The system of claim 12 wherein the display subsystem generates the composite image based, at least in part, on data stored in the storage device. 14. (canceled) 15. The system of claim 4 wherein the image of the environment is derived from a stored image retrieved from a storage medium. 16. The system of claim 1 further comprising a localization subsystem that generates an estimate of a position of the GPR unit;
wherein the indication of the position of the at least one hidden object is based on the estimate of the position of the GPR unit; and
wherein the estimate of the position of the GPR unit is relative to a reference frame. 17. The system of claim 16 wherein the reference frame comprises one or more of (a) a satellite that transmits a radio signal, (b) a global navigation satellite system (GNSS) satellite, (c) a global positioning system (GPS) satellite, (d) a transmitter of a radio-signal, (e) a source of a sound signal, (f) a source of an ultrasonic signal, (g) a visual marker, (h) an augmented-reality (AR) marker, (i) a visible marker placed by an operator of the system, (j) a visible pattern on the surface, (k) a grid on the surface, (l) a detectable feature of the surface, (m) one or more objects in the environment. 18. The system of claim 16 wherein the localization subsystem comprises a camera adapted to capture an image of the GPR unit while it is moved along the path on the surface, and wherein the estimate of the position of the GPR unit is based, at least in part, on the image of the GPR unit. 19-21. (canceled) 22. The system of claim 1 wherein the composite image further comprises an image of the path;
wherein the image of the path comprises at least one of (a) an image of a portion of the path that the GPR unit has followed in the past, and (b) an image of a portion of the path that the GPR unit is expected to follow in the future. 23-30. (canceled) 31. A method for visually depicting objects hidden by a surface and detected via ground-penetrating radar (GPR), wherein the GPR unit is moved along a path on the surface, the method comprising:
transmitting, by the GPR unit, a first radio signal; receiving, by the GPR unit, a second reflected radio signal, wherein the second reflected radio signal comprises one or more reflections of the first radio signal caused by one or more of the objects hidden by the surface; receiving, by a first processor, a representation of the second reflected radio signal; generating, by the first processor, a description of at least one of the hidden objects, wherein the description comprises an indication of a position of the at least one hidden object, and wherein the description is based on the representation of the second reflected radio signal; receiving, by a second processor, the description of the at least one hidden object; generating, by the second processor, a visual specification of the at least one hidden object relative to an environment, wherein the visual specification comprises a specification of the position of the object relative to the environment; receiving, by a display system, the visual specification; and generating, by the display system, a composite image that combines an image of the environment and an image of the object, wherein the image of the object is placed, relative to the image of the environment, in accordance with the specification of the position of the object provided by the visual specification. 32. The method of claim 31 wherein the first processor and the second processor are the same processor. 33. The method of claim 31 wherein at least one of the first processor and the second processor is part of at least one of the GPR unit and the display system. 34. The method of claim 31 wherein the display system comprises a wearable display unit wherein the part of the composite image that is the image of the environment is an actual image of a surrounding environment viewed through a medium that is transparent, at least partially. 35. (canceled) 36. The method of claim 31 wherein generating the composite image comprises:
combining, electronically, the image of the environment and the image of the object; and
generating, by an electronic display unit, the entire composite image;
wherein the image of the environment is based on an image captured by a camera. 37. The method of claim 36 wherein the camera is part of the display system and the image of the environment is a live image. 38-39. (canceled) 40. The method of claim 31 further comprising retrieving the image of the environment from a storage medium. 41. The method of claim 31 further comprising generating, by a localization system, an estimate of a position of the GPR unit;
wherein the indication of the position of the at least one hidden object is based on the estimate of the position of the GPR unit; and
wherein the estimate of the position of the GPR unit is relative to a reference frame. 42. The method of claim 41 wherein the reference frame comprises one or more of (a) a satellite that transmits a radio signal, (b) a global navigation satellite system (GNSS) satellite, (c) a global positioning system (GPS) satellite, (d) a transmitter of a radio-signal, (e) a source of a sound signal, (f) a source of an ultrasonic signal, (g) a visual marker, (h) an augmented-reality (AR) marker, (i) a visible marker placed by an operator of the system, (j) a visible pattern on the surface, (k) a grid on the surface, (l) a detectable feature of the surface, (m) one or more objects in the environment. 43. The method of claim 41 wherein generating, by the localization system, the estimate of the position of the GPR unit further comprises capturing, by a camera, an image of the GPR unit while it is moved along the path on the surface, and wherein the estimate of the position of the GPR unit is based, at least in part, on the image of the GPR unit. 44-46. (canceled) 47. The method of claim 31 wherein generating, by the display system, the composite image, comprises generating an image of the path;
wherein the image of the path appears in the composite image; and
wherein the image of the path comprises at least one of (a) an image of a portion of the path that the GPR unit has followed in the past, and (b) an image of a portion of the path that the GPR unit is expected to follow in the future. 48-54. (canceled) | 2,600 |
10,575 | 10,575 | 11,842,924 | 2,648 | A smart artefact ( 101 ) and user terminal ( 102 ) according to the invention have a first, short range interface ( 112, 121 ) for initiating an interactive session, and a second, long range interface ( 111, 124 ), for continuing the interactive session. The smart artefact ( 101 ) and user terminal ( 102 ) thereto also contain means that transfer the interactive session from the first interface ( 112, 121 ) towards the second interface ( 121, 124 ). | 1. A smart artefact (101) having first interfacing means (112), active within a short range, for initiating an interactive session with a user terminal (102),
CHARACTERIZED IN THAT said smart artefact (101) further has: second interfacing means (111), active within a substantially longer range than said short range, for continuing said interactive session with said user terminal (102), and means for transferring said interactive session from said first interfacing means (112) towards said second interfacing means (111). 2. A smart artefact (101) according to claim 1,
CHARACTERIZED IN THAT said first interfacing means (112) are an instantiation of one or more of: a touch-based interface; a Radio Frequency Identification (RFID) tag; a Radio Frequency Identification (RFID) tag reader; a barcode; a barcode scanner; a shotcode; a shotcode scanner; a Uniform Resource Locator (URL); a Uniform Resource Locator (URL) reader; a Near Field Communication (NFC) interface; an infrared interface; a Bluetooth interface. 3. A smart artefact (101) according to claim 1,
CHARACTERIZED IN THAT said second interfacing means (111) are an instantiation of one or more of: a Bluetooth interface or IEEE 802.15 interface; a Wireless Local Area Network (WLAN) interface; a Wireless Fidelity (WiFi) interface or IEEE 802.11 interface; a Worldwide Interoperability for Microwave Access (WiMAX) interface or IEEE 802.16 interface; a Wireless Broadband (WiBro) interface; a HomeRF interface; a Global System for Mobile communications (GSM) interface; a Wireless Application Protocol (WAP) interface; a Universal Mobile Telephone System (UMTS) interface. 4. A user terminal (102) having first interfacing means (121), active within a short range, for initiating an interactive session with a smart artefact (101),
CHARACTERIZED IN THAT said user terminal (102) further has second interfacing means (124), active within a substantially longer range than said short range, for continuing said interactive session with said smart artefact (101), and means for transferring said interactive session from said first interfacing means (121) towards said second interfacing means (124). | A smart artefact ( 101 ) and user terminal ( 102 ) according to the invention have a first, short range interface ( 112, 121 ) for initiating an interactive session, and a second, long range interface ( 111, 124 ), for continuing the interactive session. The smart artefact ( 101 ) and user terminal ( 102 ) thereto also contain means that transfer the interactive session from the first interface ( 112, 121 ) towards the second interface ( 121, 124 ).1. A smart artefact (101) having first interfacing means (112), active within a short range, for initiating an interactive session with a user terminal (102),
CHARACTERIZED IN THAT said smart artefact (101) further has: second interfacing means (111), active within a substantially longer range than said short range, for continuing said interactive session with said user terminal (102), and means for transferring said interactive session from said first interfacing means (112) towards said second interfacing means (111). 2. A smart artefact (101) according to claim 1,
CHARACTERIZED IN THAT said first interfacing means (112) are an instantiation of one or more of: a touch-based interface; a Radio Frequency Identification (RFID) tag; a Radio Frequency Identification (RFID) tag reader; a barcode; a barcode scanner; a shotcode; a shotcode scanner; a Uniform Resource Locator (URL); a Uniform Resource Locator (URL) reader; a Near Field Communication (NFC) interface; an infrared interface; a Bluetooth interface. 3. A smart artefact (101) according to claim 1,
CHARACTERIZED IN THAT said second interfacing means (111) are an instantiation of one or more of: a Bluetooth interface or IEEE 802.15 interface; a Wireless Local Area Network (WLAN) interface; a Wireless Fidelity (WiFi) interface or IEEE 802.11 interface; a Worldwide Interoperability for Microwave Access (WiMAX) interface or IEEE 802.16 interface; a Wireless Broadband (WiBro) interface; a HomeRF interface; a Global System for Mobile communications (GSM) interface; a Wireless Application Protocol (WAP) interface; a Universal Mobile Telephone System (UMTS) interface. 4. A user terminal (102) having first interfacing means (121), active within a short range, for initiating an interactive session with a smart artefact (101),
CHARACTERIZED IN THAT said user terminal (102) further has second interfacing means (124), active within a substantially longer range than said short range, for continuing said interactive session with said smart artefact (101), and means for transferring said interactive session from said first interfacing means (121) towards said second interfacing means (124). | 2,600 |
10,576 | 10,576 | 15,435,126 | 2,694 | A user interface, a computer program product, a signal sequence, a means of transportation and a method for classifying a user gesture performed freely in space. A first gesture and/or a second gesture may be detected by a sensor. A processor may compare a first characteristic of the first gesture with a first characteristic of a second gesture, and, as a function of the comparison, the first gesture may be classified in contrast to the second gesture as an intended user input. | 1-13. (canceled) 14. A processor-based method for detecting an input for a device, comprising:
detecting, via a sensing apparatus, a first gesture in a detection area for the device; detecting, via a sensing apparatus, a second gesture in the detection area for the device, wherein the first gesture and second gesture are detected chronologically in a time period; comparing, via an evaluation unit, at least one characteristic of the first gesture to at least one characteristic of the second gesture; and detecting, via a processor, a predetermined user input for the device based on the comparing of the at least one characteristic of the first gesture to the at least one characteristic of the second gesture. 15. The processor-based method of claim 14, wherein detecting the first gesture and the second gesture comprises detecting movements in a horizontal and vertical plane in the detection area for the device. 16. The processor-based method of claim 14, wherein detecting the first gesture and the second gesture comprises detecting movements in at least one of a horizontal and vertical compensation plane in the detection area for the device, and converting the at least one of the horizontal and vertical compensation plane to a reference plane. 17. The processor-based method of claim 14, wherein the at least one characteristic of the first gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the gesture, and wherein the at least one characteristic of the second gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the second gesture. 18. The processor-based method of claim 14, wherein the first gesture and second gesture are detected chronologically in a continuous time period. 19. The processor-based method of claim 14, further comprising manipulating a user interface of the device based on the detected predetermined user input. 20. 
The processor-based method of claim 14, wherein one of the at least one characteristic of the first gesture and the second gesture comprises a curvature of a trajectory, and wherein
the curvature of the trajectory of the first gesture comprises a convex orientation relative to a surface of a display unit of the device, and/or the curvature of the trajectory of the second gesture comprises a concave orientation relative to a surface of a display unit of the device. 21. A system for detecting an input for a device, comprising:
a sensing apparatus for detecting a first gesture in a detection area for the device, and for detecting a second gesture in the detection area for the device, wherein the sensing apparatus is configured to detect the first gesture and second gesture chronologically in a time period; an evaluation unit, operatively coupled to the sensing apparatus, wherein the evaluation unit is operable to compare at least one characteristic of the first gesture to at least one characteristic of the second gesture; and a processing apparatus, operatively coupled to the evaluation unit, wherein the processing apparatus is operable to detect a predetermined user input for the device based on the comparing of the at least one characteristic of the first gesture to the at least one characteristic of the second gesture. 22. The system of claim 21, wherein the evaluation unit is operable to detect movements in a horizontal and vertical plane in the detection area for the device. 23. The system of claim 21, wherein the evaluation unit is operable to detect movements in at least one of a horizontal and vertical compensation plane in the detection area for the device, and convert the at least one of the horizontal and vertical compensation plane to a reference plane. 24. The system of claim 21, wherein the at least one characteristic of the first gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the gesture, and wherein the at least one characteristic of the second gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the second gesture. 25. The system of claim 21, wherein the evaluation unit is configured to detect the first gesture and second gesture chronologically in a continuous time period. 26. The system of claim 21, wherein the processing apparatus is operable to manipulate a user interface of the device based on the detected predetermined user input. 27. 
A system for detecting an input for a device, comprising:
a sensing apparatus for detecting a first gesture in a detection area for the device, and for detecting a second gesture in the detection area for the device, wherein the sensing apparatus is configured to detect the first gesture and second gesture chronologically in a time period in three-dimensional space, and wherein the second gesture is different from the first gesture; an evaluation unit, operatively coupled to the sensing apparatus, wherein the evaluation unit is operable to compare at least one characteristic of the first gesture to at least one characteristic of the second gesture; and a processing apparatus, operatively coupled to the evaluation unit, wherein the processing apparatus is operable to detect the first gesture as an intentional gesture and detect a predetermined user input for the device based on the comparing of the at least one characteristic of the first gesture to the at least one characteristic of the second gesture. 28. The system of claim 27, wherein the evaluation unit is operable to detect movements in a horizontal and vertical plane in the detection area for the device. 29. The system of claim 27, wherein the evaluation unit is operable to detect movements in at least one of a horizontal and vertical compensation plane in the detection area for the device, and convert the at least one of the horizontal and vertical compensation plane to a reference plane. 30. The system of claim 27, wherein the at least one characteristic of the first gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the gesture, and wherein the at least one characteristic of the second gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the second gesture. 32. The system of claim 27, wherein the evaluation unit is configured to detect the first gesture and second gesture chronologically in a continuous time period. 33. 
The system of claim 27, wherein the processing apparatus is operable to manipulate a user interface of the device based on the detected predetermined user input. | A user interface, a computer program product, a signal sequence, a means of transportation and a method for classifying a user gesture performed freely in space. A first gesture and/or a second gesture may be detected by a sensor. A processor may compare a first characteristic of the first gesture with a first characteristic of a second gesture, and, as a function of the comparison, the first gesture may be classified in contrast to the second gesture as an intended user input.1-13. (canceled) 14. A processor-based method for detecting an input for a device, comprising:
detecting, via a sensing apparatus, a first gesture in a detection area for the device; detecting, via a sensing apparatus, a second gesture in the detection area for the device, wherein the first gesture and second gesture are detected chronologically in a time period; comparing, via an evaluation unit, at least one characteristic of the first gesture to at least one characteristic of the second gesture; and detecting, via a processor, a predetermined user input for the device based on the comparing of the at least one characteristic of the first gesture to the at least one characteristic of the second gesture. 15. The processor-based method of claim 14, wherein detecting the first gesture and the second gesture comprises detecting movements in a horizontal and vertical plane in the detection area for the device. 16. The processor-based method of claim 14, wherein detecting the first gesture and the second gesture comprises detecting movements in at least one of a horizontal and vertical compensation plane in the detection area for the device, and converting the at least one of the horizontal and vertical compensation plane to a reference plane. 17. The processor-based method of claim 14, wherein the at least one characteristic of the first gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the gesture, and wherein the at least one characteristic of the second gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the second gesture. 18. The processor-based method of claim 14, wherein the first gesture and second gesture are detected chronologically in a continuous time period. 19. The processor-based method of claim 14, further comprising manipulating a user interface of the device based on the detected predetermined user input. 20. 
The processor-based method of claim 14, wherein one of the at least one characteristic of the first gesture and the second gesture comprises a curvature of a trajectory, and wherein
the curvature of the trajectory of the first gesture comprises a convex orientation relative to a surface of a display unit of the device, and/or the curvature of the trajectory of the second gesture comprises a concave orientation relative to a surface of a display unit of the device. 21. A system for detecting an input for a device, comprising:
a sensing apparatus for detecting a first gesture in a detection area for the device, and for detecting a second gesture in the detection area for the device, wherein the sensing apparatus is configured to detect the first gesture and second gesture chronologically in a time period; an evaluation unit, operatively coupled to the sensing apparatus, wherein the evaluation unit is operable to compare at least one characteristic of the first gesture to at least one characteristic of the second gesture; and a processing apparatus, operatively coupled to the evaluation unit, wherein the processing apparatus is operable to detect a predetermined user input for the device based on the comparing of the at least one characteristic of the first gesture to the at least one characteristic of the second gesture. 22. The system of claim 21, wherein the evaluation unit is operable to detect movements in a horizontal and vertical plane in the detection area for the device. 23. The system of claim 21, wherein the evaluation unit is operable to detect movements in at least one of a horizontal and vertical compensation plane in the detection area for the device, and convert the at least one of the horizontal and vertical compensation plane to a reference plane. 24. The system of claim 21, wherein the at least one characteristic of the first gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the gesture, and wherein the at least one characteristic of the second gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the second gesture. 25. The system of claim 21, wherein the evaluation unit is configured to detect the first gesture and second gesture chronologically in a continuous time period. 26. The system of claim 21, wherein the processing apparatus is operable to manipulate a user interface of the device based on the detected predetermined user input. 27. 
A system for detecting an input for a device, comprising:
a sensing apparatus for detecting a first gesture in a detection area for the device, and for detecting a second gesture in the detection area for the device, wherein the sensing apparatus is configured to detect the first gesture and second gesture chronologically in a time period in three-dimensional space, and wherein the second gesture is different from the first gesture; an evaluation unit, operatively coupled to the sensing apparatus, wherein the evaluation unit is operable to compare at least one characteristic of the first gesture to at least one characteristic of the second gesture; and a processing apparatus, operatively coupled to the evaluation unit, wherein the processing apparatus is operable to detect the first gesture as an intentional gesture and detect a predetermined user input for the device based on the comparing of the at least one characteristic of the first gesture to the at least one characteristic of the second gesture. 28. The system of claim 27, wherein the evaluation unit is operable to detect movements in a horizontal and vertical plane in the detection area for the device. 29. The system of claim 27, wherein the evaluation unit is operable to detect movements in at least one of a horizontal and vertical compensation plane in the detection area for the device, and convert the at least one of the horizontal and vertical compensation plane to a reference plane. 30. The system of claim 27, wherein the at least one characteristic of the first gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the gesture, and wherein the at least one characteristic of the second gesture comprises at least one of direction, curvature, speed, length, and/or distance of trajectory of the second gesture. 32. The system of claim 27, wherein the evaluation unit is configured to detect the first gesture and second gesture chronologically in a continuous time period. 33. 
The system of claim 27, wherein the processing apparatus is operable to manipulate a user interface of the device based on the detected predetermined user input. | 2,600 |
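The gesture-classification claims above compare a characteristic of a first gesture (direction, curvature, speed, length, or distance of trajectory) against the same characteristic of a chronologically adjacent second gesture, and classify the first as the intended input as a function of that comparison. A minimal sketch of one such comparison, using trajectory curvature as the compared characteristic; all function names, the feature definitions, and the threshold are illustrative assumptions, not the patent's actual implementation:

```python
import math

def gesture_features(points):
    """Crude per-gesture features from a list of (x, y) trajectory points."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    path_length = sum(
        math.hypot(b[0] - a[0], b[1] - a[1])
        for a, b in zip(points, points[1:])
    )
    direction = math.atan2(dy, dx)
    # Ratio of straight-line distance to path length approximates curvature:
    # 0.0 for a straight stroke, approaching 1.0 for a strongly curved one.
    curvature = 1.0 - (math.hypot(dx, dy) / path_length if path_length else 1.0)
    return {"direction": direction, "length": path_length, "curvature": curvature}

def is_intended_input(first_gesture, second_gesture, curvature_margin=0.1):
    """Classify the first gesture as the intended input when its trajectory
    is clearly more curved (e.g. a convex swipe toward the display) than the
    second (e.g. a flatter return movement), per the claimed comparison."""
    c1 = gesture_features(first_gesture)["curvature"]
    c2 = gesture_features(second_gesture)["curvature"]
    return c1 > c2 + curvature_margin
```

In this sketch a curved swipe followed by a flat return stroke classifies the swipe as intentional, while the reverse ordering does not, mirroring the claim's "first gesture classified in contrast to the second gesture."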
10,577 | 10,577 | 15,471,436 | 2,659 | A model-pair is selected to recognize spoken words in a speech signal generated from a speech, which includes an acoustic model and a language model. A degree of disjointedness between the acoustic model and the language model is computed relative to the speech by comparing a first recognition output produced from the acoustic model and a second recognition output produced from the language model. When the acoustic model incorrectly recognizes a portion of the speech signal as a first word and the language model correctly recognizes the portion of the speech signal as a second word, a textual representation of the second word is determined and associated with a set of sound descriptors to generate a training speech pattern. Using the training speech pattern, the acoustic model is trained to recognize the portion of the speech signal as the second word. | 1. A method comprising:
selecting, to recognize spoken words in a speech signal generated from a speech, a model-pair, the model-pair comprising an acoustic model and a language model; computing a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model; determining, responsive to the acoustic model incorrectly recognizing a portion of the speech signal as a first word and the language model correctly recognizing the portion of the speech signal as a second word, a textual representation of the second word; associating with the textual representation, a set of sound descriptors; generating, using the textual representation and the set of sound descriptors, a training speech pattern; and training, using the training speech pattern to produce a retrained acoustic model, the acoustic model to recognize the portion of the speech signal as the second word, the training causing the retrained acoustic model and the language model to recognize the portion of the speech signal as the second word. 2. The method of claim 1, further comprising:
determining, responsive to the language model incorrectly recognizing a second portion of the speech signal as a third word and the acoustic model correctly recognizing the second portion of the speech signal as a fourth word, a second textual representation of the fourth word; training, using the second textual representation to produce a retrained language model, the language model to recognize the second portion of the speech signal as the fourth word, the training causing the retrained language model and the acoustic model to recognize the second portion of the speech signal as the fourth word. 3. The method of claim 2, further comprising:
causing a second degree of disjointedness between (i) either of the acoustic model and the retrained acoustic model and (ii) the retrained language model, to be lower than the degree of disjointedness. 4. The method of claim 2, further comprising:
determining a severity of an error associated with the language model incorrectly recognizing the second portion of the speech signal as the third word; boosting a number of occurrences of the second textual representation in a language training data used in the training of the language model, wherein the number of occurrences is a function of the severity. 5. The method of claim 1, further comprising:
determining a severity of an error associated with the acoustic model incorrectly recognizing the portion of the speech signal as the first word; boosting a number of occurrences of the training speech pattern in an acoustic training data used in the training, wherein the number of occurrences is a function of the severity. 6. The method of claim 1, wherein the set of sound descriptors corresponds to a set of characteristics of a sound produced by a speaker of the speech. 7. The method of claim 1, further comprising:
operating, as a part of the determining the textual representation, the language model on the portion of the speech signal, wherein the language model produces the textual representation. 8. The method of claim 1, wherein the degree of disjointedness is a function of a number of words in the speech that one model in the model-pair recognizes correctly and the second model in the model-pair recognizes incorrectly. 9. The method of claim 1, further comprising:
configuring a neural Turing machine to correlate a set of inputs to pairs of acoustic models and language models in a models library, the set of inputs comprising a vector and a disjoin tolerance, wherein the vector comprises a numeric representation of a word, and wherein the disjoin tolerance is a limit applicable to the degree of disjointedness; and outputting from the neural Turing machine the model-pair responsive to the model-pair relating to the set of inputs. 10. The method of claim 9, wherein the set of inputs further comprise (i) a performance specification, (ii) a sound descriptor. 11. The method of claim 10, wherein the performance specification specifies a minimum acceptable word recognition rate for a subject-matter domain of the speech. 12. The method of claim 10, wherein the performance specification specifies a maximum acceptable word error rate for a subject-matter domain of the speech. 13. The method of claim 10, wherein the sound descriptor comprises a prosody of the speech. 14. The method of claim 10, wherein the sound descriptor comprises an accent used by a speaker of the speech. 15. The method of claim 10, wherein the sound descriptor comprises a dialect of a language used by a speaker of the speech. 16. A computer usable program product comprising one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices, the stored program instructions comprising:
program instructions to select, to recognize spoken words in a speech signal generated from a speech, a model-pair, the model-pair comprising an acoustic model and a language model; program instructions to compute a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model; program instructions to determine, responsive to the acoustic model incorrectly recognizing a portion of the speech signal as a first word and the language model correctly recognizing the portion of the speech signal as a second word, a textual representation of the second word; program instructions to associate with the textual representation, a set of sound descriptors; program instructions to generate, using the textual representation and the set of sound descriptors, a training speech pattern; and program instructions to train, using the training speech pattern to produce a retrained acoustic model, the acoustic model to recognize the portion of the speech signal as the second word, the training causing the retrained acoustic model and the language model to recognize the portion of the speech signal as the second word. 17. The computer usable program product of claim 16, further comprising:
program instructions to determine, responsive to the language model incorrectly recognizing a second portion of the speech signal as a third word and the acoustic model correctly recognizing the second portion of the speech signal as a fourth word, a second textual representation of the fourth word; program instructions to train, using the second textual representation to produce a retrained language model, the language model to recognize the second portion of the speech signal as the fourth word, the training causing the retrained language model and the acoustic model to recognize the second portion of the speech signal as the fourth word. 18. The computer usable program product of claim 16, wherein the computer usable code is stored in a computer readable storage device in a data processing system, and wherein the computer usable code is transferred over a network from a remote data processing system. 19. The computer usable program product of claim 16, wherein the computer usable code is stored in a computer readable storage device in a server data processing system, and wherein the computer usable code is downloaded over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system. 20. A computer system comprising one or more processors, one or more computer-readable memories, and one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, the stored program instructions comprising:
program instructions to select, to recognize spoken words in a speech signal generated from a speech, a model-pair, the model-pair comprising an acoustic model and a language model; program instructions to compute a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model; program instructions to determine, responsive to the acoustic model incorrectly recognizing a portion of the speech signal as a first word and the language model correctly recognizing the portion of the speech signal as a second word, a textual representation of the second word; program instructions to associate with the textual representation, a set of sound descriptors; program instructions to generate, using the textual representation and the set of sound descriptors, a training speech pattern; and program instructions to train, using the training speech pattern to produce a retrained acoustic model, the acoustic model to recognize the portion of the speech signal as the second word, the training causing the retrained acoustic model and the language model to recognize the portion of the speech signal as the second word. 21. A method comprising:
determining, that an acoustic model has incorrectly recognized a first portion of a speech signal as a first word and a language model has correctly recognized the first portion as a second word; determining that the language model has incorrectly recognized a second portion of the speech signal as a third word and the acoustic model has correctly recognized the second portion as a fourth word; generating, using a textual representation of the second word and a set of sound descriptors, a training speech pattern; training, using the training speech pattern, the acoustic model to recognize the first portion of the speech signal as the second word; and training, using a textual representation of the fourth word, the language model to recognize the second portion of the speech signal as the fourth word. 22. The method of claim 21, further comprising:
selecting, to recognize spoken words in the speech signal generated from a speech, a model-pair, the model-pair comprising the acoustic model and the language model; computing a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model. 23. The method of claim 22, further comprising:
causing a second degree of disjointedness between (i) either of the acoustic model and the retrained acoustic model and (ii) the retrained language model, to be lower than the degree of disjointedness. 24. The method of claim 21, further comprising:
determining a severity of an error associated with the language model incorrectly recognizing the second portion of the speech signal as the third word; boosting a number of occurrences of the second textual representation in a language training data used in the training of the language model, wherein the number of occurrences is a function of the severity. 25. The method of claim 21, further comprising:
determining a severity of an error associated with the acoustic model incorrectly recognizing the portion of the speech signal as the first word; boosting a number of occurrences of the training speech pattern in an acoustic training data used in the training, wherein the number of occurrences is a function of the severity. | A model-pair is selected to recognize spoken words in a speech signal generated from a speech, which includes an acoustic model and a language model. A degree of disjointedness between the acoustic model and the language model is computed relative to the speech by comparing a first recognition output produced from the acoustic model and a second recognition output produced from the language model. When the acoustic model incorrectly recognizes a portion of the speech signal as a first word and the language model correctly recognizes the portion of the speech signal as a second word, a textual representation of the second word is determined and associated with a set of sound descriptors to generate a training speech pattern. Using the training speech pattern, the acoustic model is trained to recognize the portion of the speech signal as the second word.1. A method comprising:
selecting, to recognize spoken words in a speech signal generated from a speech, a model-pair, the model-pair comprising an acoustic model and a language model; computing a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model; determining, responsive to the acoustic model incorrectly recognizing a portion of the speech signal as a first word and the language model correctly recognizing the portion of the speech signal as a second word, a textual representation of the second word; associating with the textual representation, a set of sound descriptors; generating, using the textual representation and the set of sound descriptors, a training speech pattern; and training, using the training speech pattern to produce a retrained acoustic model, the acoustic model to recognize the portion of the speech signal as the second word, the training causing the retrained acoustic model and the language model to recognize the portion of the speech signal as the second word. 2. The method of claim 1, further comprising:
determining, responsive to the language model incorrectly recognizing a second portion of the speech signal as a third word and the acoustic model correctly recognizing the second portion of the speech signal as a fourth word, a second textual representation of the fourth word; training, using the second textual representation to produce a retrained language model, the language model to recognize the second portion of the speech signal as the fourth word, the training causing the retrained language model and the acoustic model to recognize the second portion of the speech signal as the fourth word. 3. The method of claim 2, further comprising:
causing a second degree of disjointedness between (i) either of the acoustic model and the retrained acoustic model and (ii) the retrained language model, to be lower than the degree of disjointedness. 4. The method of claim 2, further comprising:
determining a severity of an error associated with the language model incorrectly recognizing the second portion of the speech signal as the third word; boosting a number of occurrences of the second textual representation in a language training data used in the training of the language model, wherein the number of occurrences is a function of the severity. 5. The method of claim 1, further comprising:
determining a severity of an error associated with the acoustic model incorrectly recognizing the portion of the speech signal as the first word; boosting a number of occurrences of the training speech pattern in an acoustic training data used in the training, wherein the number of occurrences is a function of the severity. 6. The method of claim 1, wherein the set of sound descriptors corresponds to a set of characteristics of a sound produced by a speaker of the speech. 7. The method of claim 1, further comprising:
operating, as a part of the determining the textual representation, the language model on the portion of the speech signal, wherein the language model produces the textual representation. 8. The method of claim 1, wherein the degree of disjointedness is a function of a number of words in the speech that one model in the model-pair recognizes correctly and the second model in the model-pair recognizes incorrectly. 9. The method of claim 1, further comprising:
configuring a neural Turing machine to correlate a set of inputs to pairs of acoustic models and language models in a models library, the set of inputs comprising a vector and a disjoin tolerance, wherein the vector comprises a numeric representation of a word, and wherein the disjoin tolerance is a limit applicable to the degree of disjointedness; and outputting from the neural Turing machine the model-pair responsive to the model-pair relating to the set of inputs. 10. The method of claim 9, wherein the set of inputs further comprise (i) a performance specification, (ii) a sound descriptor. 11. The method of claim 10, wherein the performance specification specifies a minimum acceptable word recognition rate for a subject-matter domain of the speech. 12. The method of claim 10, wherein the performance specification specifies a maximum acceptable word error rate for a subject-matter domain of the speech. 13. The method of claim 10, wherein the sound descriptor comprises a prosody of the speech. 14. The method of claim 10, wherein the sound descriptor comprises an accent used by a speaker of the speech. 15. The method of claim 10, wherein the sound descriptor comprises a dialect of a language used by a speaker of the speech. 16. A computer usable program product comprising one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices, the stored program instructions comprising:
program instructions to select, to recognize spoken words in a speech signal generated from a speech, a model-pair, the model-pair comprising an acoustic model and a language model; program instructions to compute a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model; program instructions to determine, responsive to the acoustic model incorrectly recognizing a portion of the speech signal as a first word and the language model correctly recognizing the portion of the speech signal as a second word, a textual representation of the second word; program instructions to associate with the textual representation, a set of sound descriptors; program instructions to generate, using the textual representation and the set of sound descriptors, a training speech pattern; and program instructions to train, using the training speech pattern to produce a retrained acoustic model, the acoustic model to recognize the portion of the speech signal as the second word, the training causing the retrained acoustic model and the language model to recognize the portion of the speech signal as the second word. 17. The computer usable program product of claim 16, further comprising:
program instructions to determine, responsive to the language model incorrectly recognizing a second portion of the speech signal as a third word and the acoustic model correctly recognizing the second portion of the speech signal as a fourth word, a second textual representation of the fourth word; program instructions to train, using the second textual representation to produce a retrained language model, the language model to recognize the second portion of the speech signal as the fourth word, the training causing the retrained language model and the acoustic model to recognize the second portion of the speech signal as the fourth word. 18. The computer usable program product of claim 16, wherein the computer usable code is stored in a computer readable storage device in a data processing system, and wherein the computer usable code is transferred over a network from a remote data processing system. 19. The computer usable program product of claim 16, wherein the computer usable code is stored in a computer readable storage device in a server data processing system, and wherein the computer usable code is downloaded over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system. 20. A computer system comprising one or more processors, one or more computer-readable memories, and one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, the stored program instructions comprising:
program instructions to select, to recognize spoken words in a speech signal generated from a speech, a model-pair, the model-pair comprising an acoustic model and a language model; program instructions to compute a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model; program instructions to determine, responsive to the acoustic model incorrectly recognizing a portion of the speech signal as a first word and the language model correctly recognizing the portion of the speech signal as a second word, a textual representation of the second word; program instructions to associate with the textual representation, a set of sound descriptors; program instructions to generate, using the textual representation and the set of sound descriptors, a training speech pattern; and program instructions to train, using the training speech pattern to produce a retrained acoustic model, the acoustic model to recognize the portion of the speech signal as the second word, the training causing the retrained acoustic model and the language model to recognize the portion of the speech signal as the second word. 21. A method comprising:
determining, that an acoustic model has incorrectly recognized a first portion of a speech signal as a first word and a language model has correctly recognized the first portion as a second word; determining that the language model has incorrectly recognized a second portion of the speech signal as a third word and the acoustic model has correctly recognized the second portion as a fourth word; generating, using a textual representation of the second word and a set of sound descriptors, a training speech pattern; training, using the training speech pattern, the acoustic model to recognize the first portion of the speech signal as the second word; and training, using a textual representation of the fourth word, the language model to recognize the second portion of the speech signal as the fourth word. 22. The method of claim 21, further comprising:
selecting, to recognize spoken words in the speech signal generated from a speech, a model-pair, the model-pair comprising the acoustic model and the language model; computing a degree of disjointedness between the acoustic model and the language model relative to the speech by comparing, responsive to the model pair performing speech recognition on the speech signal, a first recognition output produced from the acoustic model and a second recognition output produced from the language model. 23. The method of claim 22, further comprising:
causing a second degree of disjointedness between (i) either of the acoustic model and the retrained acoustic model and (ii) the retrained language model, to be lower than the degree of disjointedness. 24. The method of claim 21, further comprising:
determining a severity of an error associated with the language model incorrectly recognizing the second portion of the speech signal as the third word; boosting a number of occurrences of the second textual representation in a language training data used in the training of the language model, wherein the number of occurrences is a function of the severity. 25. The method of claim 21, further comprising:
determining a severity of an error associated with the acoustic model incorrectly recognizing the portion of the speech signal as the first word; boosting a number of occurrences of the training speech pattern in an acoustic training data used in the training, wherein the number of occurrences is a function of the severity. | 2,600 |
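Two mechanisms recur throughout the claims above: claim 8 defines the degree of disjointedness as a function of the number of words one model of the pair recognizes correctly while the other recognizes incorrectly, and claims 4, 5, 24, and 25 boost the number of occurrences of a corrective pattern in the training data as a function of error severity. A hedged sketch of both, assuming word-aligned transcripts; the function names, the linear boosting formula, and its defaults are illustrative, not the patent's actual method:

```python
def degree_of_disjointedness(reference, acoustic_out, language_out):
    """Fraction of word positions where exactly one of the two models
    matches the reference transcript (one model correct, the other not)."""
    if not reference:
        return 0.0
    disjoint = sum(
        1
        for ref, a, l in zip(reference, acoustic_out, language_out)
        if (a == ref) != (l == ref)
    )
    return disjoint / len(reference)

def boost_training_data(training_data, corrective_pattern, severity,
                        base_copies=1, copies_per_unit=2):
    """Duplicate a corrective training pattern in the training set; the
    number of copies grows with the severity of the recognition error."""
    copies = base_copies + copies_per_unit * severity
    return list(training_data) + [corrective_pattern] * copies
```

With a reference of four words where the acoustic model misses one word and the language model misses a different one, the disjointedness is 0.5: each error position counts because exactly one model was correct there, which is the signal the claims use to decide which model to retrain on which portion.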
10,578 | 10,578 | 15,202,525 | 2,657 | Computationally implemented methods and systems include detecting speech data related to a speech-facilitated transaction, acquiring adaptation data that is at least partly based on at least one speech interaction of a particular party that is discrete from the detected speech data, wherein at least a portion of the adaptation data has been stored on a particular device associated with the particular party, obtaining a destination of one or more of the adaptation data and the speech data, and transmitting one or more of the speech data and the adaptation data to the acquired destination. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 1-269. (canceled) 270. A device, comprising:
a memory; and a processor operably coupled to the memory, the processor including:
a speech data related to speech facilitated transaction detecting module;
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module configured to acquire adaptation data that is at least partly based on at least one speech interaction of a particular party that is discrete from the detected speech data, wherein at least a portion of the adaptation data has been stored on a particular device associated with the particular party;
a destination of one or more of the adaptation data and the detected speech data acquiring module configured to acquire the destination of one or more of the adaptation data and the detected speech data at least partially based on one or more of the adaptation data and the detected speech data; and
an acquired destination of one or more of the adaptation data and the detected speech data transmitting module. 271. The device of claim 270, wherein said speech data related to speech facilitated transaction detecting module comprises:
an adaptation data receiving module; and a reception of adaptation data-based speech data transferring determination module. 272. The device of claim 270, wherein said speech data related to speech facilitated transaction detecting module comprises:
a transmission of speech data by device detecting module; and a data regarding detected device transmitting speech data collecting module configured to collect data regarding the detected device that is transmitting speech data. 273. The device of claim 270, wherein said speech data related to speech facilitated transaction detecting module comprises:
a speech data comprising previously recorded particular party speech and timestamp of recording speech receiving module. 274. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party occurring prior to speech interaction generating detected speech data, and has been stored on a particular party-associated particular device acquiring module. 275. (canceled) 276. (canceled) 277. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party occurring prior to speech interaction generating detected speech data, and has been stored on a particular party-associated particular device acquiring in response to condition module. 278. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device module. 279. The device of claim 278, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device module comprises:
an adaptation data originating at further device and at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device module. 280. The device of claim 278, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device related to the particular device module. 281. The device of claim 280, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device related to the particular device module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device associated with the particular party module. 282. The device of claim 280, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device related to the particular device module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device at least partially controlled by the particular device module. 283. The device of claim 278, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring from a further device that received the adaptation data from the particular device module. 284. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party with particular type of device separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module. 285. The device of claim 284, wherein said adaptation data at least partly based on discrete speech interaction of particular party with particular type of device separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party with device of same type as target device configured to receive speech data, said discrete interaction separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module. 286. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party with particular device separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module. 287. (canceled) 288. (canceled) 289. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data and using same utterance as speech that is part of speech data, and has been stored on a particular party-associated particular device acquiring module. 290. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party and using same utterance as speech that is part of speech data at a different time than speech that is part of the speech data receiving module. 291. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data comprising a location of instructions for modifying one or more portions of a speech recognition component of a target device that are at least partly based on one or more particular party speech interactions, and has been stored on a particular party-associated particular device acquiring module. 292. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and is temporarily stored on the particular party-associated particular device until remote server deposit receiving module. 293. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and was transmitted from a first device to a second device using the particular party-associated particular device as a channel configured to facilitate the transaction receiving module. 294. The device of claim 270, wherein said adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data, and has been stored on a particular party-associated particular device acquiring module comprises:
an adaptation data at least partly based on discrete speech interaction of particular party separate from detected speech data receiving module; and a further data adding to adaptation data module configured to add further data to the received adaptation data. 295. The device of claim 270, wherein said destination of one or more of the adaptation data and the detected speech data acquiring module configured to acquire the destination of one or more of the adaptation data and the detected speech data at least partially based on one or more of the adaptation data and the detected speech data comprises:
a data regarding target device configured to process speech data module configured to receive data regarding a target device configured to process the detected speech data. 296. The device of claim 295, wherein said data regarding target device configured to process speech data module comprises:
a data comprising a target device configured to process detected speech data address receiving module. 297. The device of claim 295, wherein said data regarding target device configured to process speech data module comprises:
a data comprising a target device configured to process detected speech data location receiving module. 298. The device of claim 270, wherein said destination of one or more of the adaptation data and the detected speech data acquiring module configured to acquire the destination of one or more of the adaptation data and the detected speech data at least partially based on one or more of the adaptation data and the detected speech data comprises:
a target device location as destination of one or more of the adaptation data and the detected speech data determining module. 299. The device of claim 298, wherein said target device location as destination of one or more of the adaptation data and the detected speech data determining module comprises:
a target device network location as destination of one or more of the adaptation data and the detected speech data determining module. 300. The device of claim 270, wherein said destination of one or more of the adaptation data and the detected speech data acquiring module configured to acquire the destination of one or more of the adaptation data and the detected speech data at least partially based on one or more of the adaptation data and the detected speech data comprises:
a type of device for which one or more of the adaptation data and the detected speech data is a destination obtaining module. 301. The device of claim 270, wherein said destination of one or more of the adaptation data and the detected speech data acquiring module configured to acquire the destination of one or more of the adaptation data and the detected speech data at least partially based on one or more of the adaptation data and the detected speech data comprises:
a data regarding at least one other device configured to process detected speech data obtaining module; and a destination of the detected speech data determining based on acquired data regarding at least one other device determining module. 302. The device of claim 301, wherein said data regarding at least one other device configured to process detected speech data obtaining module comprises:
an at least one or more other device configured to process detected speech data detecting module configured to detect one or more other devices configured to process detected speech data. 303. The device of claim 301, wherein said data regarding at least one other device configured to process detected speech data obtaining module comprises:
a data regarding at least one other device configured to process detected speech data acquiring from adaptation data module. 304. The device of claim 301, wherein said data regarding at least one other device configured to process detected speech data obtaining module comprises:
a detecting at least one or more other devices configured to process detected speech data module; and a determining whether detected speech data is intended to be applied by one of the one or more other devices module. 305. The device of claim 301, wherein said data regarding at least one other device configured to process detected speech data obtaining module comprises:
a detecting one or more other devices configured to process detected speech data module; a signal requesting data regarding a capability of the one or more other devices transmitting module; and a data regarding capability of the one or more other devices receiving module. 306. The device of claim 301, wherein said data regarding at least one other device configured to process detected speech data obtaining module comprises:
a one or more other devices configured to process detected speech data detecting module; and a capability of the detected one or more other devices configured to process detected speech data receiving module. 307. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
an other device data regarding a capability of one or more other devices configured to process detected speech data obtaining module configured to obtain other device data regarding a capability of one or more other devices configured to process detected speech data; and a destination for one or more of the adaptation data and the detected speech data determining at least partly based on the acquired other device data module. 308. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
an acquired destination of one or more of the adaptation data and the detected speech data transmitting to target device module. 309. The device of claim 308, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting to target device module comprises:
a detected speech data to target device acquired as destination transmitting module. 310. The device of claim 309, wherein said detected speech data to target device acquired as destination transmitting module comprises:
a detected speech data converting into target device recognizable data module configured to convert detected speech data into data that is recognizable by the target device; and a converted detected speech data transmitting to target device acquired as destination module. 311. The device of claim 310, wherein said detected speech data converting into target device recognizable data module comprises:
a detected target device unrecognizable speech data converting into target device recognizable data module. 312. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
a one or more filters specified by the acquired adaptation data applying to detected speech data module; and a filter-applied detected speech data transmitting to acquired destination module. 313. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
a one or more of detected speech data and adaptation data transmitting to particular target device component module configured to transmit one or more of the speech data and the adaptation data to a particular component of a target device. 314. The device of claim 313, wherein said one or more of detected speech data and adaptation data transmitting to particular target device component module comprises:
a one or more of detected speech data and adaptation data transmitting to target device speech recognition component module. 315. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
a one or more of speech data and adaptation data configured to be processed by a target device transmitting to further device module configured to transmit one or more of the detected speech data and the adaptation data to a further device, said one or more of the detected speech data and the adaptation data configured to be processed by a target device.
a detecting one or more other devices configured to process detected speech data module; a signal requesting data regarding a capability of the one or more other devices transmitting module; and a data regarding capability of the one or more other devices receiving module. 306. The device of claim 301, wherein said data regarding at least one other device configured to process detected speech data obtaining module comprises:
a one or more other devices configured to process detected speech data detecting module; and a capability of the detected one or more other devices configured to process detected speech data receiving module. 307. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
an other device data regarding a capability of one or more other devices configured to process detected speech data obtaining module configured to obtain other device data regarding a capability of one or more other devices configured to process detected speech data; and a destination for one or more of the adaptation data and the detected speech data determining at least partly based on the acquired other device data module. 308. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
an acquired destination of one or more of the adaptation data and the detected speech data transmitting to target device module. 309. The device of claim 308, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting to target device module comprises:
a detected speech data to target device acquired as destination transmitting module. 310. The device of claim 309, wherein said detected speech data to target device acquired as destination transmitting module comprises:
a detected speech data converting into target device recognizable data module configured to convert detected speech data into data that is recognizable by the target device; and a converted detected speech data transmitting to target device acquired as destination module. 311. The device of claim 310, wherein said detected speech data converting into target device recognizable data module comprises:
a detected target device unrecognizable speech data converting into target device recognizable data module. 312. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
a one or more filters specified by the acquired adaptation data applying to detected speech data module; and a filter-applied detected speech data transmitting to acquired destination module. 313. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
a one or more of detected speech data and adaptation data transmitting to particular target device component module configured to transmit one or more of the speech data and the adaptation data to a particular component of a target device. 314. The device of claim 313, wherein said one or more of detected speech data and adaptation data transmitting to particular target device component module comprises:
a one or more of detected speech data and adaptation data transmitting to target device speech recognition component module. 315. The device of claim 270, wherein said acquired destination of one or more of the adaptation data and the detected speech data transmitting module comprises:
a one or more of speech data and adaptation data configured to be processed by a target device transmitting to further device module configured to transmit one or more of the detected speech data and the adaptation data to a further device, said one or more of the detected speech data and the adaptation data configured to be processed by a target device. | 2,600 |
10,579 | 10,579 | 15,772,202 | 2,627 | Devices, systems and methods are provided for playing back video or other dynamic content at a point of interaction with one or more users via a device that is configured on a sticker, label, card or other substrate. The device is self-powered and employs ambient radio frequency energy harvesting to charge a renewable, rechargeable energy storage element. The device has a display for displaying static content until dynamic content such as the video content is output in response to a user input. The display on the device can be used for outputting the video content, or the device can transmit the video content to an NFC-enabled smart phone for display without the smart phone having to download the video content from the internet. | 1. A device for display of dynamic content at a point of user interaction comprising:
a display; a memory device for storing content comprising the dynamic content; a processor configured to controllably output the stored content; and an ambient energy collecting and storage device configured to receive ambient radio frequency energy available at the point of user interaction and charge an energy storage element without use of an external AC or DC power source, the energy storage element being configured to supply power to the display, the memory device and the processor to controllably output the stored content. 2. The device of claim 1, wherein the device comprises a substrate on which the display, the memory device, the processor and the ambient energy collecting and storage device are mounted, the substrate having a top side from which the display is viewable by a user and a bottom side for mounting the substrate at the point of user interaction. 3. The device of claim 2, wherein the substrate has dimensions comprising a height in the range of 2″-5″ and a length in the range of 4″-7″. 4. The device of claim 2, wherein the substrate has dimensions comprising a thickness in the range of 0.125″-0.5″. 5. The device of claim 1, wherein the dynamic content is a video segment. 6. The device of claim 5, wherein the video segment has a duration in the range of 5 seconds-180 seconds. 7. The device of claim 1, wherein the processor is configured to play back the dynamic content via the display automatically in response to a user input. 8. The device of claim 7, wherein the device comprises a user input device selected from the group consisting of a tactile switch, a touchscreen area on the display, and a user proximity sensor. 9. The device of claim 7, wherein the processor plays the dynamic content one time per user input. 10. The device of claim 7, wherein the processor plays the dynamic content continuously in a loop. 11. The device of claim 7, wherein the processor plays the dynamic content periodically. 12. 
The device of claim 1, wherein the memory device stores static content, and the processor is configured to display the static content until a user input signal is received and then output the dynamic content in response to the user input signal. 13. The device of claim 12, wherein the user input signal is received from a user-activated input device selected from the group consisting of a tactile switch, a touchscreen area on the display, and a user proximity sensor. 14. The device of claim 12, wherein the processor is configured to transmit the stored dynamic content to a proximal smart mobile device in response to a user input for playback on a display of the smart mobile device. 15. The device of claim 14, wherein the smart mobile device is near field communication or NFC-enabled and the device further comprises a near field communication circuit and the user input is a user bringing the smart mobile device into proximity with the near field communication circuit of the device, and the processor is configured to transmit the stored dynamic content to the smart mobile device via the near field communication circuit when the near field communication circuit is activated by proximity of the smart mobile device. 16. The device of claim 1, wherein the dynamic content is displayed as a series of successive images. 17. The device of claim 1, wherein the ambient energy collecting and storage device comprises at least one antenna configured to receive ambient radio frequency signals, a matching circuit, a voltage multiplier and an energy storage element. 18. The device of claim 1, wherein the ambient energy collecting and storage device comprises a photovoltaic cell configured to collect ambient light energy and produce a corresponding voltage input to a voltage multiplier connected to an energy storage element, or directly to the energy storage element. 
10,580 | 10,580 | 15,480,521 | 2,697 | Image sensors may include an array of image sensor pixels. A subset of the array, sometimes referred to as a region of interest (“ROI”), can be read out using vertical readout lines and diagonal readout lines. The diagonal readout lines enable multiple adjacent or nonadjacent rows in the ROI to be simultaneously read out. Configured and operated in this way, frame rate gains can be achieved by reducing the ROI size in both the Y direction and the X direction. | 1. An image sensor comprising:
an array of image sensor pixels; data converters that receive signals from the array of image sensor pixels; a plurality of vertical readout lines coupling the array of image sensor pixels to the data converters; and a plurality of diagonal readout lines coupling the array of image sensor pixels to the data converters. 2. The image sensor of claim 1, wherein the data converters comprise analog-to-digital converters. 3. The image sensor of claim 1, wherein the array of image sensor pixels is arranged in rows and columns, and each image sensor pixel in a given one of the columns is coupled to a corresponding common vertical readout line in the plurality of vertical readout lines. 4. The image sensor of claim 3, wherein each image sensor pixel in the given one of the columns is coupled to a different respective diagonal readout line in the plurality of diagonal readout lines. 5. The image sensor of claim 1, wherein a subset of the array comprises a region of interest (ROI), and the array is configured to simultaneously transfer signals from at least two adjacent rows in the ROI to the data converters. 6. The image sensor of claim 1, wherein a subset of the array comprises a region of interest (ROI), and the array is configured to simultaneously transfer signals from at least two nonadjacent rows in the ROI to the data converters. 7. The image sensor of claim 1, wherein a subset of the array comprises a region of interest (ROI), and the array is configured to simultaneously transfer signals from at least four rows in the ROI to the data converters. 8. The image sensor of claim 1, wherein an image sensor pixel in the array is coupled to at least one of the plurality of vertical readout lines and is also coupled to at least one of the plurality of diagonal readout lines. 9. The image sensor of claim 8, wherein the image sensor pixel comprises:
a source follower transistor; a first select transistor that is coupled between the source follower transistor and the at least one of the plurality of vertical readout lines; and a second select transistor that is coupled between the first select transistor and the at least one of the plurality of vertical readout lines. 10. The image sensor of claim 9, wherein the image sensor pixel further comprises:
a third select transistor coupled between the first select transistor and the at least one of the plurality of diagonal readout lines. 11. The image sensor of claim 10, wherein the image sensor pixel further comprises:
fourth and fifth select transistors coupled in series between the at least one of the plurality of diagonal readout lines and the at least one of the plurality of vertical readout lines. 12. A method of operating an image sensor, the method comprising:
using an array of pixels in the image sensor to capture an image, wherein the array of pixels is arranged in rows and columns; simultaneously reading signals out from at least two different rows in the array; and receiving the signals from the at least two different rows in the array at a plurality of data converters. 13. The method of claim 12, further comprising:
routing the signals from one of the at least two different rows to the plurality of data converters via vertical readout lines; and routing the signals from another one of the at least two different rows to the plurality of data converters via diagonal readout lines. 14. The method of claim 12, wherein simultaneously reading signals out from at least two different rows in the array comprises simultaneously reading signals out from at least two adjacent rows in the array. 15. The method of claim 12, wherein simultaneously reading signals out from at least two different rows in the array comprises simultaneously reading signals out from at least two nonadjacent rows in the array. 16. The method of claim 12, wherein simultaneously reading signals out from at least two different rows in the array comprises simultaneously reading signals out from at least four different rows in the array. 17. An electronic device comprising:
a camera module with an image sensor, the image sensor comprising:
an array of pixels;
data converters that receive signals from the array of pixels; and
diagonal readout lines that route the signals from the array of pixels to the data converters. 18. The electronic device of claim 17, wherein the array of pixels is configured to simultaneously transfer signals from at least two rows in the array to the data converters. 19. The electronic device of claim 17, the image sensor further comprising:
vertical column readout lines that route the signals from the array of pixels to the data converters, wherein the vertical column readout lines and the diagonal readout lines are neither parallel nor perpendicular to each other. 20. The electronic device of claim 19, wherein a pixel in the array of pixels comprises:
a source follower transistor; a first select transistor coupled between the source follower transistor and at least one of the vertical column readout lines, the first select transistor is controlled by a first global vertical select signal; a second select transistor coupled between the first select transistor and the at least one of the vertical column readout lines, the second select transistor is controlled by a first horizontal select signal; a third select transistor coupled between the first select transistor and at least one of the diagonal readout lines, the third select transistor is controlled by a second horizontal select signal; a fourth select transistor coupled between the at least one of the diagonal readout lines and the at least one of the vertical column readout lines, the fourth select transistor is controlled by a second global vertical select signal; and a fifth select transistor coupled between the fourth select transistor and the at least one of the vertical column readout lines, the fifth select transistor is controlled by a third horizontal select signal.
10,581 | 10,581 | 14,964,027 | 2,647 | Network devices can be registered to access a network using known host devices to thereby simplify the device registration process. The host device can be an administrator's device that is already registered and authorized to vouch for another user desiring to register his or her device. Alternatively, the host device can be the user's previously registered device and can be used to register the user's additional devices. | 1. A method for registering a user device to access a network via a known host device, the method comprising:
providing, on a known host device, an interface for registering a user device to access a network; receiving, via the interface, input that identifies an identifier of a user of the user device; receiving, from the user device via a direct wireless communication protocol, an identifier of the user device; and transmitting the user identifier and the identifier of the user device to a device management system to enable the device management system to associate the user identifier with the identifier of the user device for the purpose of attributing the user device's network traffic to the user. 2. The method of claim 1, wherein the direct wireless communication protocol is Near Field Communication. 3. The method of claim 1, wherein the direct wireless communication protocol is Bluetooth. 4. The method of claim 1, wherein the user identifier is a username. 5. The method of claim 1, wherein the identifier of the user device is a MAC address. 6. The method of claim 1, wherein the known host device comprises a device registered to an administrator. 7. The method of claim 1, wherein the known host device and the user device are both mobile devices. 8. The method of claim 1, wherein receiving input that identifies an identifier of a user of the user device comprises receiving the user's username and password. 9. The method of claim 1, further comprising:
providing, on the user device, an interface for registering one or more additional user devices to access the network using the user identifier; receiving, from a first additional user device and via a direct wireless communication protocol, an identifier of the first additional user device; and transmitting the user identifier and the identifier of the first additional user device to the device management system to enable the device management system to associate the user identifier with the identifier of the first additional user device for the purpose of attributing the first additional user device's network traffic to the user. 10. The method of claim 9, wherein the direct wireless communication protocol is one or more of Near Field Communication or Bluetooth. 11. The method of claim 1, further comprising:
providing, by the known host device, an access point for connecting to the network; logging an identifier of a second user device that connects to the network via the access point; and storing session information in association with the identifier of the second device, the session information identifying a time period during which the second user device is connected to the network via the access point. 12. The method of claim 11, wherein the identifier of the second user device is a computer name of the second user device. 13. The method of claim 11, further comprising:
transmitting the identifier of the second device and the session information to the device management system for storage in association with a user identifier of a user of the known host device. 14. One or more computer storage media storing computer executable instructions which when executed by one or more processors implement a method for registering a user device to access a network via a known host device, the method comprising:
providing, on a known host device, an interface for registering a user device to access a network; receiving, via the interface, a username of a user of the user device; receiving, from the user device via a direct wireless communication protocol, a MAC address of the user device; and transmitting the username and MAC address to a device management system to enable the device management system to associate the username with the MAC address for the purpose of attributing the user device's network traffic to the user. 15. The computer storage media of claim 14, wherein the direct wireless communication protocol is Near Field Communication. 16. The computer storage media of claim 14, wherein the known host device comprises a device registered to an administrator. 17. A system for registering a user device with a network comprising:
a device management system that includes a database for storing associations between user identifiers and identifiers of user devices that are registered to access the network, the device management system further including one or more network components that are configured to monitor network traffic generated by the user devices; and a registration application configured to be executed on an administrator device and to communicate with the device management system to register user devices with the device management system, the registration application comprising:
an interface for receiving manual input of an identifier of a first user of a first user device; and
an interface for receiving, via a direct wireless communication protocol, an identifier of the first user device;
wherein the registration application is configured to transmit the identifier of the first user and the identifier of the first user device to the device management system to cause the identifier of the first user to be stored in the database in association with the identifier of the first user device to thereby register the first user device to access the network and to allow the device management system to attribute network traffic generated by the first user device to the first user. 18. The system of claim 17, wherein the direct wireless communication protocol is Near Field Communication. 19. The system of claim 17, wherein the identifier of the first user comprises the first user's username that is employed to login to one or more resources on the network. 20. The system of claim 17, wherein the identifier of the first user device is a MAC address of a network card of the first user device. | Network devices can be registered to access a network using known host devices to thereby simplify the device registration process. The host device can be an administrator's device that is already registered and authorized to vouch for another user desiring to register his or her device. Alternatively, the host device can be the user's previously registered device and can be used to register the user's additional devices. 1. A method for registering a user device to access a network via a known host device, the method comprising:
providing, on a known host device, an interface for registering a user device to access a network; receiving, via the interface, input that identifies an identifier of a user of the user device; receiving, from the user device via a direct wireless communication protocol, an identifier of the user device; and transmitting the user identifier and the identifier of the user device to a device management system to enable the device management system to associate the user identifier with the identifier of the user device for the purpose of attributing the user device's network traffic to the user. 2. The method of claim 1, wherein the direct wireless communication protocol is Near Field Communication. 3. The method of claim 1, wherein the direct wireless communication protocol is Bluetooth. 4. The method of claim 1, wherein the user identifier is a username. 5. The method of claim 1, wherein the identifier of the user device is a MAC address. 6. The method of claim 1, wherein the known host device comprises a device registered to an administrator. 7. The method of claim 1, wherein the known host device and the user device are both mobile devices. 8. The method of claim 1, wherein receiving input that identifies an identifier of a user of the user device comprises receiving the user's username and password. 9. The method of claim 1, further comprising:
providing, on the user device, an interface for registering one or more additional user devices to access the network using the user identifier; receiving, from a first additional user device and via a direct wireless communication protocol, an identifier of the first additional user device; and transmitting the user identifier and the identifier of the first additional user device to the device management system to enable the device management system to associate the user identifier with the identifier of the first additional user device for the purpose of attributing the first additional user device's network traffic to the user. 10. The method of claim 9, wherein the direct wireless communication protocol is one or more of Near Field Communication or Bluetooth. 11. The method of claim 1, further comprising:
providing, by the known host device, an access point for connecting to the network; logging an identifier of a second user device that connects to the network via the access point; and storing session information in association with the identifier of the second device, the session information identifying a time period during which the second user device is connected to the network via the access point. 12. The method of claim 11, wherein the identifier of the second user device is a computer name of the second user device. 13. The method of claim 11, further comprising:
transmitting the identifier of the second device and the session information to the device management system for storage in association with a user identifier of a user of the known host device. 14. One or more computer storage media storing computer executable instructions which when executed by one or more processors implement a method for registering a user device to access a network via a known host device, the method comprising:
providing, on a known host device, an interface for registering a user device to access a network; receiving, via the interface, a username of a user of the user device; receiving, from the user device via a direct wireless communication protocol, a MAC address of the user device; and transmitting the username and MAC address to a device management system to enable the device management system to associate the username with the MAC address for the purpose of attributing the user device's network traffic to the user. 15. The computer storage media of claim 14, wherein the direct wireless communication protocol is Near Field Communication. 16. The computer storage media of claim 14, wherein the known host device comprises a device registered to an administrator. 17. A system for registering a user device with a network comprising:
a device management system that includes a database for storing associations between user identifiers and identifiers of user devices that are registered to access the network, the device management system further including one or more network components that are configured to monitor network traffic generated by the user devices; and a registration application configured to be executed on an administrator device and to communicate with the device management system to register user devices with the device management system, the registration application comprising:
an interface for receiving manual input of an identifier of a first user of a first user device; and
an interface for receiving, via a direct wireless communication protocol, an identifier of the first user device;
wherein the registration application is configured to transmit the identifier of the first user and the identifier of the first user device to the device management system to cause the identifier of the first user to be stored in the database in association with the identifier of the first user device to thereby register the first user device to access the network and to allow the device management system to attribute network traffic generated by the first user device to the first user. 18. The system of claim 17, wherein the direct wireless communication protocol is Near Field Communication. 19. The system of claim 17, wherein the identifier of the first user comprises the first user's username that is employed to login to one or more resources on the network. 20. The system of claim 17, wherein the identifier of the first user device is a MAC address of a network card of the first user device. | 2,600 |
10,582 | 10,582 | 15,860,277 | 2,689 | A helmet lighting system includes a display attachable to a helmet which is selectively illuminated in response to receiving a wireless signal transmitted from a wireless transmitter operably coupled to a brake or direction signal light of the motorcycle. The display includes a flexible base housing attachable to an outer surface of the helmet that at least partially houses an illumination module. A flexible applique overlies the illumination module and is at least partially transparent or translucent so as to pass light from LEDs of the illumination module therethrough. The applique is removably attachable to the base housing and/or the illumination module and may be replaced with other appliques having different logos or indicia. | 1. A helmet lighting system, comprising:
a wireless transmitter operably coupled to brake or direction signal lights of a motorcycle for transmitting a wireless signal when the brake or direction signal is actuated; and a display attachable to a helmet, comprising: a base housing attachable to an outer surface of the helmet, the base housing being flexible so as to conform to the outer surface of the helmet; an illumination module at least partially disposed within the base housing, the illumination module comprising a power source, illuminating LEDs, a wireless signal receiver and electronic components for illuminating the LEDs in response to a transmitted wireless signal; and a flexible applique overlying the illumination module, the applique being at least partially transparent or translucent so as to pass light from the LEDs therethrough, the applique being removably attachable to the base housing and/or the illumination module. 2. The system of claim 1, wherein the illumination module is flexible. 3. The system of claim 2, wherein a periphery of the illumination module is disposed within the base housing and the applique is removably adhered to an exposed portion of an outermost layer of the illumination module. 4. The system of claim 1, wherein the illumination module emits a first light color and/or light intensity through the applique during normal operation and emits another light color and/or intensity upon transmission of the wireless signal. 5. The system of claim 4, wherein the illumination module emits a red light through the applique when the brake of the motorcycle is actuated. 6. The system of claim 1, wherein the at least partially transparent or translucent portion of the applique defines a logo and/or word indicia viewable from behind the helmet. 7. The system of claim 6, including a second applique having a second logo and/or indicia, the second applique being removably attachable to the base housing and/or the illumination module in place of the applique. 8. 
The system of claim 1, including a charger for charging a rechargeable battery power source of the illumination module. 9. The system of claim 8, wherein the charger is configured to be removably attachable to the display so as to overlay the display. 10. The system of claim 9, wherein the charger wirelessly charges the rechargeable battery power source of the illumination module. 11. The system of claim 10, wherein the charger has an outer configuration substantially matching that of the display and includes at least a portion that is transparent or translucent defining a logo and/or indicia that is illuminated as the charger recharges the display. 12. A helmet lighting system, comprising:
a wireless transmitter operably coupled to brake or direction signal lights of a motorcycle for transmitting a wireless signal when the brake or direction signal is actuated; a display attachable to a helmet, comprising: a base housing attachable to an outer surface of the helmet, the base housing being flexible so as to conform to the outer surface of the helmet; a flexible illumination module at least partially disposed within the base housing, the illumination module comprising a power source, illuminating LEDs, a wireless signal receiver and electronic components for illuminating the LEDs in response to a transmitted wireless signal; and a flexible applique overlying the illumination module, the applique being at least partially transparent or translucent so as to pass light from the LEDs therethrough, the applique being removably attachable to the base housing and/or the illumination module; and a charger for charging a rechargeable battery of the illumination module, the charger being configured to be removably attachable to the display so as to overlay the display; wherein the illumination module emits a first light color and/or light intensity through the applique during normal operation and emits another light color and/or intensity upon transmission of the wireless signal. 13. The system of claim 12, wherein a periphery of the illumination module is disposed within the base housing and the applique is removably adhered to an exposed portion of an outermost layer of the illumination module. 14. The system of claim 12, wherein the illumination module emits a red light through the applique when the brake of the motorcycle is actuated. 15. The system of claim 12, wherein the at least partially transparent or translucent portion of the applique defines a logo and/or word indicia viewable from behind the helmet. 16. 
The system of claim 15, including a second applique having a second logo and/or indicia, the second applique being removably attachable to the base housing and/or the illumination module in place of the applique. 17. The system of claim 12, wherein the charger wirelessly charges the rechargeable battery power source of the illumination module. 18. The system of claim 12, wherein the charger has an outer configuration substantially matching that of the display and includes at least a portion that is transparent or translucent defining a logo and/or indicia that is illuminated as the charger recharges the display. | A helmet lighting system includes a display attachable to a helmet which is selectively illuminated in response to receiving a wireless signal transmitted from a wireless transmitter operably coupled to a brake or direction signal light of the motorcycle. The display includes a flexible base housing attachable to an outer surface of the helmet that at least partially houses an illumination module. A flexible applique overlies the illumination module and is at least partially transparent or translucent so as to pass light from LEDs of the illumination module therethrough. The applique is removably attachable to the base housing and/or the illumination module and may be replaced with other appliques having different logos or indicia. 1. A helmet lighting system, comprising:
a wireless transmitter operably coupled to brake or direction signal lights of a motorcycle for transmitting a wireless signal when the brake or direction signal is actuated; and a display attachable to a helmet, comprising: a base housing attachable to an outer surface of the helmet, the base housing being flexible so as to conform to the outer surface of the helmet; an illumination module at least partially disposed within the base housing, the illumination module comprising a power source, illuminating LEDs, a wireless signal receiver and electronic components for illuminating the LEDs in response to a transmitted wireless signal; and a flexible applique overlying the illumination module, the applique being at least partially transparent or translucent so as to pass light from the LEDs therethrough, the applique being removably attachable to the base housing and/or the illumination module. 2. The system of claim 1, wherein the illumination module is flexible. 3. The system of claim 2, wherein a periphery of the illumination module is disposed within the base housing and the applique is removably adhered to an exposed portion of an outermost layer of the illumination module. 4. The system of claim 1, wherein the illumination module emits a first light color and/or light intensity through the applique during normal operation and emits another light color and/or intensity upon transmission of the wireless signal. 5. The system of claim 4, wherein the illumination module emits a red light through the applique when the brake of the motorcycle is actuated. 6. The system of claim 1, wherein the at least partially transparent or translucent portion of the applique defines a logo and/or word indicia viewable from behind the helmet. 7. The system of claim 6, including a second applique having a second logo and/or indicia, the second applique being removably attachable to the base housing and/or the illumination module in place of the applique. 8. 
The system of claim 1, including a charger for charging a rechargeable battery power source of the illumination module. 9. The system of claim 8, wherein the charger is configured to be removably attachable to the display so as to overlay the display. 10. The system of claim 9, wherein the charger wirelessly charges the rechargeable battery power source of the illumination module. 11. The system of claim 10, wherein the charger has an outer configuration substantially matching that of the display and includes at least a portion that is transparent or translucent defining a logo and/or indicia that is illuminated as the charger recharges the display. 12. A helmet lighting system, comprising:
a wireless transmitter operably coupled to brake or direction signal lights of a motorcycle for transmitting a wireless signal when the brake or direction signal is actuated; a display attachable to a helmet, comprising: a base housing attachable to an outer surface of the helmet, the base housing being flexible so as to conform to the outer surface of the helmet; a flexible illumination module at least partially disposed within the base housing, the illumination module comprising a power source, illuminating LEDs, a wireless signal receiver and electronic components for illuminating the LEDs in response to a transmitted wireless signal; and a flexible applique overlying the illumination module, the applique being at least partially transparent or translucent so as to pass light from the LEDs therethrough, the applique being removably attachable to the base housing and/or the illumination module; and a charger for charging a rechargeable battery of the illumination module, the charger being configured to be removably attachable to the display so as to overlay the display; wherein the illumination module emits a first light color and/or light intensity through the applique during normal operation and emits another light color and/or intensity upon transmission of the wireless signal. 13. The system of claim 12, wherein a periphery of the illumination module is disposed within the base housing and the applique is removably adhered to an exposed portion of an outermost layer of the illumination module. 14. The system of claim 12, wherein the illumination module emits a red light through the applique when the brake of the motorcycle is actuated. 15. The system of claim 12, wherein the at least partially transparent or translucent portion of the applique defines a logo and/or word indicia viewable from behind the helmet. 16. 
The system of claim 15, including a second applique having a second logo and/or indicia, the second applique being removably attachable to the base housing and/or the illumination module in place of the applique. 17. The system of claim 12, wherein the charger wirelessly charges the rechargeable battery power source of the illumination module. 18. The system of claim 12, wherein the charger has an outer configuration substantially matching that of the display and includes at least a portion that is transparent or translucent defining a logo and/or indicia that is illuminated as the charger recharges the display. | 2,600 |
10,583 | 10,583 | 14,611,446 | 2,696 | Methods, devices and program products are provided for presenting a notification on a device that includes a camera and a processor, detecting a gaze event utilizing the camera and processor, and dismissing the notification based on the gaze event. The device comprises a processor and a display configured to present a notification. The device also comprises a camera configured to collect image frame data, and a local storage medium storing program instructions accessible by the processor. The processor, responsive to execution of the program instructions, detects a gaze event based on the image frame data and dismisses the notification based on the gaze event. The computer program product comprises a non-signal computer readable storage medium comprising computer executable code which presents a notification on a device, the device including a camera and a processor. | 1. A method, comprising:
presenting, using a processor, a notification on a device; detecting a gaze event utilizing a camera and processor of the device; and dismissing the notification based on the gaze event. 2. The method of claim 1, wherein the presenting includes displaying the notification on a display of the device while the device is in a restricted access mode. 3. The method of claim 1, further comprising receiving notice-related data at the device, the notification including a preview regarding the notice-related data received. 4. The method of claim 1, further comprising:
operating an application on the device, the notification representing a control feature associated with the application; and entering the control feature in response to the gaze event. 5. The method of claim 1, wherein the detecting includes identifying gaze engagement representative of a gaze direction vector being directed at the device, and identifying a gaze termination representative of a gaze direction vector being directed away from the device. 6. The method of claim 5, wherein the identifying includes:
capturing image frame data representing a user's face with the camera; detecting eye movement from the image frame data; and calculating, utilizing the processor, the gaze direction vectors from the eye movement to identify the gaze engagement and gaze termination. 7. The method of claim 1, wherein the dismissing includes processing a receipt acknowledgement in connection with the notification. 8. A device, comprising:
a processor; a display to present a notification; a camera to collect image frame data; a local storage medium storing program instructions accessible by the processor; wherein, responsive to execution of the program instructions, the processor: detects a gaze event based on the image frame data; and dismisses the notification based on the gaze event. 9. The device of claim 8, wherein the display displays the notification while the device is in a restricted access state. 10. The device of claim 8, wherein the processor receives content, the display displaying, as the notification, a notification preview including a portion of the content. 11. The device of claim 8, wherein the processor operates an application on the device, the display presenting, within the notification, a control feature associated with the application, the processor entering the control feature in response to the gaze event. 12. The device of claim 8, wherein the processor identifies a gaze engagement representative of a gaze direction vector being directed at the device, and identifies a gaze termination representative of a gaze direction vector being directed away from the device, the gaze engagement and gaze termination defining the gaze event. 13. The device of claim 12, wherein the camera captures image frame data representing a user's face, the processor analyzes the image frame data to detect eye movement and calculates gaze direction vectors from the eye movement to identify the gaze engagement and gaze termination. 14. The device of claim 8, wherein the processor processes a receipt acknowledgement in connection with the notification. 15. 
The device of claim 8, wherein the local storage medium stores a series of image data frames associated with scenes that appear in a field of view of the camera over time, and feature of interest (FOI) position data indicating a location of a feature of interest in the corresponding image data frames, the processor calculating gaze direction vectors based on the FOI position data. 16. The device of claim 8, wherein the camera collects a series of image data frames associated with scenes that appear in a field of view of the camera over time when a notification is presented on the display and continuing for a predetermined period of time. 17. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
presenting a notification on a device, the device including a camera and a processor; detecting, utilizing the camera and processor, a gaze event based on the image frame data captured by the camera; and dismissing the notification based on the gaze event. 18. The computer program product of claim 17, further comprising code to perform analysis of images captured by the camera to detect a line of sight of eyes of a user face in a field of view of the camera. 19. The computer program product of claim 17, wherein the receiving operation includes one or more of:
analyze image data frames to determine a line of sight of a user in connection with the image data frames; determine changes in the line of sight from a first direction that excludes the notification to a second direction that includes the notification; and set a gaze engagement flag indicating that the line of sight has changed to the second direction that includes the notification. 20. The computer program product of claim 17, further comprising code to:
analyze image data frames to determine a line of sight of a user in connection with the image data frames; determine changes in the line of sight from a first direction that includes the notification to a second direction that excludes the notification; and set a gaze termination flag indicating that the line of sight has changed to the second direction that excludes the notification. | Methods, devices and program products are provided for presenting a notification on a device that includes a camera and a processor, detecting a gaze event utilizing the camera and processor, and dismissing the notification based on the gaze event. The device comprises a processor and a display configured to present a notification. The device also comprises a camera configured to collect image frame data, and a local storage medium storing program instructions accessible by the processor. The processor, responsive to execution of the program instructions, detects a gaze event based on the image frame data and dismisses the notification based on the gaze event. The computer program product comprises a non-signal computer readable storage medium comprising computer executable code which presents a notification on a device, the device including a camera and a processor. 1. A method, comprising:
presenting, using a processor, a notification on a device; detecting a gaze event utilizing a camera and processor of the device; and dismissing the notification based on the gaze event. 2. The method of claim 1, wherein the presenting includes displaying the notification on a display of the device while the device is in a restricted access mode. 3. The method of claim 1, further comprising receiving notice-related data at the device, the notification including a preview regarding the notice-related data received. 4. The method of claim 1, further comprising:
operating an application on the device, the notification representing a control feature associated with the application; and entering the control feature in response to the gaze event. 5. The method of claim 1, wherein the detecting includes identifying gaze engagement representative of a gaze direction vector being directed at the device, and identifying a gaze termination representative of a gaze direction vector being directed away from the device. 6. The method of claim 5, wherein the identifying includes:
capturing image frame data representing a user's face with the camera; detecting eye movement from the image frame data; and calculating, utilizing the processor, the gaze direction vectors from the eye movement to identify the gaze engagement and gaze termination. 7. The method of claim 1, wherein the dismissing includes processing a receipt acknowledgement in connection with the notification. 8. A device, comprising:
a processor; a display to present a notification; a camera to collect image frame data; a local storage medium storing program instructions accessible by the processor; wherein, responsive to execution of the program instructions, the processor: detects a gaze event based on the image frame data; and dismisses the notification based on the gaze event. 9. The device of claim 8, wherein the display displays the notification while the device is in a restricted access state. 10. The device of claim 8, wherein the processor receives content, the display displaying, as the notification, a notification preview including a portion of the content. 11. The device of claim 8, wherein the processor operates an application on the device, the display presenting, within the notification, a control feature associated with the application, the processor entering the control feature in response to the gaze event. 12. The device of claim 8, wherein the processor identifies a gaze engagement representative of a gaze direction vector being directed at the device, and identifies a gaze termination representative of a gaze direction vector being directed away from the device, the gaze engagement and gaze termination defining the gaze event. 13. The device of claim 12, wherein the camera captures image frame data representing a user's face, the processor analyzes the image frame data to detect eye movement and calculates gaze direction vectors from the eye movement to identify the gaze engagement and gaze termination. 14. The device of claim 8, wherein the processor processes a receipt acknowledgement in connection with the notification. 15. 
The device of claim 8, wherein the local storage medium stores a series of image data frames associated with scenes that appear in a field of view of the camera over time, and feature of interest (FOI) position data indicating a location of a feature of interest in the corresponding image data frames, the processor calculating gaze direction vectors based on the FOI position data. 16. The device of claim 8, wherein the camera collects a series of image data frames associated with scenes that appear in a field of view of the camera over time when a notification is presented on the display and continuing for a predetermined period of time. 17. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
presenting a notification on a device, the device including a camera and a processor; detecting, utilizing the camera and processor, a gaze event based on image frame data captured by the camera; and dismissing the notification based on the gaze event. 18. The computer program product of claim 17, further comprising code to perform analysis of images captured by the camera to detect a line of sight of eyes of a user face in a field of view of the camera. 19. The computer program product of claim 17, wherein the detecting operation includes one or more of:
analyze image data frames to determine a line of sight of a user in connection with the image data frames; determine changes in the line of sight from a first direction that excludes the notification to a second direction that includes the notification; and set a gaze engagement flag indicating that the line of sight has changed to the second direction that includes the notification. 20. The computer program product of claim 17, further comprising code to:
analyze image data frames to determine a line of sight of a user in connection with the image data frames; determine changes in the line of sight from a first direction that includes the notification to a second direction that excludes the notification; and set a gaze termination flag indicating that the line of sight has changed to the second direction that excludes the notification. | 2,600 |
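The gaze-event claims above (engagement when the gaze direction vector points at the device, termination when it points away, the pair dismissing the notification) can be sketched minimally. This is an illustrative assumption, not the patent's implementation: the per-frame eye-movement analysis and gaze-vector calculation of claim 13 are stubbed as a hypothetical boolean input per frame, and all names are invented.

```python
class NotificationGazeDismisser:
    """Tracks a gaze event: a gaze engagement (gaze directed at the
    device) followed by a gaze termination (gaze directed away).
    The completed event dismisses the pending notification."""

    def __init__(self) -> None:
        self.engaged = False
        self.dismissed = False

    def process_frame(self, gaze_on_device: bool) -> None:
        # Engagement: gaze direction vector first lands on the device.
        if gaze_on_device and not self.engaged:
            self.engaged = True
        # Termination after engagement completes the gaze event.
        elif self.engaged and not gaze_on_device and not self.dismissed:
            self.dismissed = True


d = NotificationGazeDismisser()
for gaze_on_device in (False, True, True, False):
    d.process_frame(gaze_on_device)
print(d.dismissed)  # True
```

Frames where the gaze never reaches the device leave `dismissed` False, matching the requirement that engagement and termination together define the event.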
10,584 | 10,584 | 15,611,754 | 2,657 | Computer-generated feedback directed to whether user speech input meets subjective criteria is provided through the evaluation of multiple speaking traits. Initially, discrete instances of various multiple speaking traits are detected within the user speech input provided. Such multiple speaking traits include vocal fry, tag questions, uptalk, filler sounds and hedge words. Audio constructs indicative of individual instances of speaking traits are isolated and identified from appropriate samples. Speaking trait detectors then utilize such audio constructs to identify individual instances of speaking traits within the spoken input. The resulting quantities are scored based on reference to predetermined threshold quantities. The individual speaking trait scores are then amalgamated utilizing a weighting that is derived based on empirical relationships between those speaking traits and the criteria for which the user's speech input is being evaluated. Further adjustments thereof can be made by separately, manually weighting the previously determined quantities. | 1. A computing device comprising:
one or more processing units; and one or more computer-readable media comprising computer-executable instructions which, when executed by the one or more processing units, cause the computing device to:
receive speech sample data obtained by digitizing audio generated by a user speaking a speech sample;
detect instances of individual ones of multiple speaking traits in the speech sample data, each speaking trait being both different from, and independent of, others of the multiple speaking traits;
generate, for each of the multiple speaking traits, speaking trait quantity scores by comparing a quantity of detected instances of each speaking trait to a corresponding predetermined speaking trait quantity threshold;
generate, for each of the speaking trait quantity scores, weighted speaking trait quantity scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait quantity weight, each predetermined speaking trait quantity weight being based on an empirically-derived correlation between one of the multiple speaking traits and one or more criteria for which the user's speaking the speech sample is being evaluated;
amalgamate the weighted speaking trait quantity scores into an amalgamated speaking trait quantity score;
generate, for each of the speaking trait quantity scores, social value weighted speaking trait scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait social value weight that was manually set;
amalgamate the social value weighted speaking trait scores into an amalgamated social value weighted speaking trait score; and
provide feedback based on a combination of the amalgamated speaking trait quantity score and the amalgamated social value weighted speaking trait score, the feedback being indicative of the user's speaking the speech sample exhibiting one or more criteria. 2. The computing device of claim 1, wherein the multiple speaking traits comprise at least two of: vocal fry, uptalk, tag questions, filler sounds or hedge words. 3. The computing device of claim 1, wherein the multiple speaking traits comprise: vocal fry, uptalk, tag questions, filler sounds and hedge words. 4. The computing device of claim 1, wherein the speaking trait quantity threshold is based on a quantity of instances of that speaking trait that were detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria. 5. The computing device of claim 1, wherein the empirically-derived correlation between the one of the multiple speaking traits and the one or more criteria for which the user's speaking the speech sample is being evaluated is based on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria and on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as poorly exhibiting the one or more criteria. 6. The computing device of claim 1, wherein the computer-executable instructions for detecting instances of the individual ones of the multiple speaking traits in the speech sample data comprise computer-executable instructions, which, when executed by the one or more processing units, cause the computing device to match portions of the audio generated by the user speaking the speech sample to audio constructs that were previously determined to be representative of the individual ones of the multiple speaking traits. 7. 
The computing device of claim 6, wherein the audio constructs that were previously determined to be representative of the individual ones of the multiple speaking traits were identified by identifying similarities among a set of speech samples that were previously determined to comprise the individual ones of the multiple speaking traits. 8. A set of one or more computing devices, comprising, in aggregate:
one or more processing units; and one or more computer-readable media comprising computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to:
receive a first set of multiple speech samples, each identified to comprise a first speaking trait;
identify a first set of audio constructs that are common among the first set of multiple speech samples;
associate the first set of audio constructs with the first speaking trait such that identification of one instance of the first set of audio constructs in a new speech sample causes the set of computing devices to identify that new speech sample as comprising at least one instance of the first speaking trait;
repeat the receiving, the identifying, and the associating for each of others of multiple speaking traits, each speaking trait being both different from, and independent of, others of the multiple speaking traits, the multiple speaking traits also comprising the first speaking trait;
receive speech sample data obtained by digitizing audio generated by a user speaking a speech sample;
detect instances of individual ones of the multiple speaking traits in the speech sample data by reference to the identified audio constructs that were associated with the individual ones of multiple speaking traits; and
provide feedback based at least in part on a quantity of detected instances of each speaking trait, the feedback being indicative of the user's speaking the speech sample exhibiting one or more criteria. 9. The set of one or more computing devices of claim 8, wherein the one or more computer-readable media comprise further computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to refine the first set of audio constructs by:
detecting instances of the first speaking trait in a second set of multiple speech samples; receiving identification of instances of the first speaking trait in the second set of multiple speech samples; comparing the detected instances to the received identification of instances; identifying a first audio construct, from among the first set of audio constructs, that is not present in instances of the first speaking trait that were identified, but is present in instances of the first speaking trait that were detected; and refining the first set of audio constructs by removing the first audio construct from the first set of audio constructs. 10. The set of one or more computing devices of claim 8, wherein the one or more computer-readable media comprise further computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to refine the first set of audio constructs by:
detecting instances of the first speaking trait in a second set of multiple speech samples; receiving identification of instances of the first speaking trait in the second set of multiple speech samples; comparing the detected instances to the received identification of instances; identifying a first audio construct that is present in instances of the first speaking trait that were identified, but is not present in instances of the first speaking trait that were detected; and refining the first set of audio constructs by adding the first audio construct to the first set of audio constructs. 11. The set of one or more computing devices of claim 8, wherein the multiple speaking traits comprise at least two of: vocal fry, uptalk, tag questions, filler sounds or hedge words. 12. The set of one or more computing devices of claim 8, wherein the one or more computer-readable media comprise further computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to:
generate, for each of the multiple speaking traits, speaking trait quantity scores by comparing the quantity of detected instances of each speaking trait to a corresponding predetermined speaking trait quantity threshold; generate, for each of the speaking trait quantity scores, weighted speaking trait quantity scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait quantity weight, each predetermined speaking trait quantity weight being based on an empirically-derived correlation between one of the multiple speaking traits and one or more criteria for which the user's speaking the speech sample is being evaluated; amalgamate the weighted speaking trait quantity scores into an amalgamated speaking trait quantity score; generate, for each of the speaking trait quantity scores, social value weighted speaking trait scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait social value weight that was manually set; and amalgamate the social value weighted speaking trait scores into an amalgamated social value weighted speaking trait score; wherein the providing the feedback is further based on a combination of the amalgamated speaking trait quantity score and the amalgamated social value weighted speaking trait score. 13. The set of one or more computing devices of claim 12, wherein the speaking trait quantity threshold is based on a quantity of instances of that speaking trait that were detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria. 14. 
The set of one or more computing devices of claim 12, wherein the empirically-derived correlation between the one of the multiple speaking traits and the one or more criteria for which the user's speaking the speech sample is being evaluated is based on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria and on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as poorly exhibiting the one or more criteria. 15. A method of providing computer-generated feedback that is indicative of a user's speaking a speech sample exhibiting one or more criteria, the method comprising the steps of:
receive speech sample data obtained by digitizing audio generated by the user speaking the speech sample; detect instances of individual ones of multiple speaking traits in the speech sample data, each speaking trait being both different from, and independent of, others of the multiple speaking traits; generate, for each of the multiple speaking traits, speaking trait quantity scores by comparing a quantity of detected instances of each speaking trait to a corresponding predetermined speaking trait quantity threshold; generate, for each of the speaking trait quantity scores, weighted speaking trait quantity scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait quantity weight, each predetermined speaking trait quantity weight being based on an empirically-derived correlation between one of the multiple speaking traits and one or more criteria for which the user's speaking the speech sample is being evaluated; amalgamate the weighted speaking trait quantity scores into an amalgamated speaking trait quantity score; generate, for each of the speaking trait quantity scores, social value weighted speaking trait scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait social value weight that was manually set; amalgamate the social value weighted speaking trait scores into an amalgamated social value weighted speaking trait score; and provide feedback based on a combination of the amalgamated speaking trait quantity score and the amalgamated social value weighted speaking trait score, the feedback being indicative of the user's speaking the speech sample exhibiting one or more criteria. 16. The method of claim 15, further comprising the steps of:
receive a first set of multiple speech samples, each identified to comprise a first speaking trait from among the multiple speaking traits; identify a first set of audio constructs that are common among the first set of multiple speech samples; associate the first set of audio constructs with the first speaking trait such that identification of one instance of the first set of audio constructs in a new speech sample identifies that new speech sample as comprising at least one instance of the first speaking trait; and repeat the receiving, the identifying, and the associating for each of others of the multiple speaking traits; wherein the detecting the instances of the individual ones of the multiple speaking traits in the speech sample data is performed by reference to the identified audio constructs that were associated with the individual ones of multiple speaking traits. 17. The method of claim 15, wherein the multiple speaking traits comprise at least two of: vocal fry, uptalk, tag questions, filler sounds or hedge words. 18. The method of claim 15, wherein the multiple speaking traits comprise: vocal fry, uptalk, tag questions, filler sounds and hedge words. 19. The method of claim 15, wherein the speaking trait quantity threshold is based on a quantity of instances of that speaking trait that were detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria. 20. 
The method of claim 15, wherein the empirically-derived correlation between the one of the multiple speaking traits and the one or more criteria for which the user's speaking the speech sample is being evaluated is based on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria and on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as poorly exhibiting the one or more criteria. | Computer-generated feedback directed to whether user speech input meets subjective criteria is provided through the evaluation of multiple speaking traits. Initially, discrete instances of various multiple speaking traits are detected within the user speech input provided. Such multiple speaking traits include vocal fry, tag questions, uptalk, filler sounds and hedge words. Audio constructs indicative of individual instances of speaking traits are isolated and identified from appropriate samples. Speaking trait detectors then utilize such audio constructs to identify individual instances of speaking traits within the spoken input. The resulting quantities are scored based on reference to predetermined threshold quantities. The individual speaking trait scores are then amalgamated utilizing a weighting that is derived based on empirical relationships between those speaking traits and the criteria for which the user's speech input is being evaluated. Further adjustments thereof can be made by separately, manually weighting the previously determined quantities.1. A computing device comprising:
one or more processing units; and one or more computer-readable media comprising computer-executable instructions which, when executed by the one or more processing units, cause the computing device to:
receive speech sample data obtained by digitizing audio generated by a user speaking a speech sample;
detect instances of individual ones of multiple speaking traits in the speech sample data, each speaking trait being both different from, and independent of, others of the multiple speaking traits;
generate, for each of the multiple speaking traits, speaking trait quantity scores by comparing a quantity of detected instances of each speaking trait to a corresponding predetermined speaking trait quantity threshold;
generate, for each of the speaking trait quantity scores, weighted speaking trait quantity scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait quantity weight, each predetermined speaking trait quantity weight being based on an empirically-derived correlation between one of the multiple speaking traits and one or more criteria for which the user's speaking the speech sample is being evaluated;
amalgamate the weighted speaking trait quantity scores into an amalgamated speaking trait quantity score;
generate, for each of the speaking trait quantity scores, social value weighted speaking trait scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait social value weight that was manually set;
amalgamate the social value weighted speaking trait scores into an amalgamated social value weighted speaking trait score; and
provide feedback based on a combination of the amalgamated speaking trait quantity score and the amalgamated social value weighted speaking trait score, the feedback being indicative of the user's speaking the speech sample exhibiting one or more criteria. 2. The computing device of claim 1, wherein the multiple speaking traits comprise at least two of: vocal fry, uptalk, tag questions, filler sounds or hedge words. 3. The computing device of claim 1, wherein the multiple speaking traits comprise: vocal fry, uptalk, tag questions, filler sounds and hedge words. 4. The computing device of claim 1, wherein the speaking trait quantity threshold is based on a quantity of instances of that speaking trait that were detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria. 5. The computing device of claim 1, wherein the empirically-derived correlation between the one of the multiple speaking traits and the one or more criteria for which the user's speaking the speech sample is being evaluated is based on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria and on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as poorly exhibiting the one or more criteria. 6. The computing device of claim 1, wherein the computer-executable instructions for detecting instances of the individual ones of the multiple speaking traits in the speech sample data comprise computer-executable instructions, which, when executed by the one or more processing units, cause the computing device to match portions of the audio generated by the user speaking the speech sample to audio constructs that were previously determined to be representative of the individual ones of the multiple speaking traits. 7. 
The computing device of claim 6, wherein the audio constructs that were previously determined to be representative of the individual ones of the multiple speaking traits were identified by identifying similarities among a set of speech samples that were previously determined to comprise the individual ones of the multiple speaking traits. 8. A set of one or more computing devices, comprising, in aggregate:
one or more processing units; and one or more computer-readable media comprising computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to:
receive a first set of multiple speech samples, each identified to comprise a first speaking trait;
identify a first set of audio constructs that are common among the first set of multiple speech samples;
associate the first set of audio constructs with the first speaking trait such that identification of one instance of the first set of audio constructs in a new speech sample causes the set of computing devices to identify that new speech sample as comprising at least one instance of the first speaking trait;
repeat the receiving, the identifying, and the associating for each of others of multiple speaking traits, each speaking trait being both different from, and independent of, others of the multiple speaking traits, the multiple speaking traits also comprising the first speaking trait;
receive speech sample data obtained by digitizing audio generated by a user speaking a speech sample;
detect instances of individual ones of the multiple speaking traits in the speech sample data by reference to the identified audio constructs that were associated with the individual ones of multiple speaking traits; and
provide feedback based at least in part on a quantity of detected instances of each speaking trait, the feedback being indicative of the user's speaking the speech sample exhibiting one or more criteria. 9. The set of one or more computing devices of claim 8, wherein the one or more computer-readable media comprise further computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to refine the first set of audio constructs by:
detecting instances of the first speaking trait in a second set of multiple speech samples; receiving identification of instances of the first speaking trait in the second set of multiple speech samples; comparing the detected instances to the received identification of instances; identifying a first audio construct, from among the first set of audio constructs, that is not present in instances of the first speaking trait that were identified, but is present in instances of the first speaking trait that were detected; and refining the first set of audio constructs by removing the first audio construct from the first set of audio constructs. 10. The set of one or more computing devices of claim 8, wherein the one or more computer-readable media comprise further computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to refine the first set of audio constructs by:
detecting instances of the first speaking trait in a second set of multiple speech samples; receiving identification of instances of the first speaking trait in the second set of multiple speech samples; comparing the detected instances to the received identification of instances; identifying a first audio construct that is present in instances of the first speaking trait that were identified, but is not present in instances of the first speaking trait that were detected; and refining the first set of audio constructs by adding the first audio construct to the first set of audio constructs. 11. The set of one or more computing devices of claim 8, wherein the multiple speaking traits comprise at least two of: vocal fry, uptalk, tag questions, filler sounds or hedge words. 12. The set of one or more computing devices of claim 8, wherein the one or more computer-readable media comprise further computer-executable instructions which, when executed by the one or more processing units, cause the set of computing devices to:
generate, for each of the multiple speaking traits, speaking trait quantity scores by comparing the quantity of detected instances of each speaking trait to a corresponding predetermined speaking trait quantity threshold; generate, for each of the speaking trait quantity scores, weighted speaking trait quantity scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait quantity weight, each predetermined speaking trait quantity weight being based on an empirically-derived correlation between one of the multiple speaking traits and one or more criteria for which the user's speaking the speech sample is being evaluated; amalgamate the weighted speaking trait quantity scores into an amalgamated speaking trait quantity score; generate, for each of the speaking trait quantity scores, social value weighted speaking trait scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait social value weight that was manually set; and amalgamate the social value weighted speaking trait scores into an amalgamated social value weighted speaking trait score; wherein the providing the feedback is further based on a combination of the amalgamated speaking trait quantity score and the amalgamated social value weighted speaking trait score. 13. The set of one or more computing devices of claim 12, wherein the speaking trait quantity threshold is based on a quantity of instances of that speaking trait that were detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria. 14. 
The set of one or more computing devices of claim 12, wherein the empirically-derived correlation between the one of the multiple speaking traits and the one or more criteria for which the user's speaking the speech sample is being evaluated is based on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria and on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as poorly exhibiting the one or more criteria. 15. A method of providing computer-generated feedback that is indicative of a user's speaking a speech sample exhibiting one or more criteria, the method comprising the steps of:
receive speech sample data obtained by digitizing audio generated by the user speaking the speech sample; detect instances of individual ones of multiple speaking traits in the speech sample data, each speaking trait being both different from, and independent of, others of the multiple speaking traits; generate, for each of the multiple speaking traits, speaking trait quantity scores by comparing a quantity of detected instances of each speaking trait to a corresponding predetermined speaking trait quantity threshold; generate, for each of the speaking trait quantity scores, weighted speaking trait quantity scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait quantity weight, each predetermined speaking trait quantity weight being based on an empirically-derived correlation between one of the multiple speaking traits and one or more criteria for which the user's speaking the speech sample is being evaluated; amalgamate the weighted speaking trait quantity scores into an amalgamated speaking trait quantity score; generate, for each of the speaking trait quantity scores, social value weighted speaking trait scores by weighting each of the speaking trait quantity scores with a corresponding predetermined speaking trait social value weight that was manually set; amalgamate the social value weighted speaking trait scores into an amalgamated social value weighted speaking trait score; and provide feedback based on a combination of the amalgamated speaking trait quantity score and the amalgamated social value weighted speaking trait score, the feedback being indicative of the user's speaking the speech sample exhibiting one or more criteria. 16. The method of claim 15, further comprising the steps of:
receive a first set of multiple speech samples, each identified to comprise a first speaking trait from among the multiple speaking traits; identify a first set of audio constructs that are common among the first set of multiple speech samples; associate the first set of audio constructs with the first speaking trait such that identification of one instance of the first set of audio constructs in a new speech sample identifies that new speech sample as comprising at least one instance of the first speaking trait; and repeat the receiving, the identifying, and the associating for each of others of the multiple speaking traits; wherein the detecting the instances of the individual ones of the multiple speaking traits in the speech sample data is performed by reference to the identified audio constructs that were associated with the individual ones of multiple speaking traits. 17. The method of claim 15, wherein the multiple speaking traits comprise at least two of: vocal fry, uptalk, tag questions, filler sounds or hedge words. 18. The method of claim 15, wherein the multiple speaking traits comprise: vocal fry, uptalk, tag questions, filler sounds and hedge words. 19. The method of claim 15, wherein the speaking trait quantity threshold is based on a quantity of instances of that speaking trait that were detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria. 20. 
The method of claim 15, wherein the empirically-derived correlation between the one of the multiple speaking traits and the one or more criteria for which the user's speaking the speech sample is being evaluated is based on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as highly exhibiting the one or more criteria and on a quantity of the one of the multiple speaking traits detected in a set of speech samples that were previously categorized as poorly exhibiting the one or more criteria. | 2,600 |
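Claim 1 of the record above describes a two-track scoring scheme: per-trait quantity scores compared against thresholds, one amalgamation using empirically derived weights, a second using manually set social-value weights, and feedback based on the combination. A minimal sketch follows; the scoring formula, the equal-split combination, and every number are illustrative assumptions, not values from the patent.

```python
def speaking_trait_feedback(counts, thresholds, empirical_weights, social_weights):
    """Score each speaking trait's detected-instance count against its
    predetermined threshold, then amalgamate the shared quantity scores
    twice: once with empirically derived weights, once with manually set
    social-value weights, combining both into a single feedback score."""
    # Quantity score: 1.0 when no instances detected, 0.0 at/over threshold.
    scores = {t: max(0.0, 1.0 - counts[t] / thresholds[t]) for t in counts}
    empirical = sum(scores[t] * empirical_weights[t] for t in scores)
    social = sum(scores[t] * social_weights[t] for t in scores)
    return 0.5 * (empirical + social)


counts = {"vocal_fry": 2, "uptalk": 4, "filler": 10}
thresholds = {"vocal_fry": 4, "uptalk": 8, "filler": 20}
w_empirical = {"vocal_fry": 0.5, "uptalk": 0.3, "filler": 0.2}
w_social = {"vocal_fry": 0.2, "uptalk": 0.4, "filler": 0.4}
print(speaking_trait_feedback(counts, thresholds, w_empirical, w_social))  # 0.5
```

Keeping the quantity scores shared between the two weightings mirrors the claim language, where both the empirical weights and the social-value weights are applied to the same speaking trait quantity scores.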
10,585 | 10,585 | 15,831,721 | 2,689 | A hand-held device having a touch sensitive surface uses a relative distance from an origin location to each of a plurality of touch zones of the touch sensitive surface activated by a user to select a one of the plurality of touch zones as being intended for activation by the user. | 1. A method for using a controlled device to control an operating state of a controlling device in communication with the controlled device, comprising:
storing by the controlled device in a deferred transfer queue a message communication; at a time subsequent to the message communication being stored in the deferred transfer queue, receiving by the controlled device from the controlling device a first communication; in response to the controlled device receiving from the controlling device the first communication, transmitting from the controlled device to the controlling device the message communication stored in the deferred transfer queue; and in response to the controlling device receiving the message communication from the controlled device, causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state. 2. The method as recited in claim 1, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first user interface state to a second user interface state. 3. The method as recited in claim 2, wherein the first user interface state comprises a state in which at least one input element of the controlling device is disabled and the second user interface state comprises a state in which the at least one input element of the controlling device is enabled. 4. The method as recited in claim 3, wherein the at least one input element of the controlling device comprises a graphical user interface input element presented in a display of the controlling device. 5. The method as recited in claim 3, wherein the at least one input element of the controlling device comprises a hard key of the controlling device. 6. 
The method as recited in claim 1, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first command transmitting state to a second command transmitting state. 7. The method as recited in claim 6, wherein the first command transmitting state comprises a state in which the controlling device uses a first transmitter to transmit command communications to an intended target device and the second command transmitting state comprises a state in which the controlling device uses a second transmitter, different than the first transmitter, to transmit command communications to an intended target device. 8. The method as recited in claim 7, wherein the first transmitter comprises an infrared transmitter and the second transmitter comprises a radio frequency transmitter. 9. The method as recited in claim 1, wherein the first communication comprises an explicit command to the controlled device to transmit from the controlled device to the controlling device the message communication stored in the deferred transfer queue. 10. The method as recited in claim 9, wherein the controlling device is caused to transmit the first communication in response to the controlling device exiting a quiescent operating state of the controlling device. 11. 
The method as recited in claim 1, further comprising the steps of receiving by the controlled device, at a time subsequent to the message communication being stored in the deferred transfer queue and prior to the controlled device receiving from the controlling device the first communication, a second communication from the controlling device, in response to the controlled device receiving from the controlling device the second communication, transmitting from the controlled device to the controlling device a message having data indicative of a status of the deferred transfer queue, and in response to the controlling device receiving the message having data indicative of a status of the deferred transfer queue, causing the controlling device to transmit to the controlled device the first communication. 12. The method as recited in claim 11, wherein the second communication comprises a command for causing the controlled device to perform a functional operation and the first communication comprises an explicit command to the controlled device to transmit from the controlled device to the controlling device the message communication stored in the deferred transfer queue. 13. The method as recited in claim 12, wherein the explicit command comprises a fetch command. 14. The method as recited in claim 11, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first user interface state to a second user interface state. 15. The method as recited in claim 14, wherein the first user interface state comprises a state in which at least one input element of the controlling device is disabled and the second user interface state comprises a state in which the at least one input element of the controlling device is enabled. 16. 
The method as recited in claim 15, wherein the at least one input element of the controlling device comprises a graphical user interface input element presented in a display of the controlling device. 17. The method as recited in claim 15, wherein the at least one input element of the controlling device comprises a hard key of the controlling device. 18. The method as recited in claim 11, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first command transmitting state to a second command transmitting state. 19. The method as recited in claim 18, wherein the first command transmitting state comprises a state in which the controlling device uses a first transmitter to transmit command communications to an intended target device and the second command transmitting state comprises a state in which the controlling device uses a second transmitter, different than the first transmitter, to transmit command communications to an intended target device. 20. The method as recited in claim 19, wherein the first transmitter comprises an infrared transmitter and the second transmitter comprises a radio frequency transmitter. | A hand-held device having a touch sensitive surface uses a relative distance from an origin location to each of a plurality of touch zones of the touch sensitive surface activated by a user to select a one of the plurality of touch zones as being intended for activation by the user.1. A method for using a controlled device to control an operating state of a controlling device in communication with the controlled device, comprising:
storing by the controlled device in a deferred transfer queue a message communication; at a time subsequent to the message communication being stored in the deferred transfer queue, receiving by the controlled device from the controlling device a first communication; in response to the controlled device receiving from the controlling device the first communication, transmitting from the controlled device to the controlling device the message communication stored in the deferred transfer queue; and in response to the controlling device receiving the message communication from the controlled device, causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state. 2. The method as recited in claim 1, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first user interface state to a second user interface state. 3. The method as recited in claim 2, wherein the first user interface state comprises a state in which at least one input element of the controlling device is disabled and the second user interface state comprises a state in which the at least one input element of the controlling device is enabled. 4. The method as recited in claim 3, wherein the at least one input element of the controlling device comprises a graphical user interface input element presented in a display of the controlling device. 5. The method as recited in claim 3, wherein the at least one input element of the controlling device comprises a hard key of the controlling device. 6. 
The method as recited in claim 1, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first command transmitting state to a second command transmitting state. 7. The method as recited in claim 6, wherein the first command transmitting state comprises a state in which the controlling device uses a first transmitter to transmit command communications to an intended target device and the second command transmitting state comprises a state in which the controlling device uses a second transmitter, different than the first transmitter, to transmit command communications to an intended target device. 8. The method as recited in claim 7, wherein the first transmitter comprises an infrared transmitter and the second transmitter comprises a radio frequency transmitter. 9. The method as recited in claim 1, wherein the first communication comprises an explicit command to the controlled device to transmit from the controlled device to the controlling device the message communication stored in the deferred transfer queue. 10. The method as recited in claim 9, wherein the controlling device is caused to transmit the first communication in response to the controlling device exiting a quiescent operating state of the controlling device. 11. 
The method as recited in claim 1, further comprising the steps of receiving by the controlled device, at a time subsequent to the message communication being stored in the deferred transfer queue and prior to the controlled device receiving from the controlling device the first communication, a second communication from the controlling device, in response to the controlled device receiving from the controlling device the second communication, transmitting from the controlled device to the controlling device a message having data indicative of a status of the deferred transfer queue, and in response to the controlling device receiving the message having data indicative of a status of the deferred transfer queue, causing the controlling device to transmit to the controlled device the first communication. 12. The method as recited in claim 11, wherein the second communication comprises a command for causing the controlled device to perform a functional operation and the first communication comprises an explicit command to the controlled device to transmit from the controlled device to the controlling device the message communication stored in the deferred transfer queue. 13. The method as recited in claim 12, wherein the explicit command comprises a fetch command. 14. The method as recited in claim 11, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first user interface state to a second user interface state. 15. The method as recited in claim 14, wherein the first user interface state comprises a state in which at least one input element of the controlling device is disabled and the second user interface state comprises a state in which the at least one input element of the controlling device is enabled. 16. 
The method as recited in claim 15, wherein the at least one input element of the controlling device comprises a graphical user interface input element presented in a display of the controlling device. 17. The method as recited in claim 15, wherein the at least one input element of the controlling device comprises a hard key of the controlling device. 18. The method as recited in claim 11, wherein causing the controlling device to transition from a first controlling device operating state to a second controlling device operating state comprises causing the controlling device to transition from a first command transmitting state to a second command transmitting state. 19. The method as recited in claim 18, wherein the first command transmitting state comprises a state in which the controlling device uses a first transmitter to transmit command communications to an intended target device and the second command transmitting state comprises a state in which the controlling device uses a second transmitter, different than the first transmitter, to transmit command communications to an intended target device. 20. The method as recited in claim 19, wherein the first transmitter comprises an infrared transmitter and the second transmitter comprises a radio frequency transmitter. | 2,600 |
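Claims 1, 9-11, and 13 of the row above describe a deferred-transfer-queue handshake: the controlled device queues a message while the controlling device (e.g., a remote) is quiescent, and a later fetch command both drains the queue and drives the controlling device's state transition. A minimal sketch under assumed names (`ControlledDevice`, `ControllingDevice`, and the `"enable_ui"` message are all hypothetical, not from the patent):

```python
from collections import deque

class ControlledDevice:
    """Holds a deferred transfer queue of messages for the controlling device."""
    def __init__(self):
        self.deferred_queue = deque()

    def store(self, message):
        # Claim 1: a message communication is stored in the deferred queue.
        self.deferred_queue.append(message)

    def handle_fetch(self):
        # Claims 9/13: an explicit fetch command makes the controlled device
        # transmit the queued message communications.
        messages = list(self.deferred_queue)
        self.deferred_queue.clear()
        return messages

class ControllingDevice:
    def __init__(self, controlled):
        self.controlled = controlled
        self.state = "UI_DISABLED"  # first operating state (claims 2-3)

    def wake_from_quiescent(self):
        # Claim 10: exiting the quiescent state triggers the fetch.
        for message in self.controlled.handle_fetch():
            if message == "enable_ui":
                self.state = "UI_ENABLED"  # second operating state

hub = ControlledDevice()
remote = ControllingDevice(hub)
hub.store("enable_ui")        # deferred while the remote is quiescent
remote.wake_from_quiescent()  # fetch drains the queue and transitions state
print(remote.state)
```

The same skeleton extends to claim 11's two-step variant, where a first functional command elicits a queue-status message that in turn prompts the fetch.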
10,586 | 10,586 | 15,289,091 | 2,626 | This application is directed to detecting touch events on a touch sensing surface coupled to a capacitive sense array and one or more force sensors. The capacitive sense array includes a plurality of sense electrodes configured to provide a plurality of capacitive sense signals. The force electrodes are configured to provide one or more force signals. In accordance with the capacitive sense signals, one or more candidate touches are detected on the touch sensing surface, and the detected candidate touches include a first candidate touch. When it is determined that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface, the first candidate touch is associated with the grip on the edge area of the touch sensing surface, and is designated as an invalid touch. | 1. A method of detecting touch events on a touch sensing surface coupled to a capacitive sense array, comprising:
at a processing device coupled to a capacitive sense array and one or more force electrodes, wherein the capacitive sense array includes a plurality of sense electrodes:
obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array;
in accordance with the plurality of capacitive sense signals, detecting one or more candidate touches on the touch sensing surface, the one or more candidate touches including a first candidate touch;
obtaining one or more force signals from the one or more force electrodes; and
determining that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface;
in accordance with the determination that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface:
associating the first candidate touch with the grip on the edge area of the touch sensing surface; and
designating the first candidate touch as an invalid touch. 2. The method of claim 1, further comprising:
determining a force shape corresponding to the force associated with the grip on the edge area based on the one or more force signals, wherein the force shape indicates a physical press on the edge area of the touch sensing surface, thereby showing that the force is associated with the grip on the edge area. 3. The method of claim 2, wherein the force shape is represented by a force shape matrix having a plurality of elements corresponding to the one or more force electrodes, and each element of the force shape matrix is associated with a displacement of the touch sensing surface at a corresponding force electrode. 4. The method of claim 1, wherein determining that the force is associated with a grip of the edge area further comprises at least one of:
determining that the force applied on the edge area lasts for a duration of time that exceeds a predetermined grip time threshold; determining that the first touch associated with the grip of the edge area has a touch area that exceeds a predetermined grip area threshold; and determining that force associated with the one or more force signals exceeds a grip force threshold. 5. The method of claim 1, further comprising:
determining that the one or more touches include a plurality of substantially synchronous touches. 6. The method of claim 1, wherein the one or more force signals are associated with one or more capacitances formed between the one or more force electrodes and a ground, and obtaining one or more force signals from the one or more force electrodes further comprises at least one of:
measuring the one or more force signals based on self-capacitances of the one or more force electrodes with respect to the ground; and measuring the one or more force signals based on mutual capacitances of the one or more force electrodes. 7. The method of claim 1, wherein the one or more force electrodes and a ground are separated by a compressible gap having a gap thickness that varies in response to a force on the touch sensing surface. 8. The method of claim 7, wherein the compressible gap includes an air gap. 9. The method of claim 7, wherein the one or more force electrodes and the capacitive sense array are disposed on one side of the compressible gap. 10. The method of claim 7, wherein the one or more force electrodes and the capacitive sense array are separated by the compressible gap. 11. A processing device, comprising:
a processing core; a capacitance sense circuit; and memory storing one or more programs configured for execution by the processing core, the one or more programs comprising instructions for:
obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array;
in accordance with the plurality of capacitive sense signals, detecting one or more candidate touches on the touch sensing surface, the one or more candidate touches including a first candidate touch;
obtaining one or more force signals from the one or more force electrodes; and
determining that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface;
in accordance with the determination that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface:
associating the first candidate touch with the grip on the edge area of the touch sensing surface; and
designating the first candidate touch as an invalid touch. 12. The processing device of claim 11, wherein the processing device includes a plurality of capacitive sense channels, and the processing device is further configured to:
electrically couple a first subset of the plurality of capacitive sense channels to the plurality of sense electrodes of the capacitive sense array, wherein the plurality of capacitive sense signals are measured from the first subset of the plurality of capacitive sense channels. 13. The processing device of claim 12, wherein the processing device is further configured to:
electrically couple a second subset of the plurality of capacitive sense channels to the one or more force electrodes, wherein the one or more force signals are measured from the second subset of the plurality of capacitive sense channels. 14. The processing device of claim 11, wherein the processing device is further configured to:
determine a force shape corresponding to the force associated with the grip on the edge area based on the one or more force signals, wherein the force shape indicates a physical press on the edge area of the touch sensing surface, thereby showing that the force is associated with the grip on the edge area. 15. The processing device of claim 11, wherein the processing device is configured to determine that the force is associated with a grip of the edge area by at least one of:
determining that the force applied on the edge area lasts for a duration of time that exceeds a predetermined grip time threshold; determining that the first touch associated with the grip of the edge area has a touch area that exceeds a predetermined grip area threshold; and determining that force associated with the one or more force signals exceeds a grip force threshold. 16. An electronic system, comprising:
a capacitive sense array coupled to a touch sensing surface; one or more force electrodes; and a processing device coupled to the capacitive sense array and the one or more force electrodes, wherein the processing device is configured to:
obtain a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array;
in accordance with the plurality of capacitive sense signals, detect one or more candidate touches on the touch sensing surface, the one or more candidate touches including a first candidate touch;
obtain one or more force signals from the one or more force electrodes; and
determine that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface;
in accordance with the determination that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface:
associate the first candidate touch with the grip on the edge area of the touch sensing surface; and
designate the first candidate touch as an invalid touch. 17. The electronic system of claim 16, wherein the processing device is configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array by at least one of:
measuring the plurality of capacitive sense signals based on self-capacitances of the plurality of sense electrodes with respect to a ground; and measuring the plurality of capacitive sense signals based on mutual capacitances of the plurality of sense electrodes. 18. The electronic system of claim 16, wherein the processing device further comprises:
a touch sense circuit configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes; a force sense circuit configured to obtain the one or more force signals from the one or more force electrodes; and a controller, coupled to the force sense circuit and the capacitive sense circuit, the controller being configured to synchronize the plurality of capacitive sense signals and obtain the one or more force signals for further processing. 19. The electronic system of claim 16, wherein the processing device is further configured to determine that the one or more touches include a plurality of substantially synchronous touches. 20. The electronic system of claim 16, wherein the one or more force electrodes include a single force electrode. | This application is directed to detecting touch events on a touch sensing surface coupled to a capacitive sense array and one or more force sensors. The capacitive sense array includes a plurality of sense electrodes configured to provide a plurality of capacitive sense signals. The force electrodes are configured to provide one or more force signals. In accordance with the capacitive sense signals, one or more candidate touches are detected on the touch sensing surface, and the detected candidate touches include a first candidate touch. When it is determined that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface, the first candidate touch is associated with the grip on the edge area of the touch sensing surface, and is designated as an invalid touch.1. A method of detecting touch events on a touch sensing surface coupled to a capacitive sense array, comprising:
at a processing device coupled to a capacitive sense array and one or more force electrodes, wherein the capacitive sense array includes a plurality of sense electrodes:
obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array;
in accordance with the plurality of capacitive sense signals, detecting one or more candidate touches on the touch sensing surface, the one or more candidate touches including a first candidate touch;
obtaining one or more force signals from the one or more force electrodes; and
determining that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface;
in accordance with the determination that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface:
associating the first candidate touch with the grip on the edge area of the touch sensing surface; and
designating the first candidate touch as an invalid touch. 2. The method of claim 1, further comprising:
determining a force shape corresponding to the force associated with the grip on the edge area based on the one or more force signals, wherein the force shape indicates a physical press on the edge area of the touch sensing surface, thereby showing that the force is associated with the grip on the edge area. 3. The method of claim 2, wherein the force shape is represented by a force shape matrix having a plurality of elements corresponding to the one or more force electrodes, and each element of the force shape matrix is associated with a displacement of the touch sensing surface at a corresponding force electrode. 4. The method of claim 1, wherein determining that the force is associated with a grip of the edge area further comprises at least one of:
determining that the force applied on the edge area lasts for a duration of time that exceeds a predetermined grip time threshold; determining that the first touch associated with the grip of the edge area has a touch area that exceeds a predetermined grip area threshold; and determining that force associated with the one or more force signals exceeds a grip force threshold. 5. The method of claim 1, further comprising:
determining that the one or more touches include a plurality of substantially synchronous touches. 6. The method of claim 1, wherein the one or more force signals are associated with one or more capacitances formed between the one or more force electrodes and a ground, and obtaining one or more force signals from the one or more force electrodes further comprises at least one of:
measuring the one or more force signals based on self-capacitances of the one or more force electrodes with respect to the ground; and measuring the one or more force signals based on mutual capacitances of the one or more force electrodes. 7. The method of claim 1, wherein the one or more force electrodes and a ground are separated by a compressible gap having a gap thickness that varies in response to a force on the touch sensing surface. 8. The method of claim 7, wherein the compressible gap includes an air gap. 9. The method of claim 7, wherein the one or more force electrodes and the capacitive sense array are disposed on one side of the compressible gap. 10. The method of claim 7, wherein the one or more force electrodes and the capacitive sense array are separated by the compressible gap. 11. A processing device, comprising:
a processing core; a capacitance sense circuit; and memory storing one or more programs configured for execution by the processing core, the one or more programs comprising instructions for:
obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array;
in accordance with the plurality of capacitive sense signals, detecting one or more candidate touches on the touch sensing surface, the one or more candidate touches including a first candidate touch;
obtaining one or more force signals from the one or more force electrodes; and
determining that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface;
in accordance with the determination that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface:
associating the first candidate touch with the grip on the edge area of the touch sensing surface; and
designating the first candidate touch as an invalid touch. 12. The processing device of claim 11, wherein the processing device includes a plurality of capacitive sense channels, and the processing device is further configured to:
electrically couple a first subset of the plurality of capacitive sense channels to the plurality of sense electrodes of the capacitive sense array, wherein the plurality of capacitive sense signals are measured from the first subset of the plurality of capacitive sense channels. 13. The processing device of claim 12, wherein the processing device is further configured to:
electrically couple a second subset of the plurality of capacitive sense channels to the one or more force electrodes, wherein the one or more force signals are measured from the second subset of the plurality of capacitive sense channels. 14. The processing device of claim 11, wherein the processing device is further configured to:
determine a force shape corresponding to the force associated with the grip on the edge area based on the one or more force signals, wherein the force shape indicates a physical press on the edge area of the touch sensing surface, thereby showing that the force is associated with the grip on the edge area. 15. The processing device of claim 11, wherein the processing device is configured to determine that the force is associated with a grip of the edge area by at least one of:
determining that the force applied on the edge area lasts for a duration of time that exceeds a predetermined grip time threshold; determining that the first touch associated with the grip of the edge area has a touch area that exceeds a predetermined grip area threshold; and determining that force associated with the one or more force signals exceeds a grip force threshold. 16. An electronic system, comprising:
a capacitive sense array coupled to a touch sensing surface; one or more force electrodes; and a processing device coupled to the capacitive sense array and the one or more force electrodes, wherein the processing device is configured to:
obtain a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array;
in accordance with the plurality of capacitive sense signals, detect one or more candidate touches on the touch sensing surface, the one or more candidate touches including a first candidate touch;
obtain one or more force signals from the one or more force electrodes; and
determine that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface;
in accordance with the determination that force associated with the one or more force signals is associated with a grip on an edge area of the touch sensing surface:
associate the first candidate touch with the grip on the edge area of the touch sensing surface; and
designate the first candidate touch as an invalid touch. 17. The electronic system of claim 16, wherein the processing device is configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array by at least one of:
measuring the plurality of capacitive sense signals based on self-capacitances of the plurality of sense electrodes with respect to a ground; and measuring the plurality of capacitive sense signals based on mutual capacitances of the plurality of sense electrodes. 18. The electronic system of claim 16, wherein the processing device further comprises:
a touch sense circuit configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes; a force sense circuit configured to obtain the one or more force signals from the one or more force electrodes; and a controller, coupled to the force sense circuit and the capacitive sense circuit, the controller being configured to synchronize the plurality of capacitive sense signals and obtain the one or more force signals for further processing. 19. The electronic system of claim 16, wherein the processing device is further configured to determine that the one or more touches include a plurality of substantially synchronous touches. 20. The electronic system of claim 16, wherein the one or more force electrodes include a single force electrode. | 2,600 |
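Claims 1 and 4 of the row above lay out a testable decision flow: a candidate touch in the edge area is designated invalid when the associated force, touch duration, or touch area indicates a grip. A minimal sketch follows; the threshold values, zone names, and dictionary fields are hypothetical placeholders, not values from the patent:

```python
# Sketch (assumed thresholds/field names): invalidate edge-grip candidate
# touches by combining capacitive candidates with a force-sensor reading.

GRIP_TIME_THRESHOLD = 0.5   # seconds, hypothetical (claim 4, duration test)
GRIP_AREA_THRESHOLD = 150   # sensor units, hypothetical (claim 4, area test)
GRIP_FORCE_THRESHOLD = 2.0  # newtons, hypothetical (claim 4, force test)

def is_grip(touch, force, edge_zone):
    """A candidate touch on the edge is a grip if any claim-4 test holds."""
    if touch["position"] not in edge_zone:
        return False
    return (touch["duration"] > GRIP_TIME_THRESHOLD
            or touch["area"] > GRIP_AREA_THRESHOLD
            or force > GRIP_FORCE_THRESHOLD)

def filter_touches(candidates, force, edge_zone):
    """Designate edge-grip candidates as invalid; keep the rest (claim 1)."""
    return [t for t in candidates if not is_grip(t, force, edge_zone)]

edge = {"left_edge", "right_edge"}
candidates = [
    {"position": "left_edge", "duration": 1.2, "area": 200},  # grip
    {"position": "center", "duration": 0.1, "area": 40},      # valid tap
]
print(filter_touches(candidates, force=2.5, edge_zone=edge))
```

Claim 4 requires only "at least one of" the three tests, so the `or` chain here is one conforming reading; the force-shape matrix of claims 2-3 would replace the single scalar `force` with per-electrode displacements.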
10,587 | 10,587 | 15,318,997 | 2,612 | A method for presenting an image on a display device ( 100 ) includes modifying the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with higher density of pixels than that in the rest of the image (S 18 ). | 1. A method for presenting an image on a display device, including:
modifying the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with higher density of pixels than that in the rest of the image. 2. The method according to claim 1, wherein the method further including presenting the image on the display device to the viewer through an optical component applying an inverse transformation corresponding to the geometric transformation that has been applied to the image. 3. The method according to claim 2, wherein the method further including:
determining eye gaze position of the viewer; and changing the area of the image on the display device having the higher density of pixels and the position of the optical component in response to the determined eye gaze position. 4. The method according to claim 1, wherein the position of the area of the image on the display device having the higher density of pixels is fixed at a predetermined position on the display device. 5. The method according to claim 1, wherein the display device is a head mounted display (HMD) device. 6. The method according to claim 5, wherein the content has wider area than what can be displayed on the display device, and wherein the method further including:
detecting a head position of the viewer; and selecting a part of the image to be displayed on the display device in response to the detected head position. 7. A display device for presenting an image comprising at least a processor configured to:
modify the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with higher density of pixels than that in the rest of the image. 8. The display device according to claim 7, wherein the display device further comprising an optical component through which the image on the display device is presented to the viewer, wherein the optical component is configured to apply an inverse transformation corresponding to the geometric transformation that has been applied to the image. 9. The display device according to claim 8, wherein the display device further comprising an eye tracking sensor, and wherein the processor is further configured to:
determine eye gaze position of the viewer in cooperation with the eye tracking sensor; and change the area of the image on the display device having the higher density of pixels and the position of the optical component in response to the determined eye gaze position. 10. The display device according to claim 8, wherein the processor is further configured to:
determine eye gaze position of the viewer based on information of Region of Interests incorporated in the content; and change the area of the image on the display device having the higher density of pixels and the position of the optical component in response to the determined eye gaze position. 11. The display device according to claim 9, wherein the display device further comprising an actuator to move the optical component, the position of the optical component is changed by operating the actuator in response to the determined eye gaze position. 12. The display device according to claim 7, wherein the position of the area of the image on the display device having the higher density of pixels is fixed at a predetermined position on the display device. 13. The display device according to claim 7, wherein the display device is a head mounted display (HMD) device. 14. The display device according to claim 13, wherein the display device further comprising a position sensor to detect a head position of the viewer, wherein the image has wider area than what can be displayed on the display device, and wherein the processor is further configured to:
detect a head position of the viewer in cooperation with the position sensor; and select a part of the image to be displayed on the display device in response to the detected head position. | A method for presenting an image on a display device ( 100 ) includes modifying the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with higher density of pixels than that in the rest of the image (S 18 ).1. A method for presenting an image on a display device, including:
modifying the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with higher density of pixels than that in the rest of the image. 2. The method according to claim 1, wherein the method further including presenting the image on the display device to the viewer through an optical component applying an inverse transformation corresponding to the geometric transformation that has been applied to the image. 3. The method according to claim 2, wherein the method further including:
determining eye gaze position of the viewer; and changing the area of the image on the display device having the higher density of pixels and the position of the optical component in response to the determined eye gaze position. 4. The method according to claim 1, wherein the position of the area of the image on the display device having the higher density of pixels is fixed at a predetermined position on the display device. 5. The method according to claim 1, wherein the display device is a head mounted display (HMD) device. 6. The method according to claim 5, wherein the content has wider area than what can be displayed on the display device, and wherein the method further including:
detecting a head position of the viewer; and selecting a part of the image to be displayed on the display device in response to the detected head position. 7. A display device for presenting an image comprising at least a processor configured to:
modify the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with higher density of pixels than that in the rest of the image. 8. The display device according to claim 7, wherein the display device further comprising an optical component through which the image on the display device is presented to the viewer, wherein the optical component is configured to apply an inverse transformation corresponding to the geometric transformation that has been applied to the image. 9. The display device according to claim 8, wherein the display device further comprising an eye tracking sensor, and wherein the processor is further configured to:
determine eye gaze position of the viewer in cooperation with the eye tracking sensor; and change the area of the image on the display device having the higher density of pixels and the position of the optical component in response to the determined eye gaze position. 10. The display device according to claim 8, wherein the processor is further configured to:
determine eye gaze position of the viewer based on information of Region of Interests incorporated in the content; and change the area of the image on the display device having the higher density of pixels and the position of the optical component in response to the determined eye gaze position. 11. The display device according to claim 9, wherein the display device further comprising an actuator to move the optical component, the position of the optical component is changed by operating the actuator in response to the determined eye gaze position. 12. The display device according to claim 7, wherein the position of the area of the image on the display device having the higher density of pixels is fixed at a predetermined position on the display device. 13. The display device according to claim 7, wherein the display device is a head mounted display (HMD) device. 14. The display device according to claim 13, wherein the display device further comprising a position sensor to detect a head position of the viewer, wherein the image has wider area than what can be displayed on the display device, and wherein the processor is further configured to:
detect a head position of the viewer in cooperation with the position sensor; and select a part of the image to be displayed on the display device in response to the detected head position. | 2,600 |
10,588 | 10,588 | 15,783,302 | 2,627 | An apparatus may include a first panel having a first user interface that includes a keyboard. The apparatus may also include a second panel coupled via a hinge to the first panel in a clamshell structure. The second panel may include a first display side to present information in a first display mode when the apparatus is arranged in an open position, and a second display side to present information in a second display mode when the apparatus is arranged in a closed position. Other embodiments are disclosed and claimed. | 1-29. (canceled) 30. An apparatus, comprising:
a processor, a first panel having a first input device that includes a keyboard; a second panel coupled via a hinge to the first panel, the second panel comprising a display panel operably coupled to the processor, the display panel comprising:
a first display side to present information in a first display mode when the apparatus is arranged in an open position; and
a second display side to present information in a second display mode when the apparatus is arranged in a closed position, the second panel arranged to provide a transparent display when the apparatus is in the open position such that an image is visible via the first display side and the second display side; and
a display adjusting module, at least a portion of the display adjusting module comprised in a software module, the display adjusting module to provide information to the processor to cause the display panel to present the image in one of a first orientation or a second orientation when the apparatus is arranged in the open position based on whether a swiping motion input is received from the first display side or the second display side, the display adjusting module to cause, via information provided to the processor the display panel to: present the image in the first orientation responsive to receiving the swiping motion input from the first display side and the apparatus is arranged in the open position, and present the image in the second orientation responsive to receiving the swiping motion input from the second display side and the apparatus is arranged in the open position. | An apparatus may include a first panel having a first user interface that includes a keyboard. The apparatus may also include a second panel coupled via a hinge to the first panel in a clamshell structure. The second panel may include a first display side to present information in a first display mode when the apparatus is arranged in an open position, and a second display side to present information in a second display mode when the apparatus is arranged in a closed position. Other embodiments are disclosed and claimed.1-29. (canceled) 30. An apparatus, comprising:
a processor, a first panel having a first input device that includes a keyboard; a second panel coupled via a hinge to the first panel, the second panel comprising a display panel operably coupled to the processor, the display panel comprising:
a first display side to present information in a first display mode when the apparatus is arranged in an open position; and
a second display side to present information in a second display mode when the apparatus is arranged in a closed position, the second panel arranged to provide a transparent display when the apparatus is in the open position such that an image is visible via the first display side and the second display side; and
a display adjusting module, at least a portion of the display adjusting module comprised in a software module, the display adjusting module to provide information to the processor to cause the display panel to present the image in one of a first orientation or a second orientation when the apparatus is arranged in the open position based on whether a swiping motion input is received from the first display side or the second display side, the display adjusting module to cause, via information provided to the processor the display panel to: present the image in the first orientation responsive to receiving the swiping motion input from the first display side and the apparatus is arranged in the open position, and present the image in the second orientation responsive to receiving the swiping motion input from the second display side and the apparatus is arranged in the open position. | 2,600 |
10,589 | 10,589 | 15,849,016 | 2,674 | Generating a risk and constraint labeled context map of an operational space is provided. The risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit is generated to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning. Labeled risks and constraints in the risk and constraint labeled context map are associated with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints. An apparatus embedded in the cognitive suit is actuated to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. | 1. A computer-implemented method for generating a risk and constraint labeled context map of an operational space, the computer-implemented method comprising:
generating, by a data processing system, the risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning; associating, by the data processing system, labeled risks and constraints in the risk and constraint labeled context map with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints; and actuating, by the data processing system, an apparatus embedded in the cognitive suit to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. 2. The computer-implemented method of claim 1 further comprising:
detecting, by the data processing system, that the user of the cognitive suit has entered the operational space corresponding to the user; and
generating, by the data processing system, a three-dimensional map of the operational space using captured images of the operational space by an imaging system coupled to the data processing system. 3. The computer-implemented method of claim 2 further comprising:
generating, by the data processing system, a set of virtual reality scenes within the operational space based on the three-dimensional map;
selecting, by the data processing system, a virtual reality scene from the set of virtual reality scenes; and
segmenting, by the data processing system, the selected virtual reality scene using a three-dimension segmentation process. 4. The computer-implemented method of claim 3 further comprising:
extracting, by the data processing system, image features from the selected virtual reality scene;
determining, by the data processing system, whether the extracted image features are similar to image features of a previously labeled virtual reality scene; and
responsive to the data processing system determining that the extracted image features are similar to image features of a previously labeled virtual reality scene, assigning, by the data processing system, a same label corresponding to the previously labeled virtual reality scene to the selected virtual reality scene. 5. The computer-implemented method of claim 4 further comprising:
responsive to the data processing system determining that the extracted image features are not similar to image features of a previously labeled virtual reality scene, determining, by the data processing system, whether the data processing system received an indication that the selected virtual reality scene contains a risk or a constraint to the user;
responsive to the data processing system determining that the data processing system did receive an indication that the selected virtual reality scene contains a risk or a constraint to the user, determining, by the data processing system, whether the indication corresponds to a user risk;
responsive to the data processing system determining that the indication does correspond to a user risk, assigning, by the data processing system, a risk label to the selected virtual reality scene; and
responsive to the data processing system determining that the indication does not correspond to a user risk, assigning, by the data processing system, a constraint label to the selected virtual reality scene. 6. The computer-implemented method of claim 1 further comprising:
mapping, by the data processing system, each labeled risk and each labeled constraint within the risk and constraint labeled context map of the operational space to a corresponding cognitive suit actuation event. 7. The computer-implemented method of claim 1 further comprising:
predicting, by the data processing system, a current trajectory of the user of the cognitive suit within the operational space based on historical trajectory information and geolocation data. 8. The computer-implemented method of claim 1 further comprising:
responsive to the data processing system determining that the user of the cognitive suit will encounter a labeled risk or a labeled constraint within the operational space along a predicted trajectory of the user, determining, by the data processing system, whether the user is within a defined threshold distance to the labeled risk or the labeled constraint;
responsive to the data processing system determining that the user is within the defined threshold distance to the labeled risk or the labeled constraint, identifying, by the data processing system, a cognitive suit actuation event that corresponds to the labeled risk or the labeled constraint within the defined threshold distance based on a mapping of labeled risks and constraints to corresponding cognitive suit actuation events; and
actuating, by the data processing system, the cognitive suit actuation event that corresponds to the labeled risk or the labeled constraint within the defined threshold distance. 9. The computer-implemented method of claim 8 further comprising:
receiving, by the data processing system, sensor data from a set of sensors embedded in the cognitive suit; and
regulating, by the data processing system, a level of actuation of the cognitive suit actuation event based on the received sensor data. 10. The computer-implemented method of claim 9, wherein the set of sensors include a geographical position sensor and an object proximity detector. 11. The computer-implemented method of claim 9 further comprising:
receiving, by the data processing system, feedback from the user of the cognitive suit regarding the level of actuation of the cognitive suit. 12. The computer-implemented method of claim 1, wherein the cognitive suit is a risk prediction and reduction cognitive suit. 13. The computer-implemented method of claim 12, wherein the data processing system is embedded in the risk prediction and reduction cognitive suit. 14. The computer-implemented method of claim 1, wherein the apparatus is an inflatable/deflatable apparatus. 15. The computer-implemented method of claim 1, wherein the set of mitigation strategies includes a risk warning event when a labeled risk and a labeled constraint exist within the operational space proximate to the user of the cognitive suit, a risk protection event when a labeled risk exists within the operational space proximate to the user and not a labeled constraint, a constraint relaxation event when a labeled constraint exists within the operational space proximate to the user and not a labeled risk, and a no action event when neither a labeled risk nor a labeled constraint exists within the operational space proximate to the user. 16. A data processing system for generating a risk and constraint labeled context map of an operational space, the data processing system comprising:
a bus system; a storage device connected to the bus system, wherein the storage device stores program instructions; and a processor connected to the bus system, wherein the processor executes the program instructions to:
generate the risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning;
associate labeled risks and constraints in the risk and constraint labeled context map with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints; and
actuate an apparatus embedded in the cognitive suit to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. 17. A computer program product for generating a risk and constraint labeled context map of an operational space, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a data processing system to cause the data processing system to perform a method comprising:
generating, by the data processing system, the risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning; associating, by the data processing system, labeled risks and constraints in the risk and constraint labeled context map with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints; and actuating, by the data processing system, an apparatus embedded in the cognitive suit to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. 18. The computer program product of claim 17 further comprising:
detecting, by the data processing system, that the user of the cognitive suit has entered the operational space corresponding to the user; and
generating, by the data processing system, a three-dimensional map of the operational space using captured images of the operational space by an imaging system coupled to the data processing system. 19. The computer program product of claim 18 further comprising:
generating, by the data processing system, a set of virtual reality scenes within the operational space based on the three-dimensional map;
selecting, by the data processing system, a virtual reality scene from the set of virtual reality scenes; and
segmenting, by the data processing system, the selected virtual reality scene using a three-dimension segmentation process. 20. The computer program product of claim 19 further comprising:
extracting, by the data processing system, image features from the selected virtual reality scene;
determining, by the data processing system, whether the extracted image features are similar to image features of a previously labeled virtual reality scene; and
responsive to the data processing system determining that the extracted image features are similar to image features of a previously labeled virtual reality scene, assigning, by the data processing system, a same label corresponding to the previously labeled virtual reality scene to the selected virtual reality scene. | Generating a risk and constraint labeled context map of an operational space is provided. The risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit is generated to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning. Labeled risks and constraints in the risk and constraint labeled context map are associated with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints. An apparatus embedded in the cognitive suit is actuated to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space.1. A computer-implemented method for generating a risk and constraint labeled context map of an operational space, the computer-implemented method comprising:
generating, by a data processing system, the risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning; associating, by the data processing system, labeled risks and constraints in the risk and constraint labeled context map with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints; and actuating, by the data processing system, an apparatus embedded in the cognitive suit to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. 2. The computer-implemented method of claim 1 further comprising:
detecting, by the data processing system, that the user of the cognitive suit has entered the operational space corresponding to the user; and
generating, by the data processing system, a three-dimensional map of the operational space using captured images of the operational space by an imaging system coupled to the data processing system. 3. The computer-implemented method of claim 2 further comprising:
generating, by the data processing system, a set of virtual reality scenes within the operational space based on the three-dimensional map;
selecting, by the data processing system, a virtual reality scene from the set of virtual reality scenes; and
segmenting, by the data processing system, the selected virtual reality scene using a three-dimension segmentation process. 4. The computer-implemented method of claim 3 further comprising:
extracting, by the data processing system, image features from the selected virtual reality scene;
determining, by the data processing system, whether the extracted image features are similar to image features of a previously labeled virtual reality scene; and
responsive to the data processing system determining that the extracted image features are similar to image features of a previously labeled virtual reality scene, assigning, by the data processing system, a same label corresponding to the previously labeled virtual reality scene to the selected virtual reality scene. 5. The computer-implemented method of claim 4 further comprising:
responsive to the data processing system determining that the extracted image features are not similar to image features of a previously labeled virtual reality scene, determining, by the data processing system, whether the data processing system received an indication that the selected virtual reality scene contains a risk or a constraint to the user;
responsive to the data processing system determining that the data processing system did receive an indication that the selected virtual reality scene contains a risk or a constraint to the user, determining, by the data processing system, whether the indication corresponds to a user risk;
responsive to the data processing system determining that the indication does correspond to a user risk, assigning, by the data processing system, a risk label to the selected virtual reality scene; and
responsive to the data processing system determining that the indication does not correspond to a user risk, assigning, by the data processing system, a constraint label to the selected virtual reality scene. 6. The computer-implemented method of claim 1 further comprising:
mapping, by the data processing system, each labeled risk and each labeled constraint within the risk and constraint labeled context map of the operational space to a corresponding cognitive suit actuation event. 7. The computer-implemented method of claim 1 further comprising:
predicting, by the data processing system, a current trajectory of the user of the cognitive suit within the operational space based on historical trajectory information and geolocation data. 8. The computer-implemented method of claim 1 further comprising:
responsive to the data processing system determining that the user of the cognitive suit will encounter a labeled risk or a labeled constraint within the operational space along a predicted trajectory of the user, determining, by the data processing system, whether the user is within a defined threshold distance to the labeled risk or the labeled constraint;
responsive to the data processing system determining that the user is within the defined threshold distance to the labeled risk or the labeled constraint, identifying, by the data processing system, a cognitive suit actuation event that corresponds to the labeled risk or the labeled constraint within the defined threshold distance based on a mapping of labeled risks and constraints to corresponding cognitive suit actuation events; and
actuating, by the data processing system, the cognitive suit actuation event that corresponds to the labeled risk or the labeled constraint within the defined threshold distance. 9. The computer-implemented method of claim 8 further comprising:
receiving, by the data processing system, sensor data from a set of sensors embedded in the cognitive suit; and
regulating, by the data processing system, a level of actuation of the cognitive suit actuation event based on the received sensor data. 10. The computer-implemented method of claim 9, wherein the set of sensors include a geographical position sensor and an object proximity detector. 11. The computer-implemented method of claim 9 further comprising:
receiving, by the data processing system, feedback from the user of the cognitive suit regarding the level of actuation of the cognitive suit. 12. The computer-implemented method of claim 1, wherein the cognitive suit is a risk prediction and reduction cognitive suit. 13. The computer-implemented method of claim 12, wherein the data processing system is embedded in the risk prediction and reduction cognitive suit. 14. The computer-implemented method of claim 1, wherein the apparatus is an inflatable/deflatable apparatus. 15. The computer-implemented method of claim 1, wherein the set of mitigation strategies includes a risk warning event when a labeled risk and a labeled constraint exist within the operational space proximate to the user of the cognitive suit, a risk protection event when a labeled risk exists within the operational space proximate to the user and not a labeled constraint, a constraint relaxation event when a labeled constraint exists within the operational space proximate to the user and not a labeled risk, and a no action event when neither a labeled risk nor a labeled constraint exists within the operational space proximate to the user. 16. A data processing system for generating a risk and constraint labeled context map of an operational space, the data processing system comprising:
a bus system; a storage device connected to the bus system, wherein the storage device stores program instructions; and a processor connected to the bus system, wherein the processor executes the program instructions to:
generate the risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning;
associate labeled risks and constraints in the risk and constraint labeled context map with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints; and
actuate an apparatus embedded in the cognitive suit to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. 17. A computer program product for generating a risk and constraint labeled context map of an operational space, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a data processing system to cause the data processing system to perform a method comprising:
generating, by the data processing system, the risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning; associating, by the data processing system, labeled risks and constraints in the risk and constraint labeled context map with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints; and actuating, by the data processing system, an apparatus embedded in the cognitive suit to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space. 18. The computer program product of claim 17 further comprising:
detecting, by the data processing system, that the user of the cognitive suit has entered the operational space corresponding to the user; and
generating, by the data processing system, a three-dimensional map of the operational space using captured images of the operational space by an imaging system coupled to the data processing system. 19. The computer program product of claim 18 further comprising:
generating, by the data processing system, a set of virtual reality scenes within the operational space based on the three-dimensional map;
selecting, by the data processing system, a virtual reality scene from the set of virtual reality scenes; and
segmenting, by the data processing system, the selected virtual reality scene using a three-dimension segmentation process. 20. The computer program product of claim 19 further comprising:
extracting, by the data processing system, image features from the selected virtual reality scene;
determining, by the data processing system, whether the extracted image features are similar to image features of a previously labeled virtual reality scene; and
responsive to the data processing system determining that the extracted image features are similar to image features of a previously labeled virtual reality scene, assigning, by the data processing system, a same label corresponding to the previously labeled virtual reality scene to the selected virtual reality scene. | 2,600 |
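The feature-matching step recited in claims 19-20 above (extract image features from the selected virtual reality scene, compare them to a previously labeled scene, and reuse that scene's label on a close match) can be sketched as nearest-neighbor label propagation. This is an illustrative sketch only: the feature vectors, label strings, cosine-similarity metric, and threshold below are assumptions, not details from the application.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def label_scene(features, labeled_scenes, threshold=0.9):
    # Compare the extracted features of the selected scene against
    # previously labeled scenes; if the best match is similar enough,
    # assign that scene's label, otherwise leave the scene unlabeled.
    best_label, best_sim = None, 0.0
    for scene_features, label in labeled_scenes:
        sim = cosine_similarity(features, scene_features)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label if best_sim >= threshold else None

labeled = [
    ([1.0, 0.0, 0.2], "risk:wet_floor"),
    ([0.0, 1.0, 0.1], "constraint:low_ceiling"),
]
print(label_scene([0.9, 0.05, 0.2], labeled))  # close to the first scene
```

A production system would extract the feature vectors with a 3-D segmentation and feature-extraction pipeline; the dictionary-of-lists form here only illustrates the similar-features-same-label decision.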
10,590 | 10,590 | 15,217,630 | 2,631 | One embodiment provides a method, including: receiving, at a security device, external data; and adjusting, at the security device, a motion detection feature based on the external data. Other aspects are described and claimed. | 1. A method, comprising:
receiving, at a security device, external data; and adjusting, at the security device, a motion detection feature based on the external data. 2. The method of claim 1, further comprising:
detecting, at the security device, motion of an object; and determining, at the security device, that the motion of the object should be filtered out based on the external condition data. 3. The method of claim 1, wherein the external data is external condition data. 4. The method of claim 3, wherein the external condition data is current condition data. 5. The method of claim 4, wherein the current condition data is weather data. 6. The method of claim 3, wherein the external condition data is updated according to a policy. 7. The method of claim 1, wherein the external data is a predetermined object motion filter. 8. The method of claim 7, wherein the predetermined object motion filter comprises a predetermined pattern filter for vertical object movement. 9. The method of claim 7, wherein the predetermined object motion filter is a repetitive motion filter. 10. The method of claim 9, wherein the predetermined object motion filter is applied to an area in the field of view of the security device. 11. An electronic device, comprising:
a processor; and a memory device that stores instructions executable by the processor to: receive external data; and adjust a motion detection feature based on the external data. 12. The electronic device of claim 11, wherein the instructions are executable by the processor to:
detect motion of an object; and determine that the motion of the object should be filtered out based on the external condition data. 13. The electronic device of claim 11, wherein the external data is external condition data. 14. The electronic device of claim 13, wherein the external condition data is current condition data. 15. The electronic device of claim 14, wherein the current condition data is weather data. 16. The electronic device of claim 13, wherein the external condition data is updated according to a policy. 17. The electronic device of claim 11, wherein the external data is a predetermined object motion filter. 18. The electronic device of claim 17, wherein the predetermined object motion filter comprises a predetermined pattern filter for vertical object movement. 19. The electronic device of claim 17, wherein the predetermined object motion filter is a repetitive motion filter. 20. A product, comprising:
a storage device that stores code, the code being executable by a processor and comprising: code that receives external data; and code that adjusts a motion detection feature based on the external data. | 2,600
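As an illustration of how the adjustment claimed for row 10,590 might look in practice, here is a minimal, hypothetical sketch: external condition data (weather) and predetermined object-motion filters (repetitive or vertical movement in a given area of the field of view) decide whether detected motion should be filtered out. The event/weather dictionary layouts, field names, and wind threshold are assumptions, not part of the claims.

```python
def should_report_motion(event, weather, filters):
    # External condition data: high wind explains swaying vegetation,
    # so such motion is filtered out rather than reported.
    if weather.get("wind_kph", 0) > 30 and event.get("pattern") == "swaying":
        return False
    # Predetermined object-motion filters, e.g. repetitive motion (a flag)
    # or vertical movement (falling rain), applied to an area in the
    # security device's field of view.
    for f in filters:
        if event.get("pattern") == f["pattern"] and event.get("area") in f["areas"]:
            return False
    return True

filters = [{"pattern": "repetitive", "areas": ["driveway"]}]
print(should_report_motion({"pattern": "walking", "area": "porch"},
                           {"wind_kph": 10}, filters))  # a real alert
```

The claims also allow the external condition data to be refreshed "according to a policy"; in this sketch that would simply mean re-fetching the `weather` dictionary on a schedule before each decision.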
10,591 | 10,591 | 16,259,700 | 2,687 | A system and method for quieting unwanted sound. As a non-limiting example, various aspects of this disclosure provide a system and method, for example implemented in a premises-based or home audio system, for quieting unwanted sound at a particular location. | 1-22. (canceled) 23. An audio system comprising:
at least one module operable to, at least:
characterize a relationship between a sound signal and a received sound utilizing a microphone that is positioned outside a target zone of sound cancellation, where the sound signal is used to cause a loudspeaker to output an output sound and the received sound is the output sound as received at a location;
identify a sound at the location;
determine, based at least in part on the characterized relationship, a cancellation signal that, when used to cause the loudspeaker to output a counteracting sound, will quiet the identified sound at the location; and
generate the cancellation signal to cause the loudspeaker to output the counteracting sound,
wherein the at least one module is operable to determine the cancellation signal by, at least in part, operating to, independent of user input, set a magnitude of the cancellation signal to a level below that determined for ideal cancellation. 24. The audio system of claim 23, wherein the at least one module is operable to characterize the relationship by, at least in part, operating to:
generate a test stimulus signal to cause the loudspeaker to output a test sound; receive the test sound at the location; and analyze the test stimulus signal and the received test sound to determine a functional relationship between the test stimulus signal and the received test sound. 25. The audio system of claim 24, wherein the test stimulus signal comprises a pseudo-random noise signal comprising a pink noise signal. 26. The audio system of claim 24, wherein the test stimulus signal comprises a swept frequency signal comprising frequencies in a range up to at least twice a quieting passband of the audio system. 27. The audio system of claim 23, wherein the magnitude of the cancellation signal is 6-10 dB lower than the level determined for ideal cancellation. 28. The audio system of claim 23, wherein the at least one module is operable to characterize the relationship between the sound signal and the received sound by, at least in part, utilizing a plurality of microphones where each of the plurality of microphones is positioned at a respective fixed geographical location. 29. The audio system of claim 28, wherein each of the respective fixed geographical locations is within a same room of a premises. 30. An audio system comprising:
at least one module operable to, at least:
characterize a relationship between a sound signal and a received sound utilizing a plurality of microphones, where each of the plurality of microphones is positioned at a respective fixed geographical location, and where the sound signal is used to cause a loudspeaker to output an output sound and the received sound is the output sound as received at a location;
identify a sound at the location;
determine, based at least in part on the characterized relationship, a cancellation signal that, when used to cause the loudspeaker to output a counteracting sound, will quiet the identified sound at the location; and
generate the cancellation signal to cause the loudspeaker to output the counteracting sound,
wherein the at least one module is operable to determine the cancellation signal by, at least in part, operating to, independent of user input, set a magnitude of the cancellation signal to a level below that determined for ideal cancellation. 31. The audio system of claim 30, wherein each of the respective fixed geographical locations is within a same room of a premises. 32. The audio system of claim 30, wherein the at least one module is operable to characterize the relationship by, at least in part, operating to:
generate a test stimulus signal to cause the loudspeaker to output a test sound; receive the test sound at the location; and analyze the test stimulus signal and the received test sound to determine a functional relationship between the test stimulus signal and the received test sound. 33. The audio system of claim 32, wherein the test stimulus signal comprises a pseudo-random noise signal comprising a pink noise signal. 34. The audio system of claim 32, wherein the test stimulus signal comprises a swept frequency signal comprising frequencies in a range up to at least twice a quieting passband of the audio system. 35. The audio system of claim 30, wherein the magnitude of the cancellation signal is 6-10 dB lower than the level determined for ideal cancellation. 36. An audio system comprising:
at least one module operable to, at least:
characterize a relationship between a sound signal and a received sound, where the sound signal is used to cause a loudspeaker to output an output sound, and the received sound is the output sound as received at a location;
identify an unwanted sound at the location;
determine, based at least in part on the characterized relationship, a cancellation signal that, when used to cause the loudspeaker to output a counteracting sound, will quiet the unwanted sound at the location; and
generate the cancellation signal to cause the loudspeaker to output the counteracting sound,
wherein the at least one module is operable to determine the cancellation signal by, at least in part, operating to, independent of user input, set a magnitude of the cancellation signal to a level below that determined for ideal cancellation. 37. The audio system of claim 36, wherein the at least one module is operable to characterize the relationship by, at least in part, operating to:
generate a test stimulus signal to cause the loudspeaker to output a test sound; receive the test sound at the location; and analyze the test stimulus signal and the received test sound to determine a functional relationship between the test stimulus signal and the received test sound. 38. The audio system of claim 37, wherein the test stimulus signal comprises a pseudo-random noise signal comprising a pink noise signal. 39. The audio system of claim 37, wherein the test stimulus signal comprises a swept frequency signal comprising frequencies in a range up to at least twice a quieting passband of the audio system. 40. The audio system of claim 36, wherein the magnitude of the cancellation signal is 6-10 dB lower than the level determined for ideal cancellation. 41. The audio system of claim 36, wherein the at least one module is operable to characterize the relationship between the sound signal and the received sound by, at least in part, utilizing a microphone that is outside a target zone of sound cancellation. 42. The audio system of claim 36, wherein the at least one module is operable to characterize the relationship between the sound signal and the received sound by, at least in part, utilizing a plurality of microphones positioned at fixed geographical locations. | 2,600
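A toy numerical sketch of the characterize-then-cancel flow in the quieting claims: the loudspeaker-to-location response is modeled as a single least-squares scalar gain (a real system would estimate a frequency-dependent transfer function from pink noise or a swept tone), and the cancellation magnitude is deliberately set several dB below ideal, as the 6-10 dB range in claims 27/35/40 describes. The function names, the scalar-gain model, and the default backoff value are illustrative assumptions.

```python
def characterize_path(stimulus, received):
    # Least-squares scalar gain relating the test stimulus signal to the
    # test sound received at the target location.
    return sum(r * s for r, s in zip(received, stimulus)) / sum(s * s for s in stimulus)

def cancellation_signal(identified_sound, path_gain, backoff_db=8.0):
    # Phase-invert the identified sound, compensate for the loudspeaker
    # path, and set the magnitude below the level determined for ideal
    # cancellation rather than risk audible over-cancellation.
    backoff = 10 ** (-backoff_db / 20.0)  # dB below ideal -> linear factor
    return [-(s / path_gain) * backoff for s in identified_sound]

gain = characterize_path([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # path doubles the signal
cancel = cancellation_signal([1.0, -2.0], gain, backoff_db=20.0)
print(gain, cancel)
```

With the path gain of 2.0 recovered from the test sound, ideal cancellation of `[1.0, -2.0]` would be `[-0.5, 1.0]`; the 20 dB backoff scales that to one tenth, so the residual sound is quieter but never inverted in sign.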
10,592 | 10,592 | 15,724,826 | 2,637 | The present invention relates to the field of network communications. An Optical Line Terminal (OLT) allocates a Pseudo Wire (PW) label of an access segment PW for a port, and establishes a corresponding relationship between the port information and the PW label; and carries the corresponding relationship between the port information and the PW label in a label management message, and sends the label management message to an Optical Network Unit (ONU) so that the ONU updates a forwarding table, in which the label management message adopts an access network management protocol. As a consequence, a problem of supporting Pseudo Wire Emulation Edge-to-Edge (PWE3) on a data plane of an access segment of an access network is solved under the conditions that device complexity of the ONU is not increased and a configuration of the ONU is slightly changed. | 1. A method for managing a label of an access network, comprising:
allocating, by an Optical Line Terminal (OLT), a Pseudo Wire (PW) label of an access segment PW for a first port of an Optical Network Unit (ONU) having at least two ports; establishing a corresponding relationship between an identifier of the first port of the at least two ports and the PW label; carrying, by the OLT, the corresponding relationship between the identifier of the first port of the at least two ports and the PW label in a label management message; sending the label management message to the ONU, wherein the label management message adopts an access network management protocol; and receiving, by the OLT, a frame from the ONU, where the frame is encapsulated with the PW label. 2. The method according to claim 1, wherein the method further comprises:
sending, by the OLT, a core segment query message to a server, wherein the core segment query message carries the identifier of the first port; receiving, by the OLT, a core segment response message sent by the server, wherein the core segment response message carries a core segment Provider Edge (PE) device address and a core segment Attachment Circuit (AC) identifier that correspond to the first port; establishing, by the OLT, a core segment PW to a core segment PE device according to the core segment PE device address and the core segment AC identifier, and allocating a PW label of the core segment PW; and establishing or updating, by the OLT, a mapping relationship between the PW label of the core segment PW and the PW label of the access segment PW. 3. The method according to claim 1, wherein before allocating the PW label of the access segment PW for the first port, the method further comprises:
establishing, by the OLT, a core segment PW to a core segment Provider Edge (PE) device, and allocating a PW label of the core segment PW; and acquiring, by the OLT, at least one of an access segment ONU port and an access segment OLT port corresponding to one of the PW label of the core segment PW and a core segment Attachment Circuit (AC) identifier of the core segment PW, acquiring an ONU address corresponding to one of the PW label of the core segment PW and the core segment AC identifier of the core segment PW, and establishing the access segment PW; and establishing or updating, by the OLT, a mapping relationship between the PW label of the core segment PW and the PW label of the access segment PW after sending the label management message to the ONU. 4. The method according to claim 3, wherein the acquiring step further comprises:
sending, by the OLT, an access segment query message to a server, wherein the access segment query message carries one of the PW label of the core segment PW and the core segment AC identifier of the core segment PW; and receiving, by the OLT, an access segment response message sent by the server, wherein the access segment response message carries the at least one of the access segment ONU port and OLT port corresponding to the one of the PW label of the core segment PW and the core segment AC identifier of the core segment PW, and an address corresponding to the PW label of the core segment PW or the core segment AC identifier of the core segment PW. 5. The method according to claim 1, wherein before sending the label management message to the ONU, the method further comprises:
receiving, by the OLT, a label request message before allocating the PW label, wherein the label request message carries the identifier of the first port. 6. The method according to claim 1, further comprising:
modifying, by the OLT, the PW label of the access segment PW for the first port, and modifying the corresponding relationship between the identifier of the first port and the PW label. 7. The method according to claim 1, wherein the access segment PW is based on Multiple Protocol Label Switching (MPLS). 8. A non-transitory computer readable medium including computer-executable instructions for execution on an Optical Line Terminal (OLT), such that when the computer-executable instructions are executed by the apparatus a method is carried out comprising:
allocating a Pseudo Wire (PW) label of an access segment PW for a first port of an Optical Network Unit (ONU) having at least two ports; establishing a corresponding relationship between an identifier of the first port of the at least two ports and the PW label; carrying the corresponding relationship between the identifier of the first port of the at least two ports and the PW label in a label management message; sending the label management message to the ONU wherein the label management message adopts an access network management protocol; and receiving a frame from the ONU, wherein the frame is encapsulated with the PW label. 9. The computer readable medium according to claim 8, further comprising computer-executable instructions for:
sending a core segment query message to a server, wherein the core segment query message carries an OLT port and an ONU port; receiving a core segment response message sent by the server, the core segment response message carries a core segment Provider Edge (PE) device address and a core segment Attachment Circuit (AC) identifier that correspond to the OLT port and the ONU port; establishing a core segment PW to a core segment PE device according to the core segment PE device address and the core segment AC identifier; allocating a PW label of the core segment PW; and modifying a mapping relationship between the PW label of the core segment PW and the PW label of the access segment PW. 10. The non-transitory computer readable medium of claim 8, wherein the access segment PW is based on Multiple Protocol Label Switching (MPLS). 11. A non-transitory computer readable medium including computer-executable instructions for execution on an Optical Network Unit (ONU), such that when the computer-executable instructions are executed by the apparatus a method is carried out comprising:
sending a label request to an Optical Line Terminal (OLT), wherein the label request carries an identifier of a port of the ONU; receiving a response comprising a correspondence between the identifier and a pseudowire (PW) label; storing the correspondence in the ONU; encapsulating received data into a frame by adding the PW label to the data; and sending the frame to the OLT. 12. A method for managing a label, comprising:
sending, by an Optical Network Unit (ONU), a label request to an Optical Line Terminal (OLT), wherein the label request carries an identifier of a port of the ONU; receiving, by the ONU, a response comprising a correspondence between the identifier and a pseudowire (PW) label; storing, by the ONU, the correspondence in the ONU; encapsulating, by the ONU, received data into a frame by adding the PW label to the data; and sending, by the ONU, the frame to the OLT.
sending, by the OLT, an access segment query message to a server, wherein the access segment query message carries one of the PW label of the core segment PW and the core segment AC identifier of the core segment PW; and receiving, by the OLT, an access segment response message sent by the server, wherein the access segment response message carries the at least one of the access segment ONU port and OLT port corresponding to the one of the PW label of the core segment PW and the core segment AC identifier of the core segment PW, and address corresponding to the PW label of the core segment PW or the core segment AC identifier of the core segment PW. 5. The method according to claim 1, wherein before sending the label management message to the ONU, the method further comprises:
receiving, by the OLT, a label request message before allocating the PW label, wherein the label request message carries the identifier of the first port. 6. The method according to claim 1, further comprising:
modifying, by the OLT, the PW label of the access segment PW for the first port, and modifying the corresponding relationship between the identifier of the first port and the PW label. 7. The method according to claim 1, wherein the access segment PW is based on Multiple Protocol Label Switching (MPLS). 8. A non-transitory computer readable medium including computer-executable instructions for execution on an Optical Line Terminal (OLT), such that when the computer-executable instructions are executed by the apparatus a method is carried out comprising:
allocating a Pseudo Wire (PW) label of an access segment PW for a first port of an Optical Network Unit (ONU) having at least two ports; establishing a corresponding relationship between an identifier of the first port of the at least two ports and the PW label; carrying the corresponding relationship between the identifier of the first port of the at least two ports and the PW label in a label management message; sending the label management message to the ONU wherein the label management message adopts an access network management protocol; and receiving a frame from the ONU, wherein the frame is encapsulated with the PW label. 9. The computer readable medium according to claim 8, further comprising computer-executable instructions for:
sending a core segment query message to a server, wherein the core segment query message carries an OLT port and an ONU port; receiving a core segment response message sent by the server, the core segment response message carries a core segment Provider Edge (PE) device address and a core segment Attachment Circuit (AC) identifier that correspond to the OLT port and the ONU port; establishing a core segment PW to a core segment PE device according to the core segment PE device address and the core segment AC identifier; allocating a PW label of the core segment PW; and modifying a mapping relationship between the PW label of the core segment PW and the PW label of the access segment PW. 10. The non-transitory computer readable medium of claim 8, wherein the access segment PW is based on Multiple Protocol Label Switching (MPLS). 11. A non-transitory computer readable medium including computer-executable instructions for execution on an Optical Network Unit (ONU), such that when the computer-executable instructions are executed by the apparatus a method is carried out comprising:
sending a label request to an Optical Line Terminal (OLT), wherein the label request carries an identifier of a port of the ONU; receiving a response comprising a correspondence between the identifier and a pseudowire (PW) label; storing the correspondence in the ONU; encapsulating received data into a frame by adding the PW label to the data; and sending the frame to the OLT. 12. A method for managing a label, comprising:
sending, by an Optical Network Unit (ONU), a label request to an Optical Line Terminal (OLT), wherein the label request carries an identifier of a port of the ONU; receiving, by the ONU, a response comprising a correspondence between the identifier and a pseudowire (PW) label; storing, by the ONU, the correspondence in the ONU; encapsulating, by the ONU, received data into a frame by adding the PW label to the data; and sending, by the ONU, the frame to the OLT. | 2,600 |
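The ONU-side label handling recited in claims 11-12 above (send a label request carrying a port identifier, store the returned identifier-to-PW-label correspondence, and encapsulate outgoing data with that label) can be sketched in Python. This is an illustrative model only; the class and message field names (`OnuLabelClient`, `label_request`, `pw_label`) are assumptions, not terms from the patent.

```python
# Sketch of the ONU label-management flow: request, store, encapsulate, send.
class OnuLabelClient:
    def __init__(self, send_to_olt):
        self.send_to_olt = send_to_olt   # callable that delivers a message to the OLT
        self.port_to_label = {}          # stored correspondence: port id -> PW label

    def request_label(self, port_id, olt_response):
        """Send a label request carrying the port identifier, then store the
        identifier-to-label correspondence from the OLT's response message."""
        self.send_to_olt({"type": "label_request", "port": port_id})
        # olt_response models the received response carrying the correspondence
        self.port_to_label[olt_response["port"]] = olt_response["pw_label"]

    def encapsulate(self, port_id, payload):
        """Encapsulate received data into a frame by adding the PW label."""
        label = self.port_to_label[port_id]
        return {"pw_label": label, "payload": payload}
```

In use, the frame returned by `encapsulate` would then be sent upstream to the OLT, which maps the access-segment PW label to a core-segment PW label as in claim 2.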
10,593 | 10,593 | 15,714,970 | 2,648 | An electronic device may be provided with wireless communications circuitry and control circuitry. The wireless communications circuitry may include centimeter wave and millimeter wave transceiver circuitry and a phased antenna array. The phased antenna array may transmit and receive wireless signals having a frequency higher than 10 GHz. Beam steering circuitry may be coupled to the phased antenna array and may be adjusted to steer the wireless signals to communicate with external equipment. Sensor circuitry in the electronic device may gather sensor data. The control circuitry may use the gathered sensor data to determine a polarization mismatch between the electronic device and the external equipment (e.g., between a signal transmitting device and a signal receiving equipment or vice versa). To mitigate the polarization mismatch data loss, the control circuitry may adjust the polarization settings associated with antennas of the electronic device. | 1. An electronic device configured to wirelessly communicate with an external device, the electronic device comprising:
an antenna configured to transmit wireless signals at a first polarization; sensor circuitry that generates sensor data; and control circuitry coupled to the sensor circuitry, wherein the control circuitry is configured to identify a second polarization that is different from the first polarization based on the generated sensor data and to control the antenna to transmit the wireless signals at the second polarization. 2. The electronic device defined in claim 1, wherein the control circuitry is configured to compare the generated sensor data to a predetermined range of sensor values and to control the antenna to transmit the wireless signals at the second polarization in response to identifying that the sensor data is outside of the predetermined range of sensor values. 3. The electronic device defined in claim 1, wherein the sensor circuitry comprises an orientation sensor and the generated sensor data comprises orientation data indicative of an orientation of the electronic device. 4. The electronic device defined in claim 1, wherein the control circuitry is configured to:
gather wireless performance metric data associated with the wireless signals at the first polarization; compare the gathered wireless performance metric data to a predetermined range of performance metric data values; and control the antenna to transmit the wireless signals at the second polarization in response to determining that the gathered wireless performance metric data is outside of the predetermined range of wireless performance metric data values. 5. The electronic device defined in claim 1, wherein the antenna comprises an antenna resonating element having an antenna feed, the electronic device further comprising:
a phase and magnitude controller coupled to the antenna feed, wherein the phase and magnitude controller is configured to exhibit a first set of phase and magnitude settings while the antenna is configured to transmit the wireless signals at the first polarization. 6. The electronic device defined in claim 5, wherein the antenna resonating element includes an additional antenna feed, the electronic device further comprising:
an additional phase and magnitude controller coupled to the additional antenna feed, wherein the additional phase and magnitude controller is configured to exhibit a second set of phase and magnitude settings while the antenna is configured to transmit the wireless signals at the first polarization. 7. The electronic device defined in claim 6, wherein the phase and magnitude controller is configured to exhibit a third set of phase and magnitude settings that is different from the first set of phase and magnitude settings while the antenna transmits the wireless signals at the second polarization. 8. The electronic device defined in claim 5, wherein the antenna comprises an additional antenna resonating element having an additional antenna feed, the electronic device further comprising:
an additional phase and magnitude controller coupled to the additional antenna feed, wherein the control circuitry is configured to control the phase and magnitude controller and the additional phase and magnitude controller to steer the transmitted wireless signals towards the external device. 9. The electronic device defined in claim 1, wherein the transmitted signals comprise millimeter wave signals. 10. An electronic device configured to wirelessly communicate with external equipment, the electronic device comprising:
a phased antenna array configured to convey wireless signals at a frequency greater than 10 GHz; and control circuitry coupled to the phased antenna array, wherein the control circuitry is configured to identify a polarization mismatch between the phased antenna array and the external equipment and to adjust a polarization of the phased antenna array based on the identified polarization mismatch. 11. The electronic device defined in claim 10, the electronic device further comprising:
sensor circuitry that generates sensor data, wherein the control circuitry is configured to identify the polarization mismatch by comparing the generated sensor data to a predetermined range of sensor data values. 12. The electronic device defined in claim 11, wherein the control circuitry is configured to maintain the polarization of the phased antenna array in response to determining that the generated sensor data is within the predetermined range of sensor data values. 13. The electronic device defined in claim 12, wherein the phased antenna array comprises a plurality of antennas and a respective phase and magnitude controller coupled to each antenna in the plurality of antennas, and the control circuitry is configured to adjust the polarization of the phased antenna array by adjusting phase and magnitude settings of the phase and magnitude controllers in the phased antenna array. 14. The electronic device defined in claim 10, wherein the phased antenna array comprises a plurality of antennas each having an antenna feed, and a first subset of antennas in the plurality of antennas is configured to convey signals having a first polarization. 15. The electronic device defined in claim 14, wherein a second subset of antennas in the plurality of antennas is configured to convey signals having a second polarization that is different than the first polarization. 16. A method of operating an electronic device to communicate with an external device, wherein the electronic device includes a phased antenna array and control circuitry coupled to the phased antenna array, the method comprising:
with the phased antenna array, transmitting wireless signals at a frequency greater than 10 GHz using a signal polarization setting; with the control circuitry, identifying a difference in orientation between the electronic device and the external device; and with the control circuitry, adjusting the signal polarization setting based on the identified difference in orientation between the electronic device and the external device. 17. The method defined in claim 16, wherein identifying the difference in orientation comprises:
with sensor circuitry, generating orientation data for the electronic device; with the control circuitry, receiving the generated orientation data for the electronic device; and with the control circuitry, processing the orientation data for the electronic device and orientation data for the external device to generate the difference in orientation between the electronic device and the external device. 18. The method defined in claim 17, wherein the sensor circuitry comprises an accelerometer. 19. The method defined in claim 17, further comprising:
with the control circuitry, receiving the orientation data for the external device by receiving wireless signals at a frequency less than 10 GHz at the electronic device. 20. The method defined in claim 17, wherein the sensor circuitry comprises an image sensor, the method further comprising:
with the image sensor, generating the orientation data for the external device. | An electronic device may be provided with wireless communications circuitry and control circuitry. The wireless communications circuitry may include centimeter wave and millimeter wave transceiver circuitry and a phased antenna array. The phased antenna array may transmit and receive wireless signals having a frequency higher than 10 GHz. Beam steering circuitry may be coupled to the phased antenna array and may be adjusted to steer the wireless signals to communicate with external equipment. Sensor circuitry in the electronic device may gather sensor data. The control circuitry may use the gathered sensor data to determine a polarization mismatch between the electronic device and the external equipment (e.g., between a signal transmitting device and a signal receiving equipment or vice versa). To mitigate the polarization mismatch data loss, the control circuitry may adjust the polarization settings associated with antennas of the electronic device.1. An electronic device configured to wirelessly communicate with an external device, the electronic device comprising:
an antenna configured to transmit wireless signals at a first polarization; sensor circuitry that generates sensor data; and control circuitry coupled to the sensor circuitry, wherein the control circuitry is configured to identify a second polarization that is different from the first polarization based on the generated sensor data and to control the antenna to transmit the wireless signals at the second polarization. 2. The electronic device defined in claim 1, wherein the control circuitry is configured to compare the generated sensor data to a predetermined range of sensor values and to control the antenna to transmit the wireless signals at the second polarization in response to identifying that the sensor data is outside of the predetermined range of sensor values. 3. The electronic device defined in claim 1, wherein the sensor circuitry comprises an orientation sensor and the generated sensor data comprises orientation data indicative of an orientation of the electronic device. 4. The electronic device defined in claim 1, wherein the control circuitry is configured to:
gather wireless performance metric data associated with the wireless signals at the first polarization; compare the gathered wireless performance metric data to a predetermined range of performance metric data values; and control the antenna to transmit the wireless signals at the second polarization in response to determining that the gathered wireless performance metric data is outside of the predetermined range of wireless performance metric data values. 5. The electronic device defined in claim 1, wherein the antenna comprises an antenna resonating element having an antenna feed, the electronic device further comprising:
a phase and magnitude controller coupled to the antenna feed, wherein the phase and magnitude controller is configured to exhibit a first set of phase and magnitude settings while the antenna is configured to transmit the wireless signals at the first polarization. 6. The electronic device defined in claim 5, wherein the antenna resonating element includes an additional antenna feed, the electronic device further comprising:
an additional phase and magnitude controller coupled to the additional antenna feed, wherein the additional phase and magnitude controller is configured to exhibit a second set of phase and magnitude settings while the antenna is configured to transmit the wireless signals at the first polarization. 7. The electronic device defined in claim 6, wherein the phase and magnitude controller is configured to exhibit a third set of phase and magnitude settings that is different from the first set of phase and magnitude settings while the antenna transmits the wireless signals at the second polarization. 8. The electronic device defined in claim 5, wherein the antenna comprises an additional antenna resonating element having an additional antenna feed, the electronic device further comprising:
an additional phase and magnitude controller coupled to the additional antenna feed, wherein the control circuitry is configured to control the phase and magnitude controller and the additional phase and magnitude controller to steer the transmitted wireless signals towards the external device. 9. The electronic device defined in claim 1, wherein the transmitted signals comprise millimeter wave signals. 10. An electronic device configured to wirelessly communicate with external equipment, the electronic device comprising:
a phased antenna array configured to convey wireless signals at a frequency greater than 10 GHz; and control circuitry coupled to the phased antenna array, wherein the control circuitry is configured to identify a polarization mismatch between the phased antenna array and the external equipment and to adjust a polarization of the phased antenna array based on the identified polarization mismatch. 11. The electronic device defined in claim 10, the electronic device further comprising:
sensor circuitry that generates sensor data, wherein the control circuitry is configured to identify the polarization mismatch by comparing the generated sensor data to a predetermined range of sensor data values. 12. The electronic device defined in claim 11, wherein the control circuitry is configured to maintain the polarization of the phased antenna array in response to determining that the generated sensor data is within the predetermined range of sensor data values. 13. The electronic device defined in claim 12, wherein the phased antenna array comprises a plurality of antennas and a respective phase and magnitude controller coupled to each antenna in the plurality of antennas, and the control circuitry is configured to adjust the polarization of the phased antenna array by adjusting phase and magnitude settings of the phase and magnitude controllers in the phased antenna array. 14. The electronic device defined in claim 10, wherein the phased antenna array comprises a plurality of antennas each having an antenna feed, and a first subset of antennas in the plurality of antennas is configured to convey signals having a first polarization. 15. The electronic device defined in claim 14, wherein a second subset of antennas in the plurality of antennas is configured to convey signals having a second polarization that is different than the first polarization. 16. A method of operating an electronic device to communicate with an external device, wherein the electronic device includes a phased antenna array and control circuitry coupled to the phased antenna array, the method comprising:
with the phased antenna array, transmitting wireless signals at a frequency greater than 10 GHz using a signal polarization setting; with the control circuitry, identifying a difference in orientation between the electronic device and the external device; and with the control circuitry, adjusting the signal polarization setting based on the identified difference in orientation between the electronic device and the external device. 17. The method defined in claim 16, wherein identifying the difference in orientation comprises:
with sensor circuitry, generating orientation data for the electronic device; with the control circuitry, receiving the generated orientation data for the electronic device; and with the control circuitry, processing the orientation data for the electronic device and orientation data for the external device to generate the difference in orientation between the electronic device and the external device. 18. The method defined in claim 17, wherein the sensor circuitry comprises an accelerometer. 19. The method defined in claim 17, further comprising:
with the control circuitry, receiving the orientation data for the external device by receiving wireless signals at a frequency less than 10 GHz at the electronic device. 20. The method defined in claim 17, wherein the sensor circuitry comprises an image sensor, the method further comprising:
with the image sensor, generating the orientation data for the external device. | 2,600 |
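The polarization-mismatch logic of claims 10-12 and 16 above (compare an orientation difference against a predetermined range, maintain the polarization inside that range, otherwise adjust it) can be sketched as follows. The tolerance value, function names, and the idealized dual-feed model (relative feed magnitudes cos θ and sin θ synthesizing a linear polarization at angle θ) are illustrative assumptions, not the patent's implementation.

```python
import math

TOLERANCE_DEG = 10.0  # assumed "predetermined range" of orientation mismatch

def feed_weights(polarization_deg):
    """Relative magnitudes for the two feeds of a dual-feed antenna element
    that synthesize a linear polarization at the given angle (lossless model)."""
    theta = math.radians(polarization_deg)
    return math.cos(theta), math.sin(theta)

def adjust_polarization(device_roll_deg, remote_roll_deg, current_pol_deg):
    """Keep the current polarization setting while the orientation difference
    between the devices stays within the tolerance; otherwise rotate the
    setting to compensate for the identified mismatch."""
    mismatch = (remote_roll_deg - device_roll_deg) % 360.0
    if min(mismatch, 360.0 - mismatch) <= TOLERANCE_DEG:
        return current_pol_deg  # within range: maintain polarization (claim 12)
    return (current_pol_deg + mismatch) % 360.0
```

In a phased array, the returned polarization angle would be translated into per-feed phase and magnitude settings (as in claims 5-8) rather than applied directly.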
10,594 | 10,594 | 14,290,361 | 2,625 | Systems and methods for monitor brightness control are disclosed. The method includes connecting with a device via a dock, the device including a sensor configured to detect a lighting condition of an environment surrounding the device. The method further includes linking the dock with a monitor. The method further includes detecting the lighting condition. Additionally, in response to a change in the lighting condition, the method includes matching the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables and adjusting a brightness level of the monitor based on the monitor brightness setting. | 1. A method for monitor brightness control comprising:
connecting with a device via a dock, the device including a sensor configured to detect a lighting condition of an environment surrounding the device; linking the dock with a monitor; detecting the lighting condition; in response to a change in the lighting condition, matching the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables; and adjusting a brightness level of the monitor based on the monitor brightness setting. 2. The method of claim 1, the method further comprising:
matching the plurality of monitor brightness settings to the plurality of lighting conditions, wherein the plurality of lighting conditions are based on a plurality of field data including a plurality of user monitor brightness settings for a plurality of actual lighting conditions; and generating the plurality of brightness look-up-tables based on the plurality of field data. 3. The method of claim 1, further comprising selecting one of the plurality of brightness look-up-tables based on a type of the monitor. 4. The method of claim 1, wherein the monitor brightness settings in the plurality of brightness look-up-tables are updated based on a manual change in the brightness level of the monitor made by a user of the monitor. 5. The method of claim 1, wherein adjusting the brightness level of the monitor requires a user to be logged into an information handling system communicatively coupled to the monitor. 6. The method of claim 1, wherein the lighting condition is detected by a single sensor. 7. The method of claim 1, further comprising selecting the monitor brightness setting based on a device brightness setting of the device. 8. An information handling system comprising:
a processor; a memory communicatively coupled to the processor; a docking station communicatively coupled to the processor and memory; and a brightness module including instructions in the memory, the instructions executable by the processor, the instructions, when executed, configure the brightness module to:
connect with a device via the docking station, the device including a sensor configured to detect a lighting condition of an environment surrounding the device;
link the docking station with a monitor;
detect the lighting condition;
in response to a change in the lighting condition, match the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables; and
adjust a brightness level of the monitor based on the monitor brightness setting. 9. The system of claim 8, the instructions further configure the brightness module to:
match the plurality of monitor brightness settings to the plurality of lighting conditions, wherein the plurality of lighting conditions are based on a plurality of field data including a plurality of user monitor brightness settings for a plurality of actual lighting conditions; and generate the plurality of brightness look-up-tables based on the plurality of field data. 10. The system of claim 8, the instructions further configure the brightness module to select one of the plurality of brightness look-up-tables based on a type of the monitor. 11. The system of claim 8, wherein the monitor brightness settings in the plurality of brightness look-up-tables are updated based on a manual change in the brightness level of the monitor made by a user of the monitor. 12. The system of claim 8, wherein adjusting the brightness level of the monitor requires a user to be logged into an information handling system communicatively coupled to the monitor. 13. The system of claim 8, wherein the lighting condition is detected by a single sensor. 14. The system of claim 8, the instructions further configure the brightness module to select the monitor brightness setting based on a device brightness setting of the device. 15. A non-transitory machine-readable medium comprising instructions stored therein, the instructions executable by one or more processors, the instructions, when read and executed for causing the processor to:
connect with a device via a dock, the device including a sensor configured to detect a lighting condition of an environment surrounding the device; link the dock with a monitor; detect the lighting condition; in response to a change in the lighting condition, match the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables; and adjust a brightness level of the monitor based on the monitor brightness setting. 16. The non-transitory machine-readable medium of claim 15, the instructions further causing the processor to:
match the plurality of monitor brightness settings to the plurality of lighting conditions, wherein the plurality of lighting conditions are based on a plurality of field data including a plurality of user monitor brightness settings for a plurality of actual lighting conditions; and generate the plurality of brightness look-up-tables based on the plurality of field data. 17. The non-transitory machine-readable medium of claim 15, the instructions further configure the processor to select one of the plurality of brightness look-up-tables based on a type of the monitor. 18. The non-transitory machine-readable medium of claim 15, wherein adjusting the brightness level of the monitor requires a user to be logged into an information handling system communicatively coupled to the monitor. 19. The non-transitory machine-readable medium of claim 15, wherein the lighting condition is detected by a single sensor. 20. The non-transitory machine-readable medium of claim 15, the instructions further causing the processor to select the monitor brightness setting based on a device brightness setting of the device. | Systems and methods for monitor brightness control are disclosed. The method includes connecting with a device via a dock, the device including a sensor configured to detect a lighting condition of an environment surrounding the device. The method further includes linking the dock with a monitor. The method further includes detecting the lighting condition. Additionally, in response to a change in the lighting condition, the method includes matching the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables and adjusting a brightness level of the monitor based on the monitor brightness setting.1. A method for monitor brightness control comprising:
connecting with a device via a dock, the device including a sensor configured to detect a lighting condition of an environment surrounding the device; linking the dock with a monitor; detecting the lighting condition; in response to a change in the lighting condition, matching the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables; and adjusting a brightness level of the monitor based on the monitor brightness setting. 2. The method of claim 1, the method further comprising:
matching the plurality of monitor brightness settings to the plurality of lighting conditions, wherein the plurality of lighting conditions are based on a plurality of field data including a plurality of user monitor brightness settings for a plurality of actual lighting conditions; and generating the plurality of brightness look-up-tables based on the plurality of field data. 3. The method of claim 1, further comprising selecting one of the plurality of brightness look-up-tables based on a type of the monitor. 4. The method of claim 1, wherein the monitor brightness settings in the plurality of brightness look-up-tables are updated based on a manual change in the brightness level of the monitor made by a user of the monitor. 5. The method of claim 1, wherein adjusting the brightness level of the monitor requires a user to be logged into an information handling system communicatively coupled to the monitor. 6. The method of claim 1, wherein the lighting condition is detected by a single sensor. 7. The method of claim 1, further comprising selecting the monitor brightness setting based on a device brightness setting of the device. 8. An information handling system comprising:
a processor; a memory communicatively coupled to the processor; a docking station communicatively coupled to the processor and memory; and a brightness module including instructions in the memory, the instructions executable by the processor, the instructions, when executed, configure the brightness module to:
connect with a device via the docking station, the device including a sensor configured to detect a lighting condition of an environment surrounding the device;
link the docking station with a monitor;
detect the lighting condition;
in response to a change in the lighting condition, match the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables; and
adjust a brightness level of the monitor based on the monitor brightness setting. 9. The system of claim 8, the instructions further configure the brightness module to:
match the plurality of monitor brightness settings to the plurality of lighting conditions, wherein the plurality of lighting conditions are based on a plurality of field data including a plurality of user monitor brightness settings for a plurality of actual lighting conditions; and generate the plurality of brightness look-up-tables based on the plurality of field data. 10. The system of claim 8, the instructions further configure the brightness module to select one of the plurality of brightness look-up-tables based on a type of the monitor. 11. The system of claim 8, wherein the monitor brightness settings in the plurality of brightness look-up-tables are updated based on a manual change in the brightness level of the monitor made by a user of the monitor. 12. The system of claim 8, wherein adjusting the brightness level of the monitor requires a user to be logged into an information handling system communicatively coupled to the monitor. 13. The system of claim 8, wherein the lighting condition is detected by a single sensor. 14. The system of claim 8, the instructions further configure the brightness module to select the monitor brightness setting based on a device brightness setting of the device. 15. A non-transitory machine-readable medium comprising instructions stored therein, the instructions executable by one or more processors, the instructions, when read and executed, causing the processor to:
connect with a device via a dock, the device including a sensor configured to detect a lighting condition of an environment surrounding the device; link the dock with a monitor; detect the lighting condition; in response to a change in the lighting condition, match the lighting condition with a monitor brightness setting in a plurality of brightness look-up-tables; and adjust a brightness level of the monitor based on the monitor brightness setting. 16. The non-transitory machine-readable medium of claim 15, the instructions further causing the processor to:
match the plurality of monitor brightness settings to the plurality of lighting conditions, wherein the plurality of lighting conditions are based on a plurality of field data including a plurality of user monitor brightness settings for a plurality of actual lighting conditions; and generate the plurality of brightness look-up-tables based on the plurality of field data. 17. The non-transitory machine-readable medium of claim 15, the instructions further configure the processor to select one of the plurality of brightness look-up-tables based on a type of the monitor. 18. The non-transitory machine-readable medium of claim 15, wherein adjusting the brightness level of the monitor requires a user to be logged into an information handling system communicatively coupled to the monitor. 19. The non-transitory machine-readable medium of claim 15, wherein the lighting condition is detected by a single sensor. 20. The non-transitory machine-readable medium of claim 15, the instructions further causing the processor to select the monitor brightness setting based on a device brightness setting of the device. | 2,600 |
10,595 | 10,595 | 16,410,532 | 2,622 | A gesture based data capture and analysis system includes one or more gesture units and an analysis unit. The gesture units are affixed to a user's hand and/or one or more fingers. The gesture units contain a gesture sensor such as one or more motion detectors (e.g., accelerometers or an image analysis unit) and communicate with an analysis unit and one or more peripherals, such as a mobile phone, camera, video recorder, audio recorder, or other analog or digital sensing unit. The gesture units sense movement of a user's arms, hands, and/or fingers as gesture data. The analysis unit interprets the gesture data and controls the capture of image, audio, video, and/or other data by the peripherals and the processing, storing, sending, networking, posting, display, and/or publishing of the captured data according to processing gestures captured by the gesture units as gesture data and interpreted as such by the analysis unit. | 1. A gesture-based data capture and analysis device, comprising:
a display; a gesture unit configured to detect one or more gestures made by a user; one or more peripherals configured to capture data in response to a gesture made by a user; and an analysis unit connected to the gesture unit configured to interpret the one or more gestures based upon information obtained from the gesture unit and to process the captured data based upon the interpretation of the one or more gestures. 2. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to create a capture file containing the captured data and to append gesture data and an instruction to the capture file. 3. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to interpret at least two gestures, one that initiates capture and one that corresponds to a user instruction as to how to process the captured data. 4. The gesture-based data capture and analysis device of claim 3 wherein the user instruction relates to one or more of the following: (1) a request to conduct speech recognition on the capture file, (2) a request to conduct optical character recognition on the capture file, (3) a request to conduct facial recognition on the capture file, or (4) a request to send the capture file to a recipient over a network. 5. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to interpret a sequence gesture. 6. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to interpret a contrasting gesture. 7. The gesture-based data capture and analysis device of claim 1 wherein the gesture unit is further configured to be affixed to the user's hand or fingers without the user having to hold the gesture unit. 8. 
The gesture-based data capture and analysis device of claim 1 wherein the gesture-based data capture and analysis device is configured to detect gestures made by a user's finger. 9. The gesture-based data capture and analysis device of claim 1 wherein the gesture unit is further configured to only detect gestures after the gesture-based data capture and analysis device receives input from a user requesting that it begin to detect gestures for a predefined period of time. 10. The gesture-based data capture and analysis device of claim 1 wherein the gesture-based data capture and analysis device is further configured to capture data in response to a gesture made by a user and process the captured data according to a user instruction without the user having to interact with a graphical user interface to initiate the capture or the processing at the time of the capture or the processing. 11. A data capture and analysis system, comprising:
a gesture sensing device configured to detect gestures made by a user's hands or fingers, wherein the gesture sensing device is carried by or affixed to a user; one or more peripherals configured to capture data in response to one or more of the gestures; a processing unit configured to interpret the gestures based upon information provided by the gesture sensing device and to store in memory the captured data in the form of an electronic file; and wherein the processing unit is further configured to alter the electronic file based upon one or more gestures detected by the gesture sensing device. 12. The data capture and analysis system of claim 11 wherein the processing unit is further configured to append to the electronic file information obtained by the gesture sensing device. 13. The data capture and analysis system of claim 12 wherein the appended information identifies the type of data captured. 14. The data capture and analysis system of claim 11 wherein the appended information identifies the type of peripheral that captured the data. 15. The data capture and analysis system of claim 11 wherein the processing unit is further configured to interpret at least two gestures, one that initiates capture and one that corresponds to an instruction as to how to process the captured data. 16. The gesture-based data capture and analysis system of claim 15 wherein the instruction includes one or more of the following: (1) to conduct speech recognition on the electronic file, (2) to conduct optical character recognition on the electronic file, (3) to conduct facial recognition on the electronic file, or (4) to send the electronic file to a recipient over a network. 17. The gesture-based data capture and analysis system of claim 11 wherein the processing unit is further configured to interpret a sequence gesture or a contrasting gesture. 18. 
The gesture-based data capture and analysis system of claim 11 wherein the gesture unit is further configured to be affixed to the user's hand or fingers without the user having to hold the gesture unit. 19. The gesture-based data capture and analysis device of claim 11 wherein the gesture-based data capture and analysis device is configured to detect gestures made by a user's finger. 20. The gesture-based data capture and analysis device of claim 11 wherein the gesture unit is further configured to only detect gestures after the gesture-based data capture and analysis device receives input from a user requesting that it begin to detect gestures for a predefined period of time. 21. The gesture-based data capture and analysis system of claim 11 wherein the gesture-based data capture and analysis system is further configured to capture data in response to a gesture made by a user and process the data according to a user instruction without the user having to interact with a graphical user interface to initiate the capture or the processing at the time of the capture or the processing. 22. A method of capturing and processing data in response to a user gesture, comprising:
detecting with a motion capture device a gesture made by a user; in response to the gesture, capturing data with a peripheral; identifying the gesture detected based upon gesture identifying information contained in a database or lookup table, wherein the gesture identifying information includes one or more numerical or alphanumerical identifiers; and storing an instruction for processing of the captured data based upon the type of gesture identified. 23. The method of claim 22 wherein a gesture detected includes a motion of the user's hands or fingers. 24. The method of claim 22 further including detecting at least two different gesture types, one gesture type instructing a first peripheral to capture image data and the other type instructing a second peripheral to capture audio data. 25. The method of claim 24 wherein the at least two different types of gestures are combined to create a third gesture type. 26. A gesture-based analysis device, comprising:
an interface for receiving from a gesture-based data capture device gesture-information related to gestures and for receiving from a peripheral device peripheral-information captured by the peripheral; and a logic unit capable of (1) interpreting the gesture-information and the peripheral-information, (2) identifying gestures based upon the gesture-information and (3) processing the peripheral-information based upon the gesture-information. 27. The gesture-based data analysis device of claim 26 wherein the logic unit is capable of creating a capture file containing the peripheral-information and appending the gesture-information and instructions to the capture file. 28. The gesture-based data analysis device of claim 26 wherein the logic unit is capable of identifying in the gesture-information at least two gestures, one that initiates capture and one that corresponds to a user instruction as to how to process the peripheral-information. 29. The gesture-based analysis device of claim 26 wherein the logic unit is capable of identifying gesture-information that relates to a sequence gesture. 30. The gesture-based analysis device of claim 26 wherein the logic unit is capable of processing the peripheral-information according to user instructions without the user having to interact with a graphical user interface to initiate processing at the time of the processing. | A gesture based data capture and analysis system includes one or more gesture units and an analysis unit. The gesture units are affixed to a user's hand and/or one or more fingers. The gesture units contain a gesture sensor such as one or more motion detectors (e.g., accelerometers or an image analysis unit) and communicate with an analysis unit and one or more peripherals, such as a mobile phone, camera, video recorder, audio recorder, or other analog or digital sensing unit. The gesture units sense movement of a user's arms, hands, and/or fingers as gesture data. 
The analysis unit interprets the gesture data and controls the capture of image, audio, video, and/or other data by the peripherals and the processing, storing, sending, networking, posting, display, and/or publishing of the captured data according to processing gestures captured by the gesture units as gesture data and interpreted as such by the analysis unit.1. A gesture-based data capture and analysis device, comprising:
a display; a gesture unit configured to detect one or more gestures made by a user; one or more peripherals configured to capture data in response to a gesture made by a user; and an analysis unit connected to the gesture unit configured to interpret the one or more gestures based upon information obtained from the gesture unit and to process the captured data based upon the interpretation of the one or more gestures. 2. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to create a capture file containing the captured data and to append gesture data and an instruction to the capture file. 3. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to interpret at least two gestures, one that initiates capture and one that corresponds to a user instruction as to how to process the captured data. 4. The gesture-based data capture and analysis device of claim 3 wherein the user instruction relates to one or more of the following: (1) a request to conduct speech recognition on the capture file, (2) a request to conduct optical character recognition on the capture file, (3) a request to conduct facial recognition on the capture file, or (4) a request to send the capture file to a recipient over a network. 5. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to interpret a sequence gesture. 6. The gesture-based data capture and analysis device of claim 1 wherein the analysis unit is further configured to interpret a contrasting gesture. 7. The gesture-based data capture and analysis device of claim 1 wherein the gesture unit is further configured to be affixed to the user's hand or fingers without the user having to hold the gesture unit. 8. 
The gesture-based data capture and analysis device of claim 1 wherein the gesture-based data capture and analysis device is configured to detect gestures made by a user's finger. 9. The gesture-based data capture and analysis device of claim 1 wherein the gesture unit is further configured to only detect gestures after the gesture-based data capture and analysis device receives input from a user requesting that it begin to detect gestures for a predefined period of time. 10. The gesture-based data capture and analysis device of claim 1 wherein the gesture-based data capture and analysis device is further configured to capture data in response to a gesture made by a user and process the captured data according to a user instruction without the user having to interact with a graphical user interface to initiate the capture or the processing at the time of the capture or the processing. 11. A data capture and analysis system, comprising:
a gesture sensing device configured to detect gestures made by a user's hands or fingers, wherein the gesture sensing device is carried by or affixed to a user; one or more peripherals configured to capture data in response to one or more of the gestures; a processing unit configured to interpret the gestures based upon information provided by the gesture sensing device and to store in memory the captured data in the form of an electronic file; and wherein the processing unit is further configured to alter the electronic file based upon one or more gestures detected by the gesture sensing device. 12. The data capture and analysis system of claim 11 wherein the processing unit is further configured to append to the electronic file information obtained by the gesture sensing device. 13. The data capture and analysis system of claim 12 wherein the appended information identifies the type of data captured. 14. The data capture and analysis system of claim 11 wherein the appended information identifies the type of peripheral that captured the data. 15. The data capture and analysis system of claim 11 wherein the processing unit is further configured to interpret at least two gestures, one that initiates capture and one that corresponds to an instruction as to how to process the captured data. 16. The gesture-based data capture and analysis system of claim 15 wherein the instruction includes one or more of the following: (1) to conduct speech recognition on the electronic file, (2) to conduct optical character recognition on the electronic file, (3) to conduct facial recognition on the electronic file, or (4) to send the electronic file to a recipient over a network. 17. The gesture-based data capture and analysis system of claim 11 wherein the processing unit is further configured to interpret a sequence gesture or a contrasting gesture. 18. 
The gesture-based data capture and analysis system of claim 11 wherein the gesture unit is further configured to be affixed to the user's hand or fingers without the user having to hold the gesture unit. 19. The gesture-based data capture and analysis device of claim 11 wherein the gesture-based data capture and analysis device is configured to detect gestures made by a user's finger. 20. The gesture-based data capture and analysis device of claim 11 wherein the gesture unit is further configured to only detect gestures after the gesture-based data capture and analysis device receives input from a user requesting that it begin to detect gestures for a predefined period of time. 21. The gesture-based data capture and analysis system of claim 11 wherein the gesture-based data capture and analysis system is further configured to capture data in response to a gesture made by a user and process the data according to a user instruction without the user having to interact with a graphical user interface to initiate the capture or the processing at the time of the capture or the processing. 22. A method of capturing and processing data in response to a user gesture, comprising:
detecting with a motion capture device a gesture made by a user; in response to the gesture, capturing data with a peripheral; identifying the gesture detected based upon gesture identifying information contained in a database or lookup table, wherein the gesture identifying information includes one or more numerical or alphanumerical identifiers; and storing an instruction for processing of the captured data based upon the type of gesture identified. 23. The method of claim 22 wherein a gesture detected includes a motion of the user's hands or fingers. 24. The method of claim 22 further including detecting at least two different gesture types, one gesture type instructing a first peripheral to capture image data and the other type instructing a second peripheral to capture audio data. 25. The method of claim 24 wherein the at least two different types of gestures are combined to create a third gesture type. 26. A gesture-based analysis device, comprising:
an interface for receiving from a gesture-based data capture device gesture-information related to gestures and for receiving from a peripheral device peripheral-information captured by the peripheral; and a logic unit capable of (1) interpreting the gesture-information and the peripheral-information, (2) identifying gestures based upon the gesture-information and (3) processing the peripheral-information based upon the gesture-information. 27. The gesture-based data analysis device of claim 26 wherein the logic unit is capable of creating a capture file containing the peripheral-information and appending the gesture-information and instructions to the capture file. 28. The gesture-based data analysis device of claim 26 wherein the logic unit is capable of identifying in the gesture-information at least two gestures, one that initiates capture and one that corresponds to a user instruction as to how to process the peripheral-information. 29. The gesture-based analysis device of claim 26 wherein the logic unit is capable of identifying gesture-information that relates to a sequence gesture. 30. The gesture-based analysis device of claim 26 wherein the logic unit is capable of processing the peripheral-information according to user instructions without the user having to interact with a graphical user interface to initiate processing at the time of the processing. | 2,600 |
10,596 | 10,596 | 16,163,270 | 2,626 | Axis orientation compensation is provided in a system in which movement of a controlling device is used to control navigational functions of a target appliance by determining which one of plural sides of the controlling device is an active side of the controlling device and by causing navigational functions of the target appliance made relative to at least one of an X, Y, and Z axis of the target appliance to be dynamically aligned with movements of the controlling device made relative to at least one of an A, B, and C axis of the controlling device as a function of the one of the plural sides of the controlling device that is determined to be the active side of the controlling device. | 1. A method for providing axis orientation compensation in a system in which movement of a controlling device is used to control navigational functions of a target appliance, comprising:
receiving by the target appliance from the controlling device a navigational command wherein the navigational command comprises data indicative of a one of a plurality of sides of the controlling device that was determined by the controlling device to be an active side of the controlling device and data indicative of a movement of the controlling device along a one of an A, B, or C axis of the controlling device; using the data in the received navigational command by the target appliance to determine whether the target appliance is to respond to the received navigational command by performing a navigational function in a one of an X, Y, or Z axis of the target appliance; and performing by the target appliance the navigational function in the determined one of the X, Y, or Z axis of the target appliance. 2. The method as recited in claim 1, wherein the controlling device comprises at least one orientation sensor and the method comprises using a signal generated by the at least one orientation sensor to determine which one of the plurality of sides of the controlling device is the active side of the controlling device. 3. The method as recited in claim 2, wherein the at least one orientation sensor comprises an accelerometer. 4. The method as recited in claim 2, comprising using the signal generated by the at least one orientation sensor to also track movements of the controlling device made relative to the A, B, or C axes of the controlling device. 5. A method for providing axis orientation compensation in a system in which movement of a controlling device is used to control navigational functions of a target appliance, comprising:
in response to a movement of the controlling device along a one of an A, B, or C axis of the controlling device, transmitting from the controlling device to the target appliance a navigational command to cause the target appliance to perform a navigational function in a one of an X, Y, or Z axis of the target appliance wherein the controlling device determines whether the target appliance is to perform the navigational function in the one of the X, Y, or Z axis of the target appliance as a function of which one of a plurality of sides of the controlling device was determined by the controlling device to be an active side of the controlling device and the one of the A, B, or C axis of the controlling device along which the controlling device was moved to cause the navigational signal to be sent to the target appliance. 6. The method as recited in claim 5, wherein the controlling device comprises at least one orientation sensor and the method comprises using a signal generated by the at least one orientation sensor to determine which one of the plurality of sides of the controlling device is the active side of the controlling device. 7. The method as recited in claim 6, wherein the at least one orientation sensor comprises an accelerometer. 8. The method as recited in claim 6, comprising using the signal generated by the at least one orientation sensor to track movements of the controlling device made relative to the A, B, or C axes of the controlling device. 
| Axis orientation compensation is provided in a system in which movement of a controlling device is used to control navigational functions of a target appliance by determining which one of plural sides of the controlling device is an active side of the controlling device and by causing navigational functions of the target appliance made relative to at least one of an X, Y, and Z axis of the target appliance to be dynamically aligned with movements of the controlling device made relative to at least one of an A, B, and C axis of the controlling device as a function of the one of the plural sides of the controlling device that is determined to be the active side of the controlling device.1. A method for providing axis orientation compensation in a system in which movement of a controlling device is used to control navigational functions of a target appliance, comprising:
receiving by the target appliance from the controlling device a navigational command wherein the navigational command comprises data indicative of a one of a plurality of sides of the controlling device that was determined by the controlling device to be an active side of the controlling device and data indicative of a movement of the controlling device along a one of an A, B, or C axis of the controlling device; using the data in the received navigational command by the target appliance to determine whether the target appliance is to respond to the received navigational command by performing a navigational function in a one of an X, Y, or Z axis of the target appliance; and performing by the target appliance the navigational function in the determined one of the X, Y, or Z axis of the target appliance. 2. The method as recited in claim 1, wherein the controlling device comprises at least one orientation sensor and the method comprises using a signal generated by the at least one orientation sensor to determine which one of the plurality of sides of the controlling device is the active side of the controlling device. 3. The method as recited in claim 2, wherein the at least one orientation sensor comprises an accelerometer. 4. The method as recited in claim 2, comprising using the signal generated by the at least one orientation sensor to also track movements of the controlling device made relative to the A, B, or C axes of the controlling device. 5. A method for providing axis orientation compensation in a system in which movement of a controlling device is used to control navigational functions of a target appliance, comprising:
in response to a movement of the controlling device along a one of an A, B, or C axis of the controlling device, transmitting from the controlling device to the target appliance a navigational command to cause the target appliance to perform a navigational function in a one of an X, Y, or Z axis of the target appliance wherein the controlling device determines whether the target appliance is to perform the navigational function in the one of the X, Y, or Z axis of the target appliance as a function of which one of a plurality of sides of the controlling device was determined by the controlling device to be an active side of the controlling device and the one of the A, B, or C axis of the controlling device along which the controlling device was moved to cause the navigational signal to be sent to the target appliance. 6. The method as recited in claim 5, wherein the controlling device comprises at least one orientation sensor and the method comprises using a signal generated by the at least one orientation sensor to determine which one of the plurality of sides of the controlling device is the active side of the controlling device. 7. The method as recited in claim 6, wherein the at least one orientation sensor comprises an accelerometer. 8. The method as recited in claim 6, comprising using the signal generated by the at least one orientation sensor to track movements of the controlling device made relative to the A, B, or C axes of the controlling device.
10,597 | 10,597 | 15,684,412 | 2,632 | The warning device ( 12 ) includes a warning light mechanism ( 16 ), that is configured so as to emit a flashing strobe light ( 20 ) that flashes at a frequency that is higher than 10 Hz. | 1. A warning device for an urban public transport vehicle, that includes a warning light mechanism, wherein the warning light mechanism is configured so as to emit a flashing strobe light that flashes at a frequency that is higher than 10 Hz. 2. A warning device according to claim 1, wherein the warning light mechanism is configured in order to emit light rays in a downward direction when it is mounted on the public transport vehicle. 3. A warning device according to claim 1, wherein the light emitted by the warning light mechanism is of an intensity that is greater than 900 lux at 1 metre. 4. A warning device according to claim 1, wherein the light emitted by the warning light mechanism flashes with a duty cycle comprised between 45% and 55%. 5. A warning device according to claim 1, wherein the light emitted by the warning light mechanism flashes with a duty cycle that is equal to 50%. 6. A warning device according to claim 1, wherein the warning light mechanism comprises at least one xenon flash lamp. 7. A warning device according to claim 1, wherein the warning light mechanism comprises at least one high power light emitting diode. 8. A warning device according to claim 1, that includes an audible sound warning mechanism, and the simultaneous activation means for simultaneously activating the audible sound warning mechanism and the warning light mechanism. 9. An urban public transport vehicle, wherein it includes a warning device that includes a warning light mechanism, wherein the warning light mechanism is configured so as to emit a flashing strobe light that flashes at a frequency that is higher than 10 Hz. 
| The warning device ( 12 ) includes a warning light mechanism ( 16 ), that is configured so as to emit a flashing strobe light ( 20 ) that flashes at a frequency that is higher than 10 Hz.1. A warning device for an urban public transport vehicle, that includes a warning light mechanism, wherein the warning light mechanism is configured so as to emit a flashing strobe light that flashes at a frequency that is higher than 10 Hz. 2. A warning device according to claim 1, wherein the warning light mechanism is configured in order to emit light rays in a downward direction when it is mounted on the public transport vehicle. 3. A warning device according to claim 1, wherein the light emitted by the warning light mechanism is of an intensity that is greater than 900 lux at 1 metre. 4. A warning device according to claim 1, wherein the light emitted by the warning light mechanism flashes with a duty cycle comprised between 45% and 55%. 5. A warning device according to claim 1, wherein the light emitted by the warning light mechanism flashes with a duty cycle that is equal to 50%. 6. A warning device according to claim 1, wherein the warning light mechanism comprises at least one xenon flash lamp. 7. A warning device according to claim 1, wherein the warning light mechanism comprises at least one high power light emitting diode. 8. A warning device according to claim 1, that includes an audible sound warning mechanism, and the simultaneous activation means for simultaneously activating the audible sound warning mechanism and the warning light mechanism. 9. An urban public transport vehicle, wherein it includes a warning device that includes a warning light mechanism, wherein the warning light mechanism is configured so as to emit a flashing strobe light that flashes at a frequency that is higher than 10 Hz. | 2,600 |
10,598 | 10,598 | 15,090,911 | 2,624 | Two extended embedded Display Port displays may be enabled by using a single set of panel power sequencing (PPS) signals from a chipset to enable the two embedded Display Port panels. To enhance the user experience, the backlight module brightness is controlled by making use of a pin available on a system on a chip (SOC) and modification of drivers. This helps to save power when only one panel is used. When both panels are used simultaneously, power savings can be achieved by using backlight control signals. | 1. A method comprising:
operating two Display Port panels directly from one chipset by sharing backlight and power enable signals from the chipset with both panels; and causing a backlight to one of said panels to be turned off. 2. The method of claim 1 wherein turning off the backlight of one panel includes setting its duty cycle to zero. 3. The method of claim 1 wherein turning off the backlight of one panel indicates operating a switch to turn off said backlight. 4. The method of claim 1 including successively link training each panel. 5. The method of claim 2 including increasing the duty cycle when the one panel is used for display. 6. The method of claim 1 including selectively enabling simultaneous display on both panels or display on either but not both of said panels. 7. The method of claim 1 wherein the chipset produces only one backlight brightness control signal, using said signal for only one panel. 8. The method of claim 7 including using a pulse width modulated signal from said chipset as a backlight control signal for the other panel. 9. One or more non-transitory computer readable media storing instructions executed by a hardware processor to perform a sequence comprising:
operating two Display Port panels directly from one chipset by sharing backlight and power enable signals from the chipset with both panels; and causing a backlight to one of said panels to be turned off. 10. The media of claim 9 wherein turning off the backlight of one panel includes setting its duty cycle to zero. 11. The media of claim 9 wherein turning off the backlight of one panel indicates operating a switch to turn off said backlight. 12. The media of claim 9 wherein said sequence includes successively link training each panel. 13. The media of claim 10 wherein said sequence includes increasing the duty cycle when the one panel is used for display. 14. The media of claim 9 wherein said sequence includes selectively enabling simultaneous display on both panels or display on either but not both of said panels. 15. The media of claim 9 wherein the chipset produces only one backlight brightness control signal, using said signal for only one panel. 16. The media of claim 15 wherein said sequence includes using a pulse width modulated signal from said chipset as a backlight control signal for the other panel. 17. An apparatus comprising:
a chipset to directly operate two Display Port panels by sharing backlight and power enable signals from the chipset with both panels, and causing the backlight to one of said panels to be turned off; and a storage coupled to said chipset. 18. The apparatus of claim 17 wherein turning off the backlight of one panel includes setting its duty cycle to zero. 19. The apparatus of claim 17 wherein turning off the backlight of one panel indicates operating a switch to turn off said backlight. 20. The apparatus of claim 17, said chipset to successively link train each panel. 21. The apparatus of claim 18, said chipset to increase the duty cycle when the one panel is used for display. 22. The apparatus of claim 17, said chipset to selectively enable simultaneous display on both panels or display on either but not both of said panels. 23. The apparatus of claim 17 wherein the chipset produces only one backlight brightness control signal, using said signal for only one panel. 24. The apparatus of claim 23, said chipset to use a pulse width modulated signal from said chipset as a backlight control signal for the other panel. 25. The apparatus of claim 17 including a pair of display panels, one of which is embedded. | Two extended embedded Display Port displays may be enabled by using a single set of panel power sequencing (PPS) signals from a chipset to enable the two embedded Display Port panels. To enhance the user experience, the backlight module brightness is controlled by making use of a pin available on a system on a chip (SOC) and modification of drivers. This helps to save power when only one panel is used. When both panels are used simultaneously, power savings can be achieved by using backlight control signals.1. A method comprising:
operating two Display Port panels directly from one chipset by sharing backlight and power enable signals from the chipset with both panels; and causing a backlight to one of said panels to be turned off. 2. The method of claim 1 wherein turning off the backlight of one panel includes setting its duty cycle to zero. 3. The method of claim 1 wherein turning off the backlight of one panel indicates operating a switch to turn off said backlight. 4. The method of claim 1 including successively link training each panel. 5. The method of claim 2 including increasing the duty cycle when the one panel is used for display. 6. The method of claim 1 including selectively enabling simultaneous display on both panels or display on either but not both of said panels. 7. The method of claim 1 wherein the chipset produces only one backlight brightness control signal, using said signal for only one panel. 8. The method of claim 7 including using a pulse width modulated signal from said chipset as a backlight control signal for the other panel. 9. One or more non-transitory computer readable media storing instructions executed by a hardware processor to perform a sequence comprising:
operating two Display Port panels directly from one chipset by sharing backlight and power enable signals from the chipset with both panels; and causing a backlight to one of said panels to be turned off. 10. The media of claim 9 wherein turning off the backlight of one panel includes setting its duty cycle to zero. 11. The media of claim 9 wherein turning off the backlight of one panel indicates operating a switch to turn off said backlight. 12. The media of claim 9 wherein said sequence includes successively link training each panel. 13. The media of claim 10 wherein said sequence includes increasing the duty cycle when the one panel is used for display. 14. The media of claim 9 wherein said sequence includes selectively enabling simultaneous display on both panels or display on either but not both of said panels. 15. The media of claim 9 wherein the chipset produces only one backlight brightness control signal, using said signal for only one panel. 16. The media of claim 15 wherein said sequence includes using a pulse width modulated signal from said chipset as a backlight control signal for the other panel. 17. An apparatus comprising:
a chipset to directly operate two Display Port panels by sharing backlight and power enable signals from the chipset with both panels, and causing the backlight to one of said panels to be turned off; and a storage coupled to said chipset. 18. The apparatus of claim 17 wherein turning off the backlight of one panel includes setting its duty cycle to zero. 19. The apparatus of claim 17 wherein turning off the backlight of one panel indicates operating a switch to turn off said backlight. 20. The apparatus of claim 17, said chipset to successively link train each panel. 21. The apparatus of claim 18, said chipset to increase the duty cycle when the one panel is used for display. 22. The apparatus of claim 17, said chipset to selectively enable simultaneous display on both panels or display on either but not both of said panels. 23. The apparatus of claim 17 wherein the chipset produces only one backlight brightness control signal, using said signal for only one panel. 24. The apparatus of claim 23, said chipset to use a pulse width modulated signal from said chipset as a backlight control signal for the other panel. 25. The apparatus of claim 17 including a pair of display panels, one of which is embedded. | 2,600 |
10,599 | 10,599 | 15,974,904 | 2,623 | A display device (e.g., in a contact lens) is mounted on the eye. The eye mounted display contains multiple sub-displays, each of which projects light to different retinal positions within a portion of the retina corresponding to the sub-display. Additionally, a “locally uniform resolution” mapping may be used to model the variable resolution of the eye. Accordingly, various aspects of the display device may be based on the locally uniform resolution mapping. For example, the light emitted from the sub-displays may be based on the locally uniform resolution mapping. | 1. An eye mounted display, comprising:
a contact lens; and a plurality of image projectors mounted in the contact lens, each image projector comprising:
a display element; and
non-folded optics that project an image from the display element onto a retina of a user wearing the contact lens, where the image projector has an in-line optical design. 2. The eye-mounted display of claim 1 where the image projector has a thickness of not more than 0.5 mm. 3. The eye-mounted display of claim 1 where the image projector has a volume of not more than 0.5 mm×0.5 mm×0.5 mm. 4. The eye-mounted display of claim 1 where the non-folded optics are constructed from at least two layers, each layer including a portion of the non-folded optics for more than one of the image projectors. 5. The eye-mounted display of claim 1 where the image projectors are cylindrically shaped and inserted into the contact lens. 6. The eye-mounted display of claim 1 where the image projectors comprise at least two different optical designs. 7. The eye-mounted display of claim 1 where the plurality of image projectors are peripheral image projectors, and the eye-mounted display further comprises one or more foveal image projectors, the foveal and peripheral image projectors cooperating to tile the retina with projected images. 8. The eye-mounted display of claim 1 where the images projected by the image projectors onto the retina are tiled according to a hexagonal tiling. 9. The eye-mounted display of claim 1 where the plurality of image projectors are foveal image projectors, and the eye-mounted display further comprises multiple peripheral image projectors, the foveal and peripheral image projectors cooperating to tile the retina with projected images. 10. The eye-mounted display of claim 1 where the non-folded optics has an optical axis that is perpendicular to a curvature of the contact lens. 11. The eye-mounted display of claim 1 where the non-folded optics comprise one or more refractive lens elements. 12. The eye-mounted display of claim 11 where the refractive lens element is injection molded. 13. 
The eye-mounted display of claim 11 where the image projector further comprises a housing in which the display element is mounted. 14. The eye-mounted display of claim 13 where the housing is a blackened housing. 15. The eye-mounted display of claim 11 where the non-folded optics further comprises an exit window. 16. The eye-mounted display of claim 11 where the refractive lens element is formed from one of diamond, silicon nitride, titanium dioxide, quartz and sapphire. 17. The eye-mounted display of claim 1 where the non-folded optics comprises a negative lens element and a positive lens element. 18. The eye-mounted display of claim 1 where the display element comprises a wafer thinned die. 19. The eye-mounted display of claim 1 where the display element comprises a wafer thinned die with a thickness of not more than 20 microns. 20. An eye mounted display, comprising:
a contact lens; and an image projector mounted in the contact lens, the image projector comprising:
a display element; and
non-folded optics that project an image from the display element onto a retina of a user wearing the contact lens, where the image projector has an in-line optical design. | A display device (e.g., in a contact lens) is mounted on the eye. The eye mounted display contains multiple sub-displays, each of which projects light to different retinal positions within a portion of the retina corresponding to the sub-display. Additionally, a “locally uniform resolution” mapping may be used to model the variable resolution of the eye. Accordingly, various aspects of the display device may be based on the locally uniform resolution mapping. For example, the light emitted from the sub-displays may be based on the locally uniform resolution mapping.1. An eye mounted display, comprising:
a contact lens; and a plurality of image projectors mounted in the contact lens, each image projector comprising:
a display element; and
non-folded optics that project an image from the display element onto a retina of a user wearing the contact lens, where the image projector has an in-line optical design. 2. The eye-mounted display of claim 1 where the image projector has a thickness of not more than 0.5 mm. 3. The eye-mounted display of claim 1 where the image projector has a volume of not more than 0.5 mm×0.5 mm×0.5 mm. 4. The eye-mounted display of claim 1 where the non-folded optics are constructed from at least two layers, each layer including a portion of the non-folded optics for more than one of the image projectors. 5. The eye-mounted display of claim 1 where the image projectors are cylindrically shaped and inserted into the contact lens. 6. The eye-mounted display of claim 1 where the image projectors comprise at least two different optical designs. 7. The eye-mounted display of claim 1 where the plurality of image projectors are peripheral image projectors, and the eye-mounted display further comprises one or more foveal image projectors, the foveal and peripheral image projectors cooperating to tile the retina with projected images. 8. The eye-mounted display of claim 1 where the images projected by the image projectors onto the retina are tiled according to a hexagonal tiling. 9. The eye-mounted display of claim 1 where the plurality of image projectors are foveal image projectors, and the eye-mounted display further comprises multiple peripheral image projectors, the foveal and peripheral image projectors cooperating to tile the retina with projected images. 10. The eye-mounted display of claim 1 where the non-folded optics has an optical axis that is perpendicular to a curvature of the contact lens. 11. The eye-mounted display of claim 1 where the non-folded optics comprise one or more refractive lens elements. 12. The eye-mounted display of claim 11 where the refractive lens element is injection molded. 13. 
The eye-mounted display of claim 11 where the image projector further comprises a housing in which the display element is mounted. 14. The eye-mounted display of claim 13 where the housing is a blackened housing. 15. The eye-mounted display of claim 11 where the non-folded optics further comprises an exit window. 16. The eye-mounted display of claim 11 where the refractive lens element is formed from one of diamond, silicon nitride, titanium dioxide, quartz and sapphire. 17. The eye-mounted display of claim 1 where the non-folded optics comprises a negative lens element and a positive lens element. 18. The eye-mounted display of claim 1 where the display element comprises a wafer thinned die. 19. The eye-mounted display of claim 1 where the display element comprises a wafer thinned die with a thickness of not more than 20 microns. 20. An eye mounted display, comprising:
a contact lens; and an image projector mounted in the contact lens, the image projector comprising:
a display element; and
non-folded optics that project an image from the display element onto a retina of a user wearing the contact lens, where the image projector has an in-line optical design. | 2,600 |