Dataset schema (for string columns, Min/Max are string lengths):

Column             Type           Min     Max
Unnamed: 0         int64          0       350k
level_0            int64          0       351k
ApplicationNumber  int64          9.75M   96.1M
ArtUnit            int64          1.6k    3.99k
Abstract           stringlengths  1       8.37k
Claims             stringlengths  3       292k
abstract-claims    stringlengths  68      293k
TechCenter         int64          1.6k    3.9k
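The listing above is a dataframe schema (column name, dtype, then the min/max value or string length). A minimal pandas sketch with one made-up row shows the same shape and how the abstract-claims column relates to the other two text columns; all values below are stand-ins, not real dataset contents:

```python
import pandas as pd

# One made-up row matching the schema above; the text values are truncated
# stand-ins, not actual dataset rows.
row = {
    "Unnamed: 0": 10300,
    "level_0": 10300,
    "ApplicationNumber": 14704971,
    "ArtUnit": 2616,
    "Abstract": "One embodiment is directed to a user display device ...",
    "Claims": "1. A method of operation in a virtual image presentation system ...",
    "TechCenter": 2600,
}
df = pd.DataFrame([row])

# "abstract-claims" appears to be Abstract and Claims concatenated, which is
# why its length range (68 to 293k) tracks the sum of the other two columns.
df["abstract-claims"] = df["Abstract"] + df["Claims"]

print(df.dtypes["ApplicationNumber"])  # int64
print(len(df["abstract-claims"].iloc[0])
      == len(df["Abstract"].iloc[0]) + len(df["Claims"].iloc[0]))  # True
```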
Unnamed: 0: 10,300
level_0: 10,300
ApplicationNumber: 14,704,971
ArtUnit: 2,616
Abstract:
One embodiment is directed to a user display device comprising a housing frame mountable on the head of the user, a lens mountable on the housing frame and a projection sub system coupled to the housing frame to determine a location of appearance of a display object in a field of view of the user based at least in part on at least one of a detection of a head movement of the user and a prediction of a head movement of the user, and to project the display object to the user based on the determined location of appearance of the display object.
Claims:
1. A method of operation in a virtual image presentation system, the method comprising: rendering a first complete frame having a first field and a second field to an image buffer, wherein the first field includes at least a first spiral scan line and the second field includes at least a second spiral scan line, the second spiral scan line interlaced with at least the first spiral scan line; reading out of the frame buffer which stores the first complete frame; and dynamically interrupting the reading out of the first complete frame before completion of the reading of the first complete frame by a reading out of an update to the first complete frame in which a portion of the pixel information has changed from the first complete frame.
2. The method of claim 1, wherein the dynamic interruption of the reading out is based at least in part on a detected head movement of an end user, wherein the detected head movement exceeds a nominal head movement value.
3. The method of claim 1, further comprising substituting an updated second spiral scan line for the second spiral scan line of the first complete frame.
4. The method of claim 1, further comprising: phase shifting the second spiral scan line with respect to the first spiral scan line to interlace the first and the second spiral scan lines.
5. The method of claim 1, further comprising: phase shifting a third spiral scan line with respect to the second spiral scan line to interlace the first, the second and the third spiral scan lines.
6. The method of claim 1, further comprising: phase shifting a fourth spiral scan line with respect to the third spiral scan line to interlace the first, the second, the third, and the fourth spiral scan lines.
7. The method of claim 1, wherein the portion comprises a trace of the spiral scan line.
8. The method of claim 1, wherein the dynamic interruption occurs during a presentation of a field.
9. The method of claim 1, wherein the dynamic interruption occurs during presentation of a line.
10. The method of claim 1, wherein the dynamic interruption occurs during a complete cycle of a spiral scan line.
TechCenter: 2,600
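A hypothetical sketch of the readout logic in claims 1-3 above: fields of a rendered frame are read out in order, and when detected head movement exceeds a nominal value, the readout is dynamically interrupted by substituting an updated scan line for one not yet presented. All names and the threshold value are illustrative assumptions, not from the patent:

```python
NOMINAL_HEAD_MOVEMENT = 5.0  # assumed threshold, e.g. degrees per second

def read_out(fields, head_movement, updates):
    """Read out frame fields in order, substituting updated fields mid-readout
    when the detected head movement exceeds the nominal value (claims 2-3)."""
    presented = []
    for field in fields:
        if head_movement > NOMINAL_HEAD_MOVEMENT and field in updates:
            presented.append(updates[field])  # dynamic interruption: present the update
        else:
            presented.append(field)           # normal readout of the stored frame
    return presented

# Slow head movement: the first complete frame is presented unchanged.
print(read_out(["first field", "second field"], 1.0,
               {"second field": "updated second field"}))
# Fast head movement: the updated second field replaces the stored one.
print(read_out(["first field", "second field"], 9.0,
               {"second field": "updated second field"}))
```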
Unnamed: 0: 10,301
level_0: 10,301
ApplicationNumber: 15,739,799
ArtUnit: 2,674
Abstract:
The invention provides an augmented reality device and method for assisting a user in choosing appropriate luminaire fixtures to install within their home. A user may point the camera of a mobile device toward the region or location in a room where a new luminaire is desired, and based upon data generated by an orientation determination means included within the device, an appropriate luminaire or luminaire category is selected automatically for the user from a stored catalogue or database. Once an appropriate luminaire has been chosen, it is inserted within an image captured by the camera to generate an augmented reality image depicting the luminaire fixture in place within the user's room.
Claims:
1. A mobile device, comprising: a display panel; a camera adapted to capture an image; an orientation determination sensor; and a processor, the processor adapted to: receive orientation data from the orientation determination sensor, the orientation data indicating a determined orientation of the mobile device, select a luminaire category from a set of luminaire categories based on the determined orientation, each luminaire category associated with at least one orientation, select a virtual image of a luminaire fixture of the selected luminaire category from among a stored set of virtual luminaire images, generate an augmented reality image by combining the selected virtual image of a luminaire fixture of the selected luminaire category with the image captured by the camera, and control the display panel to display the generated augmented reality image.
2. A mobile device as claimed in claim 1, wherein the orientation determination sensor comprises an inertial sensor.
3. A mobile device as claimed in claim 1, wherein the processor is adapted to generate the augmented reality image by overlaying the virtual image of a luminaire fixture at a target location within the camera image.
4. A mobile device as claimed in claim 1, wherein the mobile device further comprises a user input element adapted to generate output signals in response to user input commands, and wherein the processor is adapted to select a virtual image of a luminaire fixture by: selecting a subset of virtual luminaire fixture images from among the stored set of luminaire fixture images on the basis of data generated by the orientation determination sensor, and selecting a virtual image of a luminaire fixture from among the selected subset on the basis of output signals from the user input element.
5. A mobile device as claimed in claim 4, wherein the selected subset corresponds to a particular category of possible mounting location for a luminaire fixture within a room.
6. A mobile device as claimed in claim 5, wherein the category of possible mounting location corresponds to a particular elevation category within a room.
7. A mobile device as claimed in claim 4, wherein the processor is adapted to control the display panel to display a subgroup of the selected subset of virtual luminaire fixture images.
8. A mobile device as claimed in claim 1, wherein the stored virtual images of luminaire fixtures are stored externally to the mobile device, and the processor is adapted to generate the augmented reality image by downloading the selected virtual image from said externally stored virtual images.
9. A method of visualizing luminaire fixtures within a room using a mobile device, the mobile device comprising: a display panel; a camera; and an orientation determination sensor, the method comprising: capturing an image using the camera; capturing orientation data generated by the orientation determination sensor, the orientation data indicating a determined orientation of the mobile device; selecting a luminaire category from a set of luminaire categories based on the determined orientation, each luminaire category associated with at least one orientation, selecting a virtual image of a luminaire fixture of the selected luminaire category from among a stored set of virtual images at least partly on the basis of the data captured from the orientation determination sensor; generating an augmented reality image by combining the selected virtual image of a luminaire of the selected luminaire category with the image captured by the camera, and controlling the display panel to display the generated augmented reality image.
10. A method as claimed in claim 9, wherein generating the augmented reality image comprises overlaying the virtual image of a luminaire fixture at a target location within the camera image.
11. A method as claimed in claim 9, wherein the mobile device further comprises a user input element adapted to generate output signals in response to user input commands, and wherein the selecting a virtual image of a luminaire fixture comprises: selecting a subset of virtual luminaire fixture images from among the stored set of luminaire fixture images on the basis of data captured from the orientation determination sensor, and selecting a virtual image of a luminaire fixture from among the selected subset on the basis of output signals from the user input element.
12. A method as claimed in claim 11, further comprising controlling the display panel to display a subgroup of the selected subset of virtual luminaire images and determining the members of the subgroup which the control panel is controlled to display at least partly on the basis of output signals from the user input element.
13. A computer program comprising computer program code which is adapted, when run on a computer, to perform the steps of claim 9.
TechCenter: 2,600
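An illustrative sketch of claim 1's orientation-based selection above: the device's pitch, from an orientation sensor, picks a luminaire category (claims 5-6 tie categories to mounting elevation), and a fixture image from that category is composited into the camera image. The category names, cutoff angles, and catalogue entries are assumptions, not from the patent:

```python
# Hypothetical stored catalogue of fixture images, keyed by elevation category.
LUMINAIRE_CATALOGUE = {
    "ceiling": ["pendant.png", "chandelier.png"],
    "wall":    ["sconce.png"],
    "floor":   ["floor_lamp.png"],
}

def select_category(pitch_degrees):
    """Map device pitch to an elevation category of mounting location."""
    if pitch_degrees > 30:
        return "ceiling"   # camera pointed upward
    if pitch_degrees < -30:
        return "floor"     # camera pointed downward
    return "wall"          # roughly level

def augmented_image(camera_image, pitch_degrees):
    """Combine a fixture image from the selected category with the camera image."""
    category = select_category(pitch_degrees)
    # Take the first fixture; a real UI would let the user browse the subset (claim 4).
    fixture = LUMINAIRE_CATALOGUE[category][0]
    return f"{camera_image}+{fixture}"  # stand-in for compositing the overlay

print(select_category(60))              # ceiling
print(augmented_image("room.jpg", 60))  # room.jpg+pendant.png
```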
Unnamed: 0: 10,302
level_0: 10,302
ApplicationNumber: 14,790,271
ArtUnit: 2,625
Abstract:
Methods, systems and computer program products for controlling display of different types of medical images and providing touchscreen interfaces for display on a mobile communication device and associated with different image types, e.g., different imaging modalities or different view modes. Detection of a multi-finger tap on the screen of the mobile communication device while viewing a first touchscreen interface for an image type invokes a second or auxiliary touchscreen interface for that image type having a subset of interface elements of the first touchscreen interface.
Claims:
1-19. (canceled)
20. A computer-implemented method for controlling display of medical images on a medical image review workstation using a mobile communication device operatively coupled to the workstation, the method comprising: the mobile communication device displaying or invoking for display on a screen thereof a user interface for controlling display of a medical image on the review workstation, the user interface comprising a plurality of user interface elements displayed on the mobile communication device; the mobile communication device detecting selection of a user interface element of the displayed plurality; the mobile communication device sending an instruction to the review workstation to cause a change in the display of the medical image based on the selected user interface element.
21. The method of claim 20, wherein the user interface is customizable to control display of medical images generated by different imaging modalities.
22. The method of claim 20, wherein the user interface is customizable to control display of medical images generated by different view modes.
23. The method of claim 20, wherein the user interface is customizable to control display of medical images generated by different imaging modalities and view modes.
24. The method of claim 20, wherein the user interface is customizable to control display of medical images generated by different imaging devices.
25. The method of claim 24, wherein the different imaging devices include a first imaging device made by a first manufacturer, and a second imaging device made by a second manufacturer different from the first manufacturer.
26. The method of claim 24, wherein the different imaging devices include a first imaging device having a first interface type displayed on the workstation, and a second imaging device having a second interface type displayed on the workstation, the first interface type being different from the second interface type.
27. The method of claim 20, further comprising the mobile communication device automatically customizing the user interface based on the displayed medical image.
28. The method of claim 27, wherein the user interface is automatically customized by the mobile communication device based on a view mode of the displayed medical image.
29. The method of claim 27, wherein the user interface is automatically customized by the mobile communication device based on an imaging modality of the displayed medical image.
30. The method of claim 27, wherein automatically customizing the user interface comprises changing the plurality of interface elements displayed on the mobile communication device.
31. The method of claim 30, wherein changing the plurality of user interface elements displayed on the mobile communication device comprises changing a characteristic selected from the group consisting of number, shape and spatial arrangement of the displayed user interface elements.
32. The method of claim 27, further comprising the mobile communication device automatically customizing the user interface in real time based on the displayed medical image.
33. A computer program product comprising a non-transitory computer readable storage medium having stored thereupon a sequence of instructions which, when executed by a computer, causes the computer to perform a process for controlling display of medical images on a review workstation using a mobile communication device operatively coupled to the review workstation, the process comprising: the mobile communication device displaying or invoking for display, on a screen thereof, a user interface for controlling display of a medical image on the review workstation, the user interface comprising a plurality of user interface elements displayed on the mobile communication device; the mobile communication device detecting selection of a user interface element of the displayed plurality; the mobile communication device sending an instruction to the review workstation, the instruction to cause a change in the display of the medical image based on the selected user interface element.
TechCenter: 2,600
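A hedged sketch of claim 20's control flow above: the mobile device shows a set of interface elements (customized per imaging modality, claim 21), detects a selection, and sends an instruction to the review workstation to change the displayed image. The modality names, element names, and message format are illustrative assumptions:

```python
# Hypothetical per-modality interfaces (claim 21: customizable per modality).
UI_ELEMENTS_BY_MODALITY = {
    "mammography": ["zoom", "invert", "next_view"],
    "ultrasound":  ["zoom", "cine_play", "freeze"],
}

def build_instruction(modality, selected_element):
    """Return the instruction the mobile device would send to the workstation
    after detecting selection of a displayed interface element (claim 20)."""
    elements = UI_ELEMENTS_BY_MODALITY[modality]
    if selected_element not in elements:
        raise ValueError(f"{selected_element!r} is not in the {modality} interface")
    # Assumed message format; the instruction causes a change in the displayed image.
    return {"action": selected_element, "target": "displayed_image"}

print(build_instruction("ultrasound", "cine_play"))
# {'action': 'cine_play', 'target': 'displayed_image'}
```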
Unnamed: 0: 10,303
level_0: 10,303
ApplicationNumber: 15,621,966
ArtUnit: 2,621
Abstract:
An electronic device is configured to provide localized haptic feedback to a user on one or more regions or sections of a surface of the electronic device. The localized haptic feedback is provided by an array of piezoelectric haptic actuators below the surface of the electronic device. Actuators within the array of piezoelectric haptic actuators are separately controllable by a control circuit layer. The control circuit layer includes control circuitry, a master flexible circuit which passes between rows of actuators, and an array of slave flexible circuits. Each slave flexible circuit is connected to the master flexible circuit and an actuator. In further examples, the array of piezoelectric haptic actuators provides a unified structure for detecting touch and force inputs.
1. An electronic device, comprising: a cover sheet; a display positioned below the cover sheet; a chassis positioned below the display; an array of piezoelectric actuators positioned below and coupled to the chassis; and a flexible circuit assembly electrically coupled to each of the array of piezoelectric actuators, comprising: a master flexible circuit positioned along a row of piezoelectric actuators; a first slave flexible circuit electrically coupled to the master flexible circuit and electrically coupled to a first of the array of piezoelectric actuators; a second slave flexible circuit electrically coupled to the master flexible circuit and electrically coupled to a second of the array of piezoelectric actuators. 2. The electronic device of claim 1, further comprising control circuitry electrically coupled to the master flexible circuit and configured to generate control signals to selectively actuate the array of piezoelectric actuators. 3. The electronic device of claim 1, wherein each of the array of piezoelectric actuators comprises: a piezoelectric substrate having a first surface and a second surface parallel to the first surface; a first electrode formed on the first surface; and a second electrode formed on the second surface and a portion of the first surface. 4. The electronic device of claim 3, wherein the first electrode and the second electrode are formed by at least one of vapor deposition, sputtering, printing, and roll-to-roll processing. 5. The electronic device of claim 3, wherein the first electrode and second electrode are formed by plating the piezoelectric substrate with nickel. 6. The electronic device of claim 1, wherein the master flexible circuit, the first slave flexible circuit, and the second slave flexible circuit each comprise: a flexible substrate; and one or more conducting traces formed on or within the flexible substrate. 7. 
The electronic device of claim 6, wherein the flexible substrate comprises at least one of polyimide and polyethylene terephthalate. 8. The electronic device of claim 6, wherein the one or more conducting traces comprise at least one of silver, copper, constantan, and karma. 9. The electronic device of claim 6, wherein the first slave flexible circuit comprises a first conductive trace coupled to a control signal and a second conductive trace coupled to a reference voltage. 10. The electronic device of claim 6, wherein: the first slave flexible circuit comprises a first conductive trace coupled to a control signal; and the chassis is coupled to a reference voltage. 11. A haptic actuator module, comprising: a piezoelectric substrate defining a top surface and an opposing bottom surface; a top electrode coupled to the top surface; a bottom electrode coupled to the bottom surface; and a control system, comprising: a control circuit configured to generate control signals to induce a voltage across the piezoelectric substrate and cause the piezoelectric substrate to compress along a direction; a flexible circuit electrically connected to the control circuit and at least one of the top electrode and the bottom electrode, comprising: a master control flex connected to the control circuit; a first slave control flex connected to the master flex and at least one of the top electrode and the bottom electrode; a second slave control flex connected to another haptic actuator. 12. The haptic actuator module of claim 11, wherein: a portion of the bottom electrode is deposited on the top surface of the piezoelectric substrate; and the first slave control flex is connected to the top electrode and the bottom electrode at the top surface of the piezoelectric substrate. 13. The haptic actuator module of claim 12, wherein the first slave control flex is coupled to the top electrode and the portion of the bottom electrode by an anisotropic conductive film. 14. 
The haptic actuator module of claim 11, wherein: the top electrode is electrically connected to a support structure; the support structure is biased with a reference voltage level; and the first slave control flex is coupled to the bottom electrode and configured to provide a control signal to the bottom electrode. 15. The haptic actuator module of claim 14, wherein the support structure is coupled to the top electrode by an isotropic conductive film. 16. The haptic actuator module of claim 11, wherein: the first slave control flex is split at an end into a first portion and a second portion; the first portion is coupled to the top electrode and configured to provide a control signal to the top electrode; and the second portion is coupled to the bottom electrode and configured to provide a reference voltage level to the bottom electrode. 17. The haptic actuator module of claim 16, wherein the first portion is coupled to the top electrode by a first isotropic conductive film and the second portion is coupled to the bottom electrode by a second isotropic conductive film. 18. 
A method for connecting an array of piezoelectric haptic actuators to control circuitry, the method comprising: applying an electrically conductive bonding agent to a master flex member; aligning a first slave flex member with the master flex member; aligning a second slave flex member with the master flex member; bonding the first slave flex member and the second slave flex member with the master flex member; wherein: the master flex member is positioned along a row of slave flex members including the first slave flex member and the second slave flex member; the master flex member is configured to provide a first control signal to the first slave flex member and a second control signal to the second slave flex member; the first slave flex member is configured to provide the first control signal to a first piezoelectric haptic actuator; and the second slave flex member is configured to provide the second control signal to a second piezoelectric haptic actuator. 19. The method of claim 18, wherein: the electrically conductive bonding agent comprises a solder paste; and the bonding the first slave flex member and the second slave flex member with the master flex member comprises heating the first slave flex member, the second slave flex member, and the master flex member. 20. The method of claim 19, wherein the heating the first slave flex member, the second slave flex member, and the master flex member comprises heating in a reflow oven. 21. The method of claim 18, wherein: the electrically conductive bonding agent is a first electrically conductive bonding agent; and the method further comprises: applying a second electrically conductive bonding agent to the first slave flex member; and bonding the first piezoelectric haptic actuator to the first slave flex member. 22. 
The method of claim 21, wherein: the second electrically conductive bonding agent comprises an anisotropic conductive film; and the bonding the first piezoelectric haptic actuator to the first slave flex member comprises placing the first piezoelectric haptic actuator on the second electrically conductive bonding agent. 23. The method of claim 18, further comprising forming the first piezoelectric haptic actuator by: depositing a first electrode on a first side of a piezoelectric substrate; and depositing a second electrode on a second side of the piezoelectric substrate and a portion of the first side of the piezoelectric substrate. 24. An electronic device comprising: an enclosure; a display positioned within the enclosure; an input region positioned within the enclosure; and a sensor structure positioned below the input region, comprising: a piezoelectric substrate; a sensing layer comprising a plurality of drive electrodes and a plurality of sense electrodes; and a connection layer comprising a plurality of conductive elements connected to the plurality of sense electrodes by vias; wherein: the sensor structure is configured to detect a location of a touch within the user input region and to estimate an amount of force corresponding to the touch; the drive electrodes and the sense electrodes are coplanar; and the conductive elements are not coplanar with the sense electrodes. 25. The electronic device of claim 24, further comprising touch sensing circuitry operatively coupled to the sensor structure and configured to determine the location of the touch. 26. The electronic device of claim 25, further comprising force sensing circuitry operatively coupled to the sensor structure and configured to output a signal in response to the amount of force exceeding a given threshold. 27. The electronic device of claim 26, wherein the given threshold is dynamically configurable. 28. 
The electronic device of claim 26, wherein the touch sensing circuitry and the force sensing circuitry form a combined touch and force sensing circuitry. 29. The electronic device of claim 24, wherein the sensor structure is further configured to output haptic feedback to the input region. 30. The electronic device of claim 24, wherein the sensing layer comprises the plurality of drive electrodes arranged in rows and the plurality of sense electrodes arranged in columns. 31. The electronic device of claim 30, wherein the plurality of drive electrodes and the plurality of sense electrodes are coplanar. 32. The electronic device of claim 31, wherein: each of the plurality of sense electrodes spans a length of a column; two or more of the plurality of drive electrodes are disposed between pairs of sense electrodes; and a row of drive electrodes is electrically connected together. 33. The electronic device of claim 30, wherein the plurality of drive electrodes and the plurality of sense electrodes are non-coplanar. 34. The electronic device of claim 24, wherein: the piezoelectric substrate is a first piezoelectric substrate; and the sensor structure further comprises a second piezoelectric substrate coplanar to the first piezoelectric substrate. 35. A method of detecting a touch and estimating an amount of force of the touch, the method comprising: detecting the touch with a sensor structure comprising a piezoelectric substrate; detecting an electrical response caused by compression of the piezoelectric substrate with the sensor structure; estimating the amount of force using the electrical response; and outputting a signal indicating the estimated amount of force. 36. The method of claim 35, further comprising determining a location of the touch with touch sensing circuitry coupled to the sensor structure. 37. The method of claim 35, wherein the outputting the signal is in response to the estimated amount of force exceeding a given threshold. 38. 
The method of claim 37, wherein the given threshold is a dynamic threshold. 39. A user input device, comprising: a cover sheet comprising a user input surface; and a sensor structure positioned below the cover sheet, comprising: a piezoelectric substrate; and a sensing layer comprising a plurality of electrodes; wherein the sensor structure is configured to detect a location of a touch on the user input surface and to estimate an amount of force corresponding to the touch. 40. The user input device of claim 39, further comprising touch sensing circuitry operatively coupled to the sensor structure and configured to determine the location of the touch. 41. The user input device of claim 39, further comprising force sensing circuitry operatively coupled to the sensor structure and configured to output a signal in response to the amount of force exceeding a given threshold. 42. The user input device of claim 39, wherein the sensor structure is further configured to output haptic feedback to the user input surface. 43. The user input device of claim 39, wherein the user input device is a trackpad. 44. The user input device of claim 43, wherein the trackpad is incorporated into a laptop computer. 45. The user input device of claim 39, wherein the user input device is operatively coupled to a mobile device. 46. The user input device of claim 45, wherein the mobile device comprises a phone, a tablet, a speaker, a headphone, a mouse, or a musical instrument. 47. The user input device of claim 39, wherein the user input device is a touch- and force-sensitive keyboard. 48. The user input device of claim 40, wherein the touch sensing circuitry is configurable to define a touch-sensing region on the cover sheet. 49. 
The user input device of claim 48, further comprising force sensing circuitry operatively coupled to the sensor structure and configured to define a force-sensing region wherein the force sensing circuitry outputs a signal in response to the amount of force exceeding a given threshold.
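The force-sensing flow in claims 35-38 and 41 (estimate a force from the piezoelectric electrical response, emit a signal only when a configurable threshold is exceeded) can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the calibration gain and threshold values are invented for the example.

```python
# Minimal sketch of the force-estimation flow in claims 35-38/41:
# a piezoelectric sensor produces an electrical response roughly
# proportional to applied force; force is estimated through an assumed
# calibration gain, and a signal is emitted only when a (dynamically
# configurable) threshold is exceeded. Gain/threshold are illustrative.

def estimate_force(response_volts, gain_newtons_per_volt=2.0):
    """Map the sensor's electrical response to an estimated force."""
    return response_volts * gain_newtons_per_volt

def force_signal(response_volts, threshold_newtons):
    """Return the estimated force if it exceeds the threshold, else None."""
    force = estimate_force(response_volts)
    return force if force > threshold_newtons else None

# A light tap (0.1 V) stays below a 1 N threshold; a firm press crosses it.
assert force_signal(0.1, threshold_newtons=1.0) is None
assert force_signal(1.5, threshold_newtons=1.0) == 3.0
```

Making the threshold a function argument mirrors the "dynamically configurable" threshold of claims 27 and 38.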
An electronic device is configured to provide localized haptic feedback to a user on one or more regions or sections of a surface of the electronic device. The localized haptic feedback is provided by an array of piezoelectric haptic actuators below the surface of the electronic device. Actuators within the array of piezoelectric haptic actuators are separately controllable by a control circuit layer. The control circuit layer includes control circuitry, a master flexible circuit which passes between rows of actuators, and an array of slave flexible circuits. Each slave flexible circuit is connected to the master flexible circuit and an actuator. In further examples, the array of piezoelectric haptic actuators provides a unified structure for detecting touch and force inputs.1. An electronic device, comprising: a cover sheet; a display positioned below the cover sheet; a chassis positioned below the display; an array of piezoelectric actuators positioned below and coupled to the chassis; and a flexible circuit assembly electrically coupled to each of the array of piezoelectric actuators, comprising: a master flexible circuit positioned along a row of piezoelectric actuators; a first slave flexible circuit electrically coupled to the master flexible circuit and electrically coupled to a first of the array of piezoelectric actuators; a second slave flexible circuit electrically coupled to the master flexible circuit and electrically coupled to a second of the array of piezoelectric actuators. 2. The electronic device of claim 1, further comprising control circuitry electrically coupled to the master flexible circuit and configured to generate control signals to selectively actuate the array of piezoelectric actuators. 3. 
The electronic device of claim 1, wherein each of the array of piezoelectric actuators comprises: a piezoelectric substrate having a first surface and a second surface parallel to the first surface; a first electrode formed on the first surface; and a second electrode formed on the second surface and a portion of the first surface. 4. The electronic device of claim 3, wherein the first electrode and the second electrode are formed by at least one of vapor deposition, sputtering, printing, and roll-to-roll processing. 5. The electronic device of claim 3, wherein the first electrode and second electrode are formed by plating the piezoelectric substrate with nickel. 6. The electronic device of claim 1, wherein the master flexible circuit, the first slave flexible circuit, and the second slave flexible circuit each comprise: a flexible substrate; and one or more conducting traces formed on or within the flexible substrate. 7. The electronic device of claim 6, wherein the flexible substrate comprises at least one of polyimide and polyethylene terephthalate. 8. The electronic device of claim 6, wherein the one or more conducting traces comprise at least one of silver, copper, constantan, and karma. 9. The electronic device of claim 6, wherein the first slave flexible circuit comprises a first conductive trace coupled to a control signal and a second conductive trace coupled to a reference voltage. 10. The electronic device of claim 6, wherein: the first slave flexible circuit comprises a first conductive trace coupled to a control signal; and the chassis is coupled to a reference voltage. 11. 
A haptic actuator module, comprising: a piezoelectric substrate defining a top surface and an opposing bottom surface; a top electrode coupled to the top surface; a bottom electrode coupled to the bottom surface; and a control system, comprising: a control circuit configured to generate control signals to induce a voltage across the piezoelectric substrate and cause the piezoelectric substrate to compress along a direction; a flexible circuit electrically connected to the control circuit and at least one of the top electrode and the bottom electrode, comprising: a master control flex connected to the control circuit; a first slave control flex connected to the master flex and at least one of the top electrode and the bottom electrode; a second slave control flex connected to another haptic actuator. 12. The haptic actuator module of claim 11, wherein: a portion of the bottom electrode is deposited on the top surface of the piezoelectric substrate; and the first slave control flex is connected to the top electrode and the bottom electrode at the top surface of the piezoelectric substrate. 13. The haptic actuator module of claim 12, wherein the first slave control flex is coupled to the top electrode and the portion of the bottom electrode by an anisotropic conductive film. 14. The haptic actuator module of claim 11, wherein: the top electrode is electrically connected to a support structure; the support structure is biased with a reference voltage level; and the first slave control flex is coupled to the bottom electrode and configured to provide a control signal to the bottom electrode. 15. The haptic actuator module of claim 14, wherein the support structure is coupled to the top electrode by an isotropic conductive film. 16. 
The haptic actuator module of claim 11, wherein: the first slave control flex is split at an end into a first portion and a second portion; the first portion is coupled to the top electrode and configured to provide a control signal to the top electrode; and the second portion is coupled to the bottom electrode and configured to provide a reference voltage level to the bottom electrode. 17. The haptic actuator module of claim 16, wherein the first portion is coupled to the top electrode by a first isotropic conductive film and the second portion is coupled to the bottom electrode by a second isotropic conductive film. 18. A method for connecting an array of piezoelectric haptic actuators to control circuitry, the method comprising: applying an electrically conductive bonding agent to a master flex member; aligning a first slave flex member with the master flex member; aligning a second slave flex member with the master flex member; bonding the first slave flex member and the second slave flex member with the master flex member; wherein: the master flex member is positioned along a row of slave flex members including the first slave flex member and the second slave flex member; the master flex member is configured to provide a first control signal to the first slave flex member and a second control signal to the second slave flex member; the first slave flex member is configured to provide the first control signal to a first piezoelectric haptic actuator; and the second slave flex member is configured to provide the second control signal to a second piezoelectric haptic actuator. 19. The method of claim 18, wherein: the electrically conductive bonding agent comprises a solder paste; and the bonding the first slave flex member and the second slave flex member with the master flex member comprises heating the first slave flex member, the second slave flex member, and the master flex member. 20. 
The method of claim 19, wherein the heating the first slave flex member, the second slave flex member, and the master flex member comprises heating in a reflow oven. 21. The method of claim 18, wherein: the electrically conductive bonding agent is a first electrically conductive bonding agent; and the method further comprises: applying a second electrically conductive bonding agent to the first slave flex member; and bonding the first piezoelectric haptic actuator to the first slave flex member. 22. The method of claim 21, wherein: the second electrically conductive bonding agent comprises an anisotropic conductive film; and the bonding the first piezoelectric haptic actuator to the first slave flex member comprises placing the first piezoelectric haptic actuator on the second electrically conductive bonding agent. 23. The method of claim 18, further comprising forming the first piezoelectric haptic actuator by: depositing a first electrode on a first side of a piezoelectric substrate; and depositing a second electrode on a second side of the piezoelectric substrate and a portion of the first side of the piezoelectric substrate. 24. An electronic device comprising: an enclosure; a display positioned within the enclosure; an input region positioned within the enclosure; and a sensor structure positioned below the input region, comprising: a piezoelectric substrate; a sensing layer comprising a plurality of drive electrodes and a plurality of sense electrodes; and a connection layer comprising a plurality of conductive elements connected to the plurality of sense electrodes by vias; wherein: the sensor structure is configured to detect a location of a touch within the user input region and to estimate an amount of force corresponding to the touch; the drive electrodes and the sense electrodes are coplanar; and the conductive elements are not coplanar with the sense electrodes. 25. 
The electronic device of claim 24, further comprising touch sensing circuitry operatively coupled to the sensor structure and configured to determine the location of the touch. 26. The electronic device of claim 25, further comprising force sensing circuitry operatively coupled to the sensor structure and configured to output a signal in response to the amount of force exceeding a given threshold. 27. The electronic device of claim 26, wherein the given threshold is dynamically configurable. 28. The electronic device of claim 26, wherein the touch sensing circuitry and the force sensing circuitry form a combined touch and force sensing circuitry. 29. The electronic device of claim 24, wherein the sensor structure is further configured to output haptic feedback to the input region. 30. The electronic device of claim 24, wherein the sensing layer comprises the plurality of drive electrodes arranged in rows and the plurality of sense electrodes arranged in columns. 31. The electronic device of claim 30, wherein the plurality of drive electrodes and the plurality of sense electrodes are coplanar. 32. The electronic device of claim 31, wherein: each of the plurality of sense electrodes spans a length of a column; two or more of the plurality of drive electrodes are disposed between pairs of sense electrodes; and a row of drive electrodes is electrically connected together. 33. The electronic device of claim 30, wherein the plurality of drive electrodes and the plurality of sense electrodes are non-coplanar. 34. The electronic device of claim 24, wherein: the piezoelectric substrate is a first piezoelectric substrate; and the sensor structure further comprises a second piezoelectric substrate coplanar to the first piezoelectric substrate. 35. 
A method of detecting a touch and estimating an amount of force of the touch, the method comprising: detecting the touch with a sensor structure comprising a piezoelectric substrate; detecting an electrical response caused by compression of the piezoelectric substrate with the sensor structure; estimating the amount of force using the electrical response; and outputting a signal indicating the estimated amount of force. 36. The method of claim 35, further comprising determining a location of the touch with touch sensing circuitry coupled to the sensor structure. 37. The method of claim 35, wherein the outputting the signal is in response to the estimated amount of force exceeding a given threshold. 38. The method of claim 37, wherein the given threshold is a dynamic threshold. 39. A user input device, comprising: a cover sheet comprising a user input surface; and a sensor structure positioned below the cover sheet, comprising: a piezoelectric substrate; and a sensing layer comprising a plurality of electrodes; wherein the sensor structure is configured to detect a location of a touch on the user input surface and to estimate an amount of force corresponding to the touch. 40. The user input device of claim 39, further comprising touch sensing circuitry operatively coupled to the sensor structure and configured to determine the location of the touch. 41. The user input device of claim 39, further comprising force sensing circuitry operatively coupled to the sensor structure and configured to output a signal in response to the amount of force exceeding a given threshold. 42. The user input device of claim 39, wherein the sensor structure is further configured to output haptic feedback to the user input surface. 43. The user input device of claim 39, wherein the user input device is a trackpad. 44. The user input device of claim 43, wherein the trackpad is incorporated into a laptop computer. 45. 
The user input device of claim 39, wherein the user input device is operatively coupled to a mobile device. 46. The user input device of claim 45, wherein the mobile device comprises a phone, a tablet, a speaker, a headphone, a mouse, or a musical instrument. 47. The user input device of claim 39, wherein the user input device is a touch- and force-sensitive keyboard. 48. The user input device of claim 40, wherein the touch sensing circuitry is configurable to define a touch-sensing region on the cover sheet. 49. The user input device of claim 48, further comprising force sensing circuitry operatively coupled to the sensor structure and configured to define a force-sensing region wherein the force sensing circuitry outputs a signal in response to the amount of force exceeding a given threshold.
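The master/slave flexible-circuit topology of claims 1-2 and 18 (a master flex running along a row of actuators, fanning distinct control signals out to per-actuator slave flexes) can be modeled as a small routing sketch. All class and method names here are hypothetical; this only illustrates the signal fan-out, not the physical circuit.

```python
# Hypothetical software model of the master/slave flex topology in
# claims 1-2 and 18: one master flexible circuit runs along a row of
# actuators and carries a distinct control signal for each slave
# flexible circuit, which delivers that signal to a single actuator.

class MasterFlex:
    def __init__(self, row_signals):
        # row_signals[i] is the control signal routed to the i-th slave flex.
        self.row_signals = list(row_signals)

    def signal_for(self, slave_index):
        return self.row_signals[slave_index]

class SlaveFlex:
    def __init__(self, master, index):
        self.master, self.index = master, index

    def drive_actuator(self):
        # Deliver this slave's control signal to its actuator.
        return self.master.signal_for(self.index)

master = MasterFlex(row_signals=[0.0, 3.3, 0.0])  # only actuator 1 actuated
slaves = [SlaveFlex(master, i) for i in range(3)]
assert [s.drive_actuator() for s in slaves] == [0.0, 3.3, 0.0]
```

The separate per-slave signals are what make each actuator in the array "separately controllable", as the abstract puts it.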
2,600
10,304
10,304
14,113,609
2,623
The present invention relates to an area-saving driving circuit for a display panel, which comprises a plurality of digital-to-analog converting circuits that convert input data, respectively, and produce a pixel signal. A plurality of driving units are coupled to the plurality of digital-to-analog converting circuits, respectively. They produce a driving signal according to the pixel signal and transmit the driving signal to the display panel for displaying. A plurality of voltage booster units are coupled to the plurality of driving units, respectively, and produce a supply voltage according to a control signal. Then the supply voltage is provided to the plurality of driving units. Thereby, by providing the supply voltage to the plurality of driving units of the display panel by means of the plurality of voltage booster units, the area of the external storage capacitor is reduced. Alternatively, the external storage capacitor may not even be required.
1. An area-saving driving circuit for a display panel, comprising: a plurality of digital-to-analog converting circuits, converting input pixel data and producing a pixel signal, respectively; a plurality of driving units, coupled to said plurality of digital-to-analog converting circuits, respectively, producing a driving signal according to said pixel signal, and transmitting said driving signal to said display panel for displaying; and a plurality of voltage booster units, coupled to said plurality of driving units, respectively, producing a supply voltage, and providing said supply voltage to said plurality of driving units, respectively. 2. The driving circuit of claim 1, wherein said plurality of voltage booster units produce said supply voltage according to a control signal. 3. The driving circuit of claim 2, wherein any control circuit inside said display panel produces said control signal and transmits said control signal to said plurality of voltage booster units. 4. The driving circuit of claim 1, and further comprising a voltage booster circuit coupled to said plurality of digital-to-analog converting circuits, and producing and providing said supply voltage to said plurality of digital-to-analog converting circuits. 5. The driving circuit of claim 1, wherein said driving circuit is further coupled to a voltage booster circuit, and said voltage booster circuit produces and provides said supply voltage to said plurality of digital-to-analog converting circuits. 6. The driving circuit of claim 1, wherein said display panel comprises a plurality of pixel structures coupled to said plurality of driving units, respectively. 7. The driving circuit of claim 1, wherein said plurality of driving units are operational amplifiers. 8. The driving circuit of claim 1, wherein said driving circuit is applied to a source driver of said display panel. 9. The driving circuit of claim 1, wherein said display panel is a thin-film transistor liquid crystal display. 10. 
An area-saving driving circuit for a display panel, comprising: a plurality of digital-to-analog converting circuits, converting input pixel data and producing a pixel signal, respectively; a plurality of driving units, coupled to said plurality of digital-to-analog converting circuits, respectively, producing a driving signal according to said pixel signal, and transmitting said driving signal to said display panel for displaying; and at least a voltage booster unit, coupled to said plurality of driving units, producing a supply voltage, and providing said supply voltage to a portion of said plurality of driving units. 11. The driving circuit of claim 10, wherein said voltage booster unit produces said supply voltage according to a control signal. 12. The driving circuit of claim 10, wherein said voltage booster unit comprises: a flying capacitor, used for producing said supply voltage; a first transistor, with one terminal coupled to one terminal of said flying capacitor and another terminal receiving an input voltage and controlled by a first control signal; a second transistor, coupled to said flying capacitor and said first transistor, and controlled by a second control signal for outputting said supply voltage; a third transistor, with one terminal coupled to one terminal of said flying capacitor and another terminal receiving said input voltage and controlled by said second control signal; and a fourth transistor, with one terminal coupled to said flying capacitor and said third transistor and another terminal coupled to the ground and controlled by said first control signal. 13. The driving circuit of claim 12, wherein said voltage booster unit further comprises a storage capacitor, with one terminal coupled to said second transistor and the other terminal coupled to the ground for storing and outputting said supply voltage. 14. 
The driving circuit of claim 10, wherein said voltage booster unit comprises: a transistor, with one terminal receiving an input voltage, and controlled by a control signal; a diode, with one terminal coupled to said transistor and the other terminal coupled to the ground; a storage inductor, coupled to said transistor and said diode for storing the energy of said input voltage; and an output capacitor, with one terminal coupled to said storage inductor and the other terminal coupled to the ground for storing the energy of said input voltage and producing and outputting said supply voltage.
The present invention relates to an area-saving driving circuit for a display panel, which comprises a plurality of digital-to-analog converting circuits that convert input data, respectively, and produce a pixel signal. A plurality of driving units are coupled to the plurality of digital-to-analog converting circuits, respectively. They produce a driving signal according to the pixel signal and transmit the driving signal to the display panel for displaying. A plurality of voltage booster units are coupled to the plurality of driving units, respectively, and produce a supply voltage according to a control signal. Then the supply voltage is provided to the plurality of driving units. Thereby, by providing the supply voltage to the plurality of driving units of the display panel by means of the plurality of voltage booster units, the area of the external storage capacitor is reduced. Alternatively, the external storage capacitor may not even be required.1. An area-saving driving circuit for a display panel, comprising: a plurality of digital-to-analog converting circuits, converting input pixel data and producing a pixel signal, respectively; a plurality of driving units, coupled to said plurality of digital-to-analog converting circuits, respectively, producing a driving signal according to said pixel signal, and transmitting said driving signal to said display panel for displaying; and a plurality of voltage booster units, coupled to said plurality of driving units, respectively, producing a supply voltage, and providing said supply voltage to said plurality of driving units, respectively. 2. The driving circuit of claim 1, wherein said plurality of voltage booster units produce said supply voltage according to a control signal. 3. The driving circuit of claim 2, wherein any control circuit inside said display panel produces said control signal and transmits said control signal to said plurality of voltage booster units. 4. 
The driving circuit of claim 1, and further comprising a voltage booster circuit coupled to said plurality of digital-to-analog converting circuits, and producing and providing said supply voltage to said plurality of digital-to-analog converting circuits. 5. The driving circuit of claim 1, wherein said driving circuit is further coupled to a voltage booster circuit, and said voltage booster circuit produces and provides said supply voltage to said plurality of digital-to-analog converting circuits. 6. The driving circuit of claim 1, wherein said display panel comprises a plurality of pixel structures coupled to said plurality of driving units, respectively. 7. The driving circuit of claim 1, wherein said plurality of driving units are operational amplifiers. 8. The driving circuit of claim 1, wherein said driving circuit is applied to a source driver of said display panel. 9. The driving circuit of claim 1, wherein said display panel is a thin-film transistor liquid crystal display. 10. An area-saving driving circuit for a display panel, comprising: a plurality of digital-to-analog converting circuits, converting input pixel data and producing a pixel signal, respectively; a plurality of driving units, coupled to said plurality of digital-to-analog converting circuits, respectively, producing a driving signal according to said pixel signal, and transmitting said driving signal to said display panel for displaying; and at least a voltage booster unit, coupled to said plurality of driving units, producing a supply voltage, and providing said supply voltage to a portion of said plurality of driving units. 11. The driving circuit of claim 10, wherein said voltage booster unit produces said supply voltage according to a control signal. 12. 
The driving circuit of claim 10, wherein said voltage booster unit comprises: a flying capacitor, used for producing said supply voltage; a first transistor, with one terminal coupled to one terminal of said flying capacitor and another terminal receiving an input voltage and controlled by a first control signal; a second transistor, coupled to said flying capacitor and said first transistor, and controlled by a second control signal for outputting said supply voltage; a third transistor, with one terminal coupled to one terminal of said flying capacitor and another terminal receiving said input voltage and controlled by said second control signal; and a fourth transistor, with one terminal coupled to said flying capacitor and said third transistor and another terminal coupled to the ground and controlled by said first control signal. 13. The driving circuit of claim 12, wherein said voltage booster unit further comprises a storage capacitor, with one terminal coupled to said second transistor and the other terminal coupled to the ground for storing and outputting said supply voltage. 14. The driving circuit of claim 10, wherein said voltage booster unit comprises: a transistor, with one terminal receiving an input voltage, and controlled by a control signal; a diode, with one terminal coupled to said transistor and the other terminal coupled to the ground; a storage inductor, coupled to said transistor and said diode for storing the energy of said input voltage; and an output capacitor, with one terminal coupled to said storage inductor and the other terminal coupled to the ground for storing the energy of said input voltage and producing and outputting said supply voltage.
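The two booster topologies claimed above correspond to well-known idealized circuits: claim 12's flying-capacitor cell is, in its textbook form, a two-phase charge pump (ideally a voltage doubler), and claim 14's transistor/diode/inductor/output-capacitor cell is a boost converter, whose ideal continuous-conduction output is V_in / (1 - D) for switch duty cycle D. The formulas below are the standard idealizations, not values taken from the patent.

```python
# Idealized outputs of the two booster topologies in claims 12 and 14.
# These are textbook first-order models (lossless switches, steady state),
# offered as illustration only.

def charge_pump_vout(v_in):
    # Ideal flying-capacitor doubler: the capacitor charges to v_in in
    # phase 1 and is stacked on top of v_in in phase 2.
    return 2.0 * v_in

def boost_vout(v_in, duty):
    # Ideal boost converter in continuous conduction mode.
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return v_in / (1.0 - duty)

assert abs(charge_pump_vout(3.3) - 6.6) < 1e-9
assert abs(boost_vout(5.0, 0.5) - 10.0) < 1e-9
```

Generating the supply voltage locally per driving unit this way is what lets the design shrink or drop the external storage capacitor, per the abstract.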
2,600
10,305
10,305
14,540,446
2,611
A system, method, and computer-readable storage medium configured to collect and aggregate images using augmented reality.
1. An augmented reality device method comprising: recording an image with a camera; determining a location of the augmented reality device via a global positioning system (GPS) antenna; determining a direction of the augmented reality device via a gyroscope; determining a date and time with a processor; tagging the image with the date and time, the location, and the direction of the augmented reality device with the processor, resulting in a tagged image; transmitting the tagged image to a collection server with a network interface. 2. The augmented reality device method of claim 1, wherein the image is either a video, a still picture, or a time-lapse image. 3. The augmented reality device method of claim 2, further comprising: tagging the image with an origination identifier. 4. The augmented reality device method of claim 3, further comprising: transmitting the origination identifier with the tagged image. 5. The augmented reality device method of claim 4, wherein the origination identifier identifies either the augmented reality device or a user of the augmented reality device. 6. A collection server method comprising: receiving a request with a network interface, the request indicating a requested date, a requested timeframe, and a requested location; searching an image index for images that match the request with a processor, resulting in matched images; retrieving the matched images with the processor; presenting the matched images to a viewer via a display or the network interface. 7. The collection server method of claim 6, further comprising: presenting an orientation of an augmented reality device with the matched images. 8. The collection server method of claim 7, wherein the matched images are associated with an originator identifier that identifies either the augmented reality device or a wearer of the augmented reality device. 9. 
The collection server method of claim 8, further comprising: providing the viewer an opportunity to reward the wearer of the augmented reality device. 10. The collection server method of claim 9, wherein the matched image is either a video, a still picture, or a time-lapse image.
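The tagging flow of claim 1 (bundle an image with date/time, GPS location, gyroscope direction, and an optional originator identifier, then ship it to the collection server) can be sketched as below. This is a minimal illustration, not the patented implementation: the field names (`lat`, `lon`, `heading_deg`, `origin_id`) and the file-handle representation of the image are assumptions made for the example.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TaggedImage:
    image_ref: str       # handle to the recorded image (camera output)
    timestamp: str       # date and time determined by the processor
    lat: float           # location from the GPS antenna
    lon: float
    heading_deg: float   # direction from the gyroscope
    origin_id: str = ""  # optional origination identifier (claims 3-5)

def tag_image(image_ref, lat, lon, heading_deg, origin_id=""):
    """Tag the image with date/time, location, and direction metadata."""
    return TaggedImage(
        image_ref=image_ref,
        timestamp=datetime.now(timezone.utc).isoformat(),
        lat=lat,
        lon=lon,
        heading_deg=heading_deg,
        origin_id=origin_id,
    )

# The serialized dict stands in for the payload sent over the network
# interface to the collection server.
tagged = tag_image("img_001.jpg", 47.61, -122.33, 270.0, origin_id="device-42")
payload = asdict(tagged)
```

On the server side (claim 6), the `payload` fields would feed an image index keyed by date, timeframe, and location for later matching against viewer requests.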
2,600
10,306
10,306
14,922,033
2,622
By correlating user grip information with micro-mobility events, electronic devices can provide support for a broad range of interactions and contextually-dependent techniques. Such correlation allows electronic devices to better identify device usage contexts, and in turn provide a more responsive and helpful user experience, especially in the context of reading and task performance. To allow for accurate and efficient device usage context identification, a model may be used to make device usage context determinations based on the correlated gesture and micro-mobility data. Once a context, device usage context, or gesture is identified, an action can be taken on one or more electronic devices.
1. A computing system, comprising: at least one processing unit; and memory configured to be in communication with the at least one processing unit, the memory storing instructions that based on execution by the at least one processing unit, cause the at least one processing unit to: receive sensor data from at least one electronic device; determine, based at least partly on the sensor data, a hand grip placement associated with the at least one electronic device; determine, based at least partly on the sensor data, a motion associated with the at least one electronic device; determine, based at least partly on the hand grip placement and the motion, a usage context of the at least one electronic device; and cause an action to be performed based on the usage context of the at least one electronic device. 2. The computing system of claim 1, wherein the hand grip placement and the motion are each associated with a first electronic device of the at least one electronic device. 3. The computing system of claim 2, wherein the action is caused to be performed on a second electronic device of the at least one electronic device. 4. The computing system of claim 1, wherein the hand grip placement is a first hand grip placement associated with a first user, wherein the first hand grip placement is associated with a first electronic device, and wherein the instructions further cause the at least one processing unit to determine, based at least partly on the sensor data, a second hand grip placement associated with the first electronic device, wherein the second hand grip placement is associated with a second user. 5. The computing system of claim 4, wherein the instructions further cause the at least one processing unit to determine the usage context based at least further on the second hand grip placement. 6. 
The computing system of claim 1, wherein the instructions further cause the at least one processing unit to: determine a type of hand grip placement; and determine the usage context based at least further on the type of hand grip placement. 7. The computing system of claim 1 wherein the instructions further cause the at least one processing unit to determine an identity of a user associated with the hand grip placement, and wherein determining the usage context of the at least one electronic device is further based at least partly on the identity of the user. 8. The computing system of claim 1 wherein the usage context of the at least one electronic device comprises a selection of a portion of content displayed on a first electronic device, and wherein the action comprises causing the portion of content to be displayed on a second electronic device. 9. The computing system of claim 1 wherein the at least one electronic device is part of the computing system. 10. A method comprising: receiving sensor data; determining, based at least partly on the sensor data, a hand grip placement associated with at least one electronic device; determining, based at least partly on the sensor data, a motion associated with the at least one electronic device; determining, based at least partly on the hand grip placement and the motion, a usage context of the at least one electronic device; and causing an action to be performed based on the usage context of the at least one electronic device. 11. The method of claim 10, wherein the hand grip placement and the motion are each associated with a first electronic device of the at least one electronic device. 12. The method of claim 11, wherein the action is caused to be performed on a second electronic device of the at least one electronic device. 13. 
The method of claim 10, wherein: the hand grip placement is a first hand grip placement associated with a first user; the first hand grip placement is associated with a first electronic device; and the method further comprises determining, based at least partly on the sensor data, a second hand grip placement associated with the first electronic device, wherein the second hand grip placement is associated with a second user. 14. The method of claim 13, wherein the action comprises initiating one of a multi-user mode or guest mode on the first electronic device based at least partly on the second hand grip placement. 15. The method of claim 10, wherein determining the hand grip placement further comprises determining a type of hand grip placement, and wherein the usage context is determined based at least further on the type of hand grip placement. 16. The method of claim 10, wherein determining the hand grip placement further comprises determining an identity of a user associated with the hand grip placement, and wherein determining the usage context of the at least one electronic device is further based at least partly on user information associated with the identity of the user. 17. The method of claim 10, wherein the usage context of the at least one electronic device comprises a collaborative task performance by two or more users, and wherein the action comprises causing the at least one electronic device to operate in a one of a guest mode or a collaboration mode. 18. 
An electronic device comprising, at least one processing unit; sensing hardware; and memory configured to be in communication with at least one processing unit, the memory storing instructions that in accordance with execution by the at least one processing unit, cause the at least one processing unit to: receive sensor data indicating signals received from the sensing hardware; determine, based at least partly on the sensor data, a hand grip placement associated with the electronic device; determine, based at least partly on the sensor data, a motion associated with the electronic device; determine, based at least partly on the hand grip placement and the motion, an interaction state of the electronic device; and cause an action to be performed on the electronic device based on the interaction state of the electronic device. 19. The electronic device of claim 18, wherein the action includes causing another action to be performed on a second electronic device. 20. The electronic device of claim 18, wherein the interaction state of the electronic device indicates that a user of the electronic device is reading content, and wherein the action comprises changing a graphical user interface displayed on the electronic device to remove other content.
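The core pipeline of claims 1 and 10 (sensor data → hand grip placement + motion → usage context → action) can be sketched as a small rule table. The patent leaves the context model unspecified, so the grip labels, motion labels, context names, and actions below are all invented for illustration; a real system would likely use a learned model rather than a lookup.

```python
# Hypothetical mapping from (grip placement, motion) to a usage context.
CONTEXT_RULES = {
    ("two_hand", "still"): "reading",
    ("one_hand", "still"): "browsing",
    ("two_hand", "tilt_toward_other"): "sharing_content",
}

# Hypothetical mapping from usage context to the action performed,
# loosely following claim 20 (reading -> strip other content from the UI)
# and claim 8 (selection -> show content on a second device).
CONTEXT_ACTIONS = {
    "reading": "hide_non_reading_ui",
    "sharing_content": "display_on_second_device",
}

def determine_usage_context(grip: str, motion: str) -> str:
    """Combine hand grip placement and device motion into a usage context."""
    return CONTEXT_RULES.get((grip, motion), "unknown")

def perform_action(context: str) -> str:
    """Return the action taken for the identified usage context."""
    return CONTEXT_ACTIONS.get(context, "no_op")

# Example: a two-handed, stationary grip is treated as reading, and the
# device responds by removing other content from the display.
action = perform_action(determine_usage_context("two_hand", "still"))
```

The second-user claims (4, 13, 14) would extend `CONTEXT_RULES` with keys that include a second grip placement, triggering a guest or collaboration mode.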
2,600
10,307
10,307
15,023,421
2,622
A touch sensor is disclosed. The touch sensor includes a resonant circuit that has a resonant frequency configured to change in response to a force applied to the touch sensor. The touch sensor detects the applied force by detecting a change in the resonant frequency.
1. A touch sensor comprising a resonant circuit having a resonant frequency configured to change in response to a force applied to the touch sensor, the touch sensor detecting the applied force by detecting a change in the resonant frequency. 2. The touch sensor of claim 1, wherein the resonant circuit comprises a capacitor having a capacitance, the capacitance changing in response to a force applied to the touch sensor, the change in the capacitance changing the resonant frequency. 3. The touch sensor of claim 2, wherein the capacitor comprises parallel first and second conductive electrodes forming the capacitor, the first conductive electrode being substantially transparent. 4. The touch sensor of claim 3 having a touch sensitive area, the first conductive electrode extending across and covering the touch sensitive area. 5. The touch sensor of claim 2 further comprising a resistor and an inductor. 6. The touch sensor of claim 1, wherein the resonant circuit comprises a resistor having a resistance, the resistance changing in response to a force applied to the touch sensor, the change in the resistance changing the resonant frequency. 7. The touch sensor of claim 6 further comprising a capacitor and an inductor. 8. The touch sensor of claim 1, wherein the resonant circuit comprises a piezoelectric material having the resonant frequency.
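The sensing principle of claims 1-2 follows directly from the LC resonance formula f = 1 / (2π√(LC)): a force that changes the capacitance shifts the resonant frequency, and the sensor reports a touch when the shift exceeds a noise tolerance. The sketch below illustrates that relationship; the component values and tolerance are illustrative, not from the patent.

```python
import math

L_HENRY = 1e-3    # fixed inductance of the resonant circuit (assumed)
C_REST = 100e-12  # capacitance with no force applied (assumed)

def resonant_frequency(c_farad: float) -> float:
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_HENRY * c_farad))

def detect_force(c_measured: float, tolerance_hz: float = 1e3) -> bool:
    """Report an applied force when the resonant frequency shifts
    beyond the noise tolerance (claim 1)."""
    shift = abs(resonant_frequency(c_measured) - resonant_frequency(C_REST))
    return shift > tolerance_hz

# A press that squeezes the parallel electrodes together (claims 2-3)
# raises the capacitance and therefore lowers the resonant frequency.
touched = detect_force(110e-12)
```

Claim 6's resistive variant works the same way, with the force modulating R instead of C; claim 8 replaces the discrete LC circuit with a piezoelectric resonator.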
2,600
10,308
10,308
15,679,101
2,632
A plurality of vehicle identity collection modules are deployed at different toll collecting locations, wherein each vehicle identity collection module is configured to broadcast wireless communication signals to cover a mobile communication device associated with a vehicle passing by the toll collection location over a wireless communication network, wherein the strength of the signals is maximized so that the mobile communication device switches and connects with the vehicle identity collection module during a wireless cell re-selection process. A mobile communication channel is then established, and identification information of one or more of the vehicle, the driver, and the mobile communication device is retrieved via the mobile communication channel. Based on the retrieved information, the actual moving path of the vehicle from its initial toll collecting location where the vehicle is first sensed to its current toll collecting location where the vehicle is last sensed is generated, and a toll amount is calculated accordingly.
1. A system to support electronic toll collection (ETC) via mobile communication devices, comprising: a plurality of vehicle identity collection modules, each at a toll collection location and configured to broadcast wireless communication signals to cover a mobile communication device associated with a vehicle passing by the toll collection location over a wireless communication network, wherein strength of the wireless communication signals is maximized so that the mobile communication device switches and connects with the vehicle identity collection module during a wireless cell re-selection process; establish a mobile communication channel with the mobile communication device following a wireless network communication protocol; retrieve identification information of one or more of the vehicle, the driver, and the mobile communication device via the mobile communication channel; an electronic toll collection engine running on a host, which, in operation, is configured to determine current toll collecting location of the vehicle based on the identification information of the vehicle, driver, and/or the mobile communication device; generate actual moving path of the vehicle from its initial toll collecting location where the vehicle is first connected to a vehicle identity collection module to its current toll collecting location where the vehicle is last connected to a vehicle identity collection module; calculate a toll amount owed by the driver of the vehicle based on the actual path from the initial toll collecting location to the current toll collecting location as well as toll collection rules. 2. The system of claim 1, wherein: the plurality of vehicle identity collection modules are at geographically distinguishable toll collecting locations. 3. The system of claim 1, wherein: the wireless communication network is one of GSM, 3G, 4G, LTE, CDMA, and W-CDMA. 4. 
The system of claim 1, wherein: the mobile communication device is configured to have an ETC app running on it, wherein the ETC app is configured to maintain identification (ID) number/information of the vehicle, the user, and/or the mobile communication device; and communicate such information to the vehicle identity collection module at a toll collecting location. 5. The system of claim 1, wherein: the vehicle identity collection module is configured to modulate and encode data formatted after the wireless network communication protocol into digital signals for transmission to the mobile communication device; demodulate, error-check, and decode digitized signals received from the mobile communication device to restore protocol data. 6. The system of claim 1, wherein: the vehicle identity collection module is configured to maximize the strength of the broadcasted wireless communication signals to be strongest among base stations covering the mobile communication device by modifying one or more broadcasting parameters of the wireless network communication protocol. 7. The system of claim 6, wherein: the vehicle identity collection module is configured to generate and transmit one or more switch signals to dynamically affect the broadcasted wireless communication signals so that they are the strongest and/or the most stable among all of the base stations covering the mobile communication device. 8. The system of claim 7, wherein: the vehicle identity collection module is configured to generate the switch signals based on a series of precise time pulses. 9. The system of claim 7, wherein: the vehicle identity collection module is configured to transmit the switch signals via antennas of different frequencies to accommodate different types of wireless communication networks. 10. 
The system of claim 1, wherein: the vehicle identity collection module is configured to transmit location information and/or the identification information of the vehicle, the driver, and/or the mobile communication device in real time for route tracking and toll calculation of the vehicle. 11. The system of claim 1, wherein: the moving path includes a plurality of toll collecting locations the vehicle has passed by along the path, wherein the vehicle identity collection module at each of the toll collecting locations is configured to transmit the identification information of the vehicle for toll calculation. 12. The system of claim 1, wherein: the vehicle identity collection module further includes a high resolution video camera module configured to capture and identify the identification information of the vehicle when it is passing through the toll collecting location. 13. The system of claim 1, wherein: the electronic toll collection engine is configured to push the generated actual path and/or the calculated toll amount to an ETC app running on the mobile communication device for the driver to track his/her toll collection status in real time. 14. 
A method to support electronic toll collection (ETC) via mobile communication devices, comprising: broadcasting wireless communication signals to cover a mobile communication device associated with a vehicle passing by a vehicle identity collection module at a toll collection location over a wireless communication network, wherein strength of the wireless communication signals is maximized so that the mobile communication device switches and connects with the vehicle identity collection module during a wireless cell re-selection process; establishing a mobile communication channel between the mobile communication device and the vehicle identity collection module following a wireless network communication protocol; retrieving identification information of one or more of the vehicle, the driver, and the mobile communication device via the mobile communication channel; determining current toll collecting location of the vehicle based on the identification information of the vehicle, driver, and/or the mobile communication device; generating actual moving path of the vehicle from its initial toll collecting location where the vehicle is first connected to a vehicle identity collection module to its current toll collecting location where the vehicle is last connected to a vehicle identity collection module; calculating a toll amount owed by the driver of the vehicle based on the actual path from the initial toll collecting location to the current toll collecting location as well as toll collection rules. 15. The method of claim 14, further comprising: modulating and encoding data formatted after the wireless network communication protocol into digital signals for transmission to the mobile communication device; demodulating, error-checking, and decoding digitized signals received from the mobile communication device to restore protocol data. 16. 
The method of claim 14, further comprising: maximizing the strength of the broadcasted wireless communication signals to be strongest among base stations covering the mobile communication device by modifying one or more broadcasting parameters of the wireless network communication protocol. 17. The method of claim 16, further comprising: generating and transmitting one or more switch signals to dynamically affect the broadcasted wireless communication signals so that they are the strongest and/or the most stable among all of the base stations covering the mobile communication device. 18. The method of claim 17, further comprising: transmitting the switch signals via antennas of different frequencies to accommodate different types of wireless communication networks. 19. The method of claim 14, further comprising: capturing and identifying the identification information of the vehicle when it is passing through the toll collecting location via a high resolution video camera module. 20. The method of claim 14, further comprising: pushing the generated actual path and/or the calculated toll amount to an ETC app running on the mobile communication device for the driver to track his/her toll collection status in real time. 21. The system of claim 1, wherein: the plurality of vehicle identity collection modules are each configured to retrieve the identification information of the mobile communication device via the mobile communication channel, and the electronic toll collection engine running is configured to determine the current toll collecting location of the vehicle based on the identification information of the mobile communication device, wherein the identification information of the mobile communication device is an International Mobile Equipment Identity (IMEI) of the mobile communication device. 22. The system of claim 1, wherein the wireless communication signals are stronger and more stable than a cell signal covering the mobile communication device.
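The path-reconstruction and toll step shared by claims 1 and 14 (order the sightings reported by each vehicle identity collection module, derive the actual moving path from first to last toll collecting location, then price it against toll collection rules) can be sketched as below. The location names, IMEI-style device key, and per-segment rate table are invented for the example; the patent does not specify the rule format.

```python
from collections import defaultdict

# Hypothetical toll collection rules: a rate per directed road segment.
RATE_PER_SEGMENT = {("A", "B"): 2.50, ("B", "C"): 1.75}

# device_id -> list of (timestamp, toll collecting location) sightings,
# as reported in real time by the vehicle identity collection modules.
sightings = defaultdict(list)

def record_sighting(device_id: str, timestamp: int, location: str) -> None:
    """A module reports that the device connected at this location."""
    sightings[device_id].append((timestamp, location))

def moving_path(device_id: str) -> list:
    """Actual path from the initial to the current toll collecting
    location, ordered by sighting time."""
    return [loc for _, loc in sorted(sightings[device_id])]

def toll_owed(device_id: str) -> float:
    """Sum the segment rates along the reconstructed path."""
    path = moving_path(device_id)
    return sum(RATE_PER_SEGMENT.get(seg, 0.0) for seg in zip(path, path[1:]))

# Sightings may arrive out of order; sorting by timestamp restores the path.
record_sighting("IMEI-123", 3, "C")
record_sighting("IMEI-123", 1, "A")
record_sighting("IMEI-123", 2, "B")
```

With these three sightings the reconstructed path is A → B → C and the owed toll is 2.50 + 1.75 = 4.25, which the engine could then push to the ETC app per claims 13 and 20.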
A method to support electronic toll collection (ETC) via mobile communication devices, comprising: broadcasting wireless communication signals to cover a mobile communication device associated with a vehicle passing by a vehicle identity collection module at a toll collection location over a wireless communication network, wherein the strength of the wireless communication signals is maximized so that the mobile communication device switches and connects with the vehicle identity collection module during a wireless cell re-selection process; establishing a mobile communication channel between the mobile communication device and the vehicle identity collection module following a wireless network communication protocol; retrieving identification information of one or more of the vehicle, the driver, and the mobile communication device via the mobile communication channel; determining the current toll collecting location of the vehicle based on the identification information of the vehicle, driver, and/or the mobile communication device; generating the actual moving path of the vehicle from its initial toll collecting location where the vehicle is first connected to a vehicle identity collection module to its current toll collecting location where the vehicle is last connected to a vehicle identity collection module; calculating a toll amount owed by the driver of the vehicle based on the actual path from the initial toll collecting location to the current toll collecting location as well as toll collection rules. 15. The method of claim 14, further comprising: modulating and encoding data formatted after the wireless network communication protocol into digital signals for transmission to the mobile communication device; demodulating, error-checking, and decoding digitized signals received from the mobile communication device to restore protocol data. 16. 
The method of claim 14, further comprising: maximizing the strength of the broadcasted wireless communication signals to be strongest among base stations covering the mobile communication device by modifying one or more broadcasting parameters of the wireless network communication protocol. 17. The method of claim 16, further comprising: generating and transmitting one or more switch signals to dynamically affect the broadcasted wireless communication signals so that they are the strongest and/or the most stable among all of the base stations covering the mobile communication device. 18. The method of claim 17, further comprising: transmitting the switch signals via antennas of different frequencies to accommodate different types of wireless communication networks. 19. The method of claim 14, further comprising: capturing and identifying the identification information of the vehicle when it is passing through the toll collecting location via a high resolution video camera module. 20. The method of claim 14, further comprising: pushing the generated actual path and/or the calculated toll amount to an ETC app running on the mobile communication device for the driver to track his/her toll collection status in real time. 21. The system of claim 1, wherein: the plurality of vehicle identity collection modules are each configured to retrieve the identification information of the mobile communication device via the mobile communication channel, and the electronic toll collection engine is configured to determine the current toll collecting location of the vehicle based on the identification information of the mobile communication device, wherein the identification information of the mobile communication device is an International Mobile Equipment Identity (IMEI) of the mobile communication device. 22. The system of claim 1, wherein the wireless communication signals are stronger and more stable than a cell signal covering the mobile communication device.
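The claims above describe generating a vehicle's actual moving path from the sequence of toll collecting locations where its device connected, then pricing that path against toll collection rules. A minimal sketch of that bookkeeping, keyed by the device's IMEI as in claim 21; the location names, per-segment rates, and class/method names are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of path tracking and toll calculation for the ETC claims.
# SEGMENT_RATES, the location names, and the example IMEI are hypothetical.

SEGMENT_RATES = {("A", "B"): 1.50, ("B", "C"): 2.25}  # toll per adjacent location pair

class TollLedger:
    def __init__(self):
        self.paths = {}  # IMEI -> ordered list of toll collecting locations

    def record_connection(self, imei, location):
        """Called when a vehicle identity collection module connects the device."""
        path = self.paths.setdefault(imei, [])
        if not path or path[-1] != location:  # ignore repeated sightings at one location
            path.append(location)

    def toll_owed(self, imei):
        """Sum segment rates along the actual path from first to last location."""
        path = self.paths.get(imei, [])
        return sum(SEGMENT_RATES.get((a, b), 0.0) for a, b in zip(path, path[1:]))

ledger = TollLedger()
for loc in ["A", "A", "B", "C"]:          # sightings reported by roadside modules
    ledger.record_connection("356938035643809", loc)
print(ledger.toll_owed("356938035643809"))  # prints 3.75
```

In this sketch the engine only needs the ordered location sequence per device; any real system would also carry timestamps and handle out-of-order reports.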
2,600
10,309
10,309
15,575,404
2,613
Examples relate to testing applications using virtual reality. In one example, a computing device may: cause display of a viewable portion of a virtual environment on a VR display of the VR device; cause display of a virtual user device within the viewable portion of the virtual environment, the virtual user device corresponding to a hardware device that is running an application under test (AUT); cause display, on the virtual user device, of a virtual user interface of the AUT; receive feedback data indicating i) a change in the virtual environment, ii) a change in a state of the AUT, or iii) an interaction with the virtual user device; and in response to receiving feedback data, cause display of an updated viewable portion of the virtual environment on the VR display.
1. A non-transitory machine-readable storage medium encoded with instructions executable by a hardware processor of a virtual reality (VR) device for testing applications using virtual reality, the machine-readable storage medium comprising instructions to cause the hardware processor to: cause display of a viewable portion of a virtual environment on a VR display of the VR device; cause display of a virtual user device within the viewable portion of the virtual environment, the virtual user device corresponding to a hardware device that is running an application under test (AUT); cause display, on the virtual user device, of a virtual user interface of the AUT; receive feedback data indicating i) a change in the virtual environment, ii) a change in a state of the AUT, or iii) an interaction with the virtual user device; and in response to receiving feedback data, cause display of an updated viewable portion of the virtual environment on the VR display. 2. The storage medium of claim 1, wherein the feedback data indicates a change in the virtual environment, the change in the virtual environment comprising at least one of: a change in a position, within the virtual environment, of a virtual user of the VR device; a change in a view orientation of a virtual user of the VR device; an addition, removal, or change of an object within the viewable portion of the virtual environment; or an addition, removal, or change of an object within a non-viewable portion of the virtual environment. 3. The storage medium of claim 2, wherein the updated viewable portion of the virtual environment includes an updated virtual user interface of the AUT. 4. The storage medium of claim 1, wherein: the feedback data is received from a separate computing device; the feedback data indicates a change in the state of the AUT, and the updated viewable portion of the virtual environment includes an updated virtual user interface of the AUT. 5. 
The storage medium of claim 1, wherein: the feedback data indicates an interaction with the virtual user device, and the updated viewable portion of the virtual environment includes an updated virtual user interface of the AUT. 6. The storage medium of claim 1, wherein the virtual environment includes a second virtual device that corresponds to a second AUT, and wherein the instructions further cause the hardware processor to: receive second AUT data from a separate computing device; and in response to receiving second AUT data, cause display, on the virtual user device, of an updated virtual user interface of the AUT. 7. The storage medium of claim 1, wherein: the feedback data indicates a change in position, within the virtual environment, of a virtual user of the VR device, and the updated viewable portion of the virtual environment is based on the change in position, and wherein the instructions further cause the hardware processor to: send data indicating the change in position to the hardware device running the AUT; receive, from the hardware device, AUT user interface data, the AUT user interface data being based on the change in position; and cause display of an updated virtual user interface of the AUT, the updated virtual user interface being based on the AUT user interface data. 8. 
A computing device for testing applications using virtual reality, the computing device comprising: a hardware processor; and a data storage device storing instructions that, when executed by the hardware processor, cause the hardware processor to: provide virtual environment data to a virtual reality (VR) device, the virtual environment data specifying a virtual environment in which an application under test (AUT) is to be tested; provide virtual computing device data to the VR device, the virtual computing device data specifying a virtual computing device on which the AUT is to be tested, the virtual computing device corresponding to the computing device; provide virtual user interface data to the VR device, the virtual user interface data specifying data to be displayed, by the VR device, on a virtual display of the virtual computing device; receive, from the VR device, feedback data indicating i) a change in position, within the virtual environment, of a virtual user of the VR device, or ii) a change in a view orientation of a virtual user of the VR device; provide a virtual environment simulation module with sensory data indicating at least one of a position or orientation of the virtual user of the VR device, the sensory data being based on the feedback data; receive, from the virtual environment simulation module, computing device state data indicating a change in a simulated state of the computing device; obtain, using the AUT, updated virtual user interface data that is based on the change in the simulated state of the computing device; and provide, to the VR device, the updated virtual user interface data for display on the virtual display of the virtual computing device. 9. 
The computing device of claim 8, wherein the instructions further cause the hardware processor to: obtain interaction test data indicating an interaction with the virtual user device; obtain updated AUT state data using the interaction test data, the updated AUT state data indicating a change in a state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data. 10. The computing device of claim 8, wherein the instructions further cause the hardware processor to: obtain environment test data indicating a change in the virtual environment; obtain updated AUT state data using the environment test data, the updated AUT state data indicating a change in a state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data. 11. The computing device of claim 8, wherein the instructions further cause the hardware processor to: receive, from the AUT, updated AUT state data indicating a change in a state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data. 12. 
A method for testing applications using virtual reality implemented by at least one data processor, the method comprising: providing virtual environment data to a virtual reality (VR) device, the virtual environment data specifying a virtual environment in which an application under test (AUT) is to be tested; providing virtual computing device data to the VR device, the virtual computing device data specifying a virtual computing device on which the AUT is to be tested; providing virtual user interface data to the VR device, the virtual user interface data i) being based on a current state of the AUT, and ii) specifying data to be displayed, by the VR device, on a virtual display of the virtual computing device; obtaining, from the AUT, updated AUT state data indicating a change in the current state of the AUT; and providing, to the VR device, updated virtual interface data for display on the virtual display of the virtual computing device, the updated virtual interface data being based on the updated AUT state data. 13. The method of claim 12, further comprising: receiving, from the VR device, feedback data indicating i) a change in position, within the virtual environment, of a virtual user of the VR device, or ii) a change in a view orientation of a virtual user of the VR device; providing a virtual environment simulation module with sensory data indicating at least one of a position or orientation of the virtual user of the VR device, the sensory data being based on the feedback data; receiving, from the virtual environment simulation module, computing device state data indicating a change in a simulated state of the computing device; obtaining, using the AUT, second updated virtual user interface data that is based on the change in the simulated state of the computing device; and providing, to the VR device, the second updated virtual user interface data for display on the virtual display of the virtual computing device. 14. 
The method of claim 12, further comprising: obtaining interaction test data indicating an interaction with the virtual user device; obtaining second updated AUT state data using the interaction test data, the second updated AUT state data indicating a second change in the current state of the AUT; and providing, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the second updated AUT state data. 15. The method of claim 12, further comprising: obtaining environment test data indicating a change in the virtual environment; obtaining updated AUT state data using the environment test data, the updated AUT state data indicating a change in a state of the AUT; and providing, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data.
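Claim 1 of this record defines a feedback loop: the VR device receives feedback data of one of three kinds and responds by displaying an updated viewable portion that embeds the virtual device's AUT user interface. A minimal sketch of that dispatch, assuming dictionary-shaped state and feedback; the `kind` values and field names are hypothetical, not from the patent:

```python
# Minimal sketch of the feedback-driven display update described in claim 1.
# The feedback "kind" tags and dictionary fields are illustrative assumptions.

def update_viewable_portion(state, feedback):
    """Apply one feedback event and return the updated view for the VR display."""
    kind = feedback["kind"]
    if kind == "environment_change":    # i) a change in the virtual environment
        state["environment"] = feedback["environment"]
    elif kind == "aut_state_change":    # ii) a change in a state of the AUT
        state["aut_ui"] = feedback["aut_ui"]
    elif kind == "device_interaction":  # iii) an interaction with the virtual user device
        state["aut_ui"] = feedback["resulting_ui"]
    else:
        raise ValueError(f"unknown feedback kind: {kind}")
    # The viewable portion always shows the virtual device's current AUT UI.
    return {"environment": state["environment"], "virtual_device_ui": state["aut_ui"]}

state = {"environment": "lobby", "aut_ui": "login screen"}
view = update_viewable_portion(state, {"kind": "aut_state_change", "aut_ui": "home screen"})
```

The point of the sketch is only the three-way branch from claim 1; the claims additionally route some feedback through the hardware device running the AUT before the view is redrawn.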
Examples relate to testing applications using virtual reality. In one example, a computing device may: cause display of a viewable portion of a virtual environment on a VR display of the VR device: cause display of a virtual user device within the viewable portion of the virtual environment, the virtual user device corresponding to a hardware device that is running an application under test (AUT); cause display, on the virtual user device, of a virtual user interface of the AUT; receive feedback data indicating i) a change in the virtual environment, ii) a change in a state of the AUT, or iii) an interaction with the virtual user device; and in response to receiving feedback data, cause display of an updated viewable portion of the virtual environment on the VR display.1. A non-transitory machine-readable storage medium encoded with instructions executable by a hardware processor of a virtual reality (VR) device for testing applications using virtual reality, the machine-readable storage medium comprising instructions to cause the hardware processor to: cause display of a viewable portion of a virtual environment on a VR display of the VR device; cause display of a virtual user device within the viewable portion of the virtual environment, the virtual user device corresponding to a hardware device that is running an application under test (AUT); cause display, on the virtual user device, of a virtual user interface of the AUT; receive feedback data indicating i) a change in the virtual environment, ii) a change in a state of the AUT, or iii) an interaction with the virtual user device; and in response to receiving feedback data, cause display of an updated viewable portion of the virtual environment on the VR display. 2. 
The storage medium of claim 1, wherein the feedback data indicates a change in the virtual environment, the change in the virtual environment comprising at least one of: a change in a position, within the virtual environment, of a virtual user of the VR device; a change in a view orientation of a virtual user of the VR device; an addition, removal, or change of an object within the viewable portion of the virtual environment; or an addition, removal, or change of an object within a non-viewable portion of the virtual environment. 3. The storage medium of claim 2, wherein the updated viewable portion of the virtual environment includes an updated virtual user interface of the AUT. 4. The storage medium of claim 1, wherein: the feedback data is received from a separate computing device; the feedback data indicates a change in the state of the AUT, and the updated viewable portion of the virtual environment includes an updated virtual user interface of the AUT. 5. The storage medium of claim 1, wherein: the feedback data indicates an interaction with the virtual user device, and the updated viewable portion of the virtual environment includes an updated virtual user interface of the AUT. 6. The storage medium of claim 1, wherein the virtual environment includes a second virtual device that corresponds to a second AUT, and wherein the instructions further cause the hardware processor to: receive second AUT data from a separate computing device; and in response to receiving second AUT data, cause display, on the virtual user device, of an updated virtual user interface of the AUT. 7. 
The storage medium of claim 1, wherein: the feedback data indicates a change in position, within the virtual environment, of a virtual user of the VR device, and the updated viewable portion of the virtual environment is based on the change in position, and wherein the instructions further cause the hardware processor to: send data indicating the change in position to the hardware device running the AUT; receive, from the hardware device, AUT user interface data, the AUT user interface data being based on the change in position; and cause display of an updated virtual user interface of the AUT, the updated virtual user interface being based on the AUT user interface data. 8. A computing device for testing applications using virtual reality, the computing device comprising: a hardware processor; and a data storage device storing instructions that, when executed by the hardware processor, cause the hardware processor to: provide virtual environment data to a virtual reality (VR) device, the virtual environment data specifying a virtual environment in which an application under test (AUT) is to be tested; provide virtual computing device data to the VR device, the virtual computing device data specifying a virtual computing device on which the AUT is to be tested, the virtual computing device corresponding to the computing device; provide virtual user interface data to the VR device, the virtual user interface data specifying data to be displayed, by the VR device, on a virtual display of the virtual computing device; receive, from the VR device, feedback data indicating i) a change in position, within the virtual environment, of a virtual user of the VR device, or ii) a change in a view orientation of a virtual user of the VR device; provide a virtual environment simulation module with sensory data indicating at least one of a position or orientation of the virtual user of the VR device, the sensory data being based on the feedback data; receive, from the virtual 
environment simulation module, computing device state data indicating a change in a simulated state of the computing device; obtain, using the AUT, updated virtual user interface data that is based on the change in the simulated state of the computing device; and provide, to the VR device, the updated virtual user interface data for display on the virtual display of the virtual computing device. 9. The computing device of claim 8, wherein the instructions further cause the hardware processor to: obtain interaction test data indicating an interaction with the virtual user device; obtain updated AUT state data using the interaction test data, the updated AUT state data indicating a change in a state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data. 10. The computing device of claim 8, wherein the instructions further cause the hardware processor to: obtain environment test data indicating a change in the virtual environment; obtain updated AUT state data using the environment test data, the updated AUT state data indicating a change in a state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data. 11. The computing device of claim 8, wherein the instructions further cause the hardware processor to: receive, from the AUT, updated AUT state data indicating a change in a state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data. 12. 
A method for testing applications using virtual reality implemented by at least one data processor, the method comprising: providing virtual environment data to a virtual reality (VR) device, the virtual environment data specifying a virtual environment in which an application under test (AUT) is to be tested; providing virtual computing device data to the VR device, the virtual computing device data specifying a virtual computing device on which the AUT is to be tested; providing virtual user interface data to the VR device, the virtual user interface data i) being based on a current state of the AUT, and ii) specifying data to be displayed, by the VR device, on a virtual display of the virtual computing device; obtain, from the AUT, updated AUT state data indicating a change in the current state of the AUT; and provide, to the VR device, updated virtual interface data for display on the virtual display of the virtual computing device, the updated virtual interface data being based on the updated AUT state data. 13. The method of claim 12, further comprising: receiving, from the VR device, feedback data indicating i) a change in position, within the virtual environment, of a virtual user of the VR device, or ii) a change in a view orientation of a virtual user of the VR device; providing a virtual environment simulation module with sensory data indicating at least one of a position or orientation of the virtual user of the VR device, the sensory data being based on the feedback data; receive, from the virtual environment simulation module, computing device state data indicating a change in a simulated state of the computing device; obtain, using the AUT, second updated virtual user interface data that is based on the change in the simulated state of the computing device; and provide, to the VR device, the second updated virtual user interface data for display on the virtual display of the virtual computing device. 14. 
The method of claim 12, further comprising: obtaining interaction test data indicating an interaction with the virtual user device; obtaining second updated AUT state data using the interaction test data, the second updated AUT state data indicating a second change in the current state of the AUT; and provide, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the second updated AUT state data. 15. The method of claim 12, further comprising: obtaining environment test data indicating a change in the virtual environment; obtaining updated AUT state data using the environment test data, the updated AUT state data indicating a change in a state of the AUT; and providing, to the VR device, updated display data for display on a VR display of the VR device, the updated display data being based on the updated AUT state data.
2,600
10,310
10,310
14,591,047
2,622
A driving method of a light emitting device, in which when an N-type driving TFT is connected to an anode of a light emitting element or a P-type driving TFT is connected to a cathode thereof, the driving TFT operates in a saturation region and an image can be displayed with a desired gray scale level depending on a video signal. In addition, a light emitting device adopting the driving method is provided. According to the invention, when a potential having image data is supplied to a gate of a driving TFT depending on a video signal, a reverse bias voltage is applied to the driving TFT and a light emitting element which are connected in series with each other. Meanwhile, when a pixel displays an image depending on the video signal, a forward bias voltage is applied to the driving TFT and the light emitting element.
1. (canceled) 2. A light emitting device comprising: a scan line; a first signal line; a second signal line; a power supply line; a first transistor; a second transistor; a third transistor; a fourth transistor; a first capacitor; a second capacitor; a first light emitting element; and a second light emitting element, wherein a gate of the first transistor is connected to the scan line, wherein one of a source and a drain of the first transistor is connected to the first signal line, wherein the other of the source and the drain of the first transistor is directly connected to a gate of the second transistor, wherein one of a source and a drain of the second transistor is connected to the first light emitting element, wherein the other of the source and the drain of the second transistor is directly connected to the power supply line, wherein a first electrode of the first capacitor is connected to the gate of the second transistor, wherein a second electrode of the first capacitor is connected to the one of the source and the drain of the second transistor, wherein a gate of the third transistor is connected to the scan line, wherein one of a source and a drain of the third transistor is connected to the second signal line, wherein the other of the source and the drain of the third transistor is directly connected to a gate of the fourth transistor, wherein one of a source and a drain of the fourth transistor is connected to the second light emitting element, wherein the other of the source and the drain of the fourth transistor is directly connected to the power supply line, wherein a first electrode of the second capacitor is connected to the gate of the fourth transistor, and wherein a second electrode of the second capacitor is connected to the one of the source and the drain of the fourth transistor. 3. The light emitting device according to claim 2, wherein the power supply line is supplied with a pulse signal. 4. 
The light emitting device according to claim 2, wherein the power supply line is functionally connected to an inverter. 5. An electronic apparatus comprising the light emitting device according to claim 2 and a housing. 6. A light emitting device comprising: a first line; a second line; a third line; a fourth line; a first transistor; a second transistor; a third transistor; a fourth transistor; a first capacitor; a second capacitor; a first light emitting element; and a second light emitting element, wherein a gate of the first transistor is connected to the first line, wherein one of a source and a drain of the first transistor is connected to the second line, wherein the other of the source and the drain of the first transistor is directly connected to a gate of the second transistor, wherein one of a source and a drain of the second transistor is connected to the first light emitting element, wherein the other of the source and the drain of the second transistor is directly connected to the fourth line, wherein a first electrode of the first capacitor is connected to the gate of the second transistor, wherein a second electrode of the first capacitor is connected to the one of the source and the drain of the second transistor, wherein a gate of the third transistor is connected to the first line, wherein one of a source and a drain of the third transistor is connected to the third line, wherein the other of the source and the drain of the third transistor is directly connected to a gate of the fourth transistor, wherein one of a source and a drain of the fourth transistor is connected to the second light emitting element, wherein the other of the source and the drain of the fourth transistor is directly connected to the fourth line, wherein a first electrode of the second capacitor is connected to the gate of the fourth transistor, and wherein a second electrode of the second capacitor is connected to the one of the source and the drain of the fourth 
transistor. 7. The light emitting device according to claim 6, wherein the fourth line is supplied with a pulse signal. 8. The light emitting device according to claim 6, wherein the fourth line is functionally connected to an inverter. 9. An electronic apparatus comprising the light emitting device according to claim 6 and a housing. 10. A light emitting device comprising: a first line; a second line; a third line; a fourth line; a fifth line; a sixth line; a first transistor; a second transistor; a third transistor; a fourth transistor; a fifth transistor; a sixth transistor; a first capacitor; a second capacitor; a third capacitor; a first light emitting element; a second light emitting element; and a third light emitting element, wherein a gate of the first transistor is connected to the first line, wherein one of a source and a drain of the first transistor is connected to the second line, wherein the other of the source and the drain of the first transistor is directly connected to a gate of the second transistor, wherein one of a source and a drain of the second transistor is connected to the first light emitting element, wherein the other of the source and the drain of the second transistor is directly connected to the fourth line, wherein a first electrode of the first capacitor is connected to the gate of the second transistor, wherein a second electrode of the first capacitor is connected to the one of the source and the drain of the second transistor, wherein a gate of the third transistor is connected to the first line, wherein one of a source and a drain of the third transistor is connected to the third line, wherein the other of the source and the drain of the third transistor is directly connected to a gate of the fourth transistor, wherein one of a source and a drain of the fourth transistor is connected to the second light emitting element, wherein the other of the source and the drain of the fourth transistor is directly connected to the fourth 
line, wherein a first electrode of the second capacitor is connected to the gate of the fourth transistor, wherein a second electrode of the second capacitor is connected to the one of the source and the drain of the fourth transistor, wherein a gate of the fifth transistor is connected to the fifth line, wherein one of a source and a drain of the fifth transistor is connected to the second line, wherein the other of the source and the drain of the fifth transistor is directly connected to a gate of the sixth transistor, wherein one of a source and a drain of the sixth transistor is connected to the third light emitting element, wherein the other of the source and the drain of the sixth transistor is directly connected to the sixth line, wherein a first electrode of the third capacitor is connected to the gate of the sixth transistor, and wherein a second electrode of the third capacitor is connected to the one of the source and the drain of the sixth transistor. 11. The light emitting device according to claim 10, wherein the fourth line is not directly connected to the sixth line. 12. The light emitting device according to claim 10, wherein the fourth line is supplied with a first pulse signal, and wherein the sixth line is supplied with a second pulse signal. 13. The light emitting device according to claim 10, wherein the fourth line is functionally connected to an inverter. 14. An electronic apparatus comprising the light emitting device according to claim 10 and a housing.
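The claims above recite a 2T1C pixel topology: a switching transistor whose gate sits on the scan line feeds the gate of a driving transistor, and a storage capacitor bridges that gate and the driving transistor's light-emitting-element terminal, with pairs of pixels sharing the scan and power supply lines. A minimal sketch of that connectivity as a netlist, with a programmatic check of the recited topology (all names such as `PIXELS` and `follows_claim_2` are illustrative, not from the patent):

```python
# Hypothetical netlist for the two-pixel circuit of claim 2; net names are illustrative.
PIXELS = [
    {   # first pixel: switching TFT T1, driving TFT T2, storage capacitor C1
        "switch_gate": "scan_line",
        "switch_source": "signal_line_1",
        "switch_drain": "gate_T2",          # directly connected to the driving gate
        "drive_gate": "gate_T2",
        "drive_source": "led_1_terminal",   # one terminal to the light emitting element
        "drive_drain": "power_supply",      # other terminal to the power supply line
        "cap": ("gate_T2", "led_1_terminal"),  # capacitor bridges gate and source of T2
    },
    {   # second pixel: T3/T4/C2, sharing the scan line and power supply line
        "switch_gate": "scan_line",
        "switch_source": "signal_line_2",
        "switch_drain": "gate_T4",
        "drive_gate": "gate_T4",
        "drive_source": "led_2_terminal",
        "drive_drain": "power_supply",
        "cap": ("gate_T4", "led_2_terminal"),
    },
]

def follows_claim_2(px):
    """Check the 2T1C pattern recited in claim 2 for one pixel."""
    switch_feeds_drive_gate = px["switch_drain"] == px["drive_gate"]
    cap_bridges_gate_source = set(px["cap"]) == {px["drive_gate"], px["drive_source"]}
    return switch_feeds_drive_gate and cap_bridges_gate_source

assert all(follows_claim_2(p) for p in PIXELS)
# Both pixels share the scan line and the power supply line, as claimed:
assert PIXELS[0]["switch_gate"] == PIXELS[1]["switch_gate"]
assert PIXELS[0]["drive_drain"] == PIXELS[1]["drive_drain"]
```

The same check extends to the three-pixel variant of claim 10, where the third pixel hangs on its own scan line (the fifth line) and its own supply line (the sixth line).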
A driving method of a light emitting device in which, when an N-type driving TFT is connected to an anode of a light emitting element or a P-type driving TFT is connected to a cathode thereof, the driving TFT operates in a saturation region and an image can be displayed with a desired gray scale level depending on a video signal. In addition, a light emitting device adopting the driving method is provided. According to the invention, when a potential having image data is supplied to a gate of a driving TFT depending on a video signal, a reverse bias voltage is applied to the driving TFT and a light emitting element which are connected in series with each other. Meanwhile, when a pixel displays an image depending on the video signal, a forward bias voltage is applied to the driving TFT and the light emitting element. 1. (canceled) 2. A light emitting device comprising: a scan line; a first signal line; a second signal line; a power supply line; a first transistor; a second transistor; a third transistor; a fourth transistor; a first capacitor; a second capacitor; a first light emitting element; and a second light emitting element, wherein a gate of the first transistor is connected to the scan line, wherein one of a source and a drain of the first transistor is connected to the first signal line, wherein the other of the source and the drain of the first transistor is directly connected to a gate of the second transistor, wherein one of a source and a drain of the second transistor is connected to the first light emitting element, wherein the other of the source and the drain of the second transistor is directly connected to the power supply line, wherein a first electrode of the first capacitor is connected to the gate of the second transistor, wherein a second electrode of the first capacitor is connected to the one of the source and the drain of the second transistor, wherein a gate of the third transistor is connected to the scan line, wherein one of a source and 
a drain of the third transistor is connected to the second signal line, wherein the other of the source and the drain of the third transistor is directly connected to a gate of the fourth transistor, wherein one of a source and a drain of the fourth transistor is connected to the second light emitting element, wherein the other of the source and the drain of the fourth transistor is directly connected to the power supply line, wherein a first electrode of the second capacitor is connected to the gate of the fourth transistor, and wherein a second electrode of the second capacitor is connected to the one of the source and the drain of the fourth transistor. 3. The light emitting device according to claim 2, wherein the power supply line is supplied with a pulse signal. 4. The light emitting device according to claim 2, wherein the power supply line is functionally connected to an inverter. 5. An electronic apparatus comprising the light emitting device according to claim 2 and a housing. 6. 
A light emitting device comprising: a first line; a second line; a third line; a fourth line; a first transistor; a second transistor; a third transistor; a fourth transistor; a first capacitor; a second capacitor; a first light emitting element; and a second light emitting element, wherein a gate of the first transistor is connected to the first line, wherein one of a source and a drain of the first transistor is connected to the second line, wherein the other of the source and the drain of the first transistor is directly connected to a gate of the second transistor, wherein one of a source and a drain of the second transistor is connected to the first light emitting element, wherein the other of the source and the drain of the second transistor is directly connected to the fourth line, wherein a first electrode of the first capacitor is connected to the gate of the second transistor, wherein a second electrode of the first capacitor is connected to the one of the source and the drain of the second transistor, wherein a gate of the third transistor is connected to the first line, wherein one of a source and a drain of the third transistor is connected to the third line, wherein the other of the source and the drain of the third transistor is directly connected to a gate of the fourth transistor, wherein one of a source and a drain of the fourth transistor is connected to the second light emitting element, wherein the other of the source and the drain of the fourth transistor is directly connected to the fourth line, wherein a first electrode of the second capacitor is connected to the gate of the fourth transistor, and wherein a second electrode of the second capacitor is connected to the one of the source and the drain of the fourth transistor. 7. The light emitting device according to claim 6, wherein the fourth line is supplied with a pulse signal. 8. 
The light emitting device according to claim 6, wherein the fourth line is functionally connected to an inverter. 9. An electronic apparatus comprising the light emitting device according to claim 6 and a housing. 10. A light emitting device comprising: a first line; a second line; a third line; a fourth line; a fifth line; a sixth line; a first transistor; a second transistor; a third transistor; a fourth transistor; a fifth transistor; a sixth transistor; a first capacitor; a second capacitor; a third capacitor; a first light emitting element; a second light emitting element; and a third light emitting element, wherein a gate of the first transistor is connected to the first line, wherein one of a source and a drain of the first transistor is connected to the second line, wherein the other of the source and the drain of the first transistor is directly connected to a gate of the second transistor, wherein one of a source and a drain of the second transistor is connected to the first light emitting element, wherein the other of the source and the drain of the second transistor is directly connected to the fourth line, wherein a first electrode of the first capacitor is connected to the gate of the second transistor, wherein a second electrode of the first capacitor is connected to the one of the source and the drain of the second transistor, wherein a gate of the third transistor is connected to the first line, wherein one of a source and a drain of the third transistor is connected to the third line, wherein the other of the source and the drain of the third transistor is directly connected to a gate of the fourth transistor, wherein one of a source and a drain of the fourth transistor is connected to the second light emitting element, wherein the other of the source and the drain of the fourth transistor is directly connected to the fourth line, wherein a first electrode of the second capacitor is connected to the gate of the fourth transistor, wherein a second 
electrode of the second capacitor is connected to the one of the source and the drain of the fourth transistor, wherein a gate of the fifth transistor is connected to the fifth line, wherein one of a source and a drain of the fifth transistor is connected to the second line, wherein the other of the source and the drain of the fifth transistor is directly connected to a gate of the sixth transistor, wherein one of a source and a drain of the sixth transistor is connected to the third light emitting element, wherein the other of the source and the drain of the sixth transistor is directly connected to the sixth line, wherein a first electrode of the third capacitor is connected to the gate of the sixth transistor, and wherein a second electrode of the third capacitor is connected to the one of the source and the drain of the sixth transistor. 11. The light emitting device according to claim 10, wherein the fourth line is not directly connected to the sixth line. 12. The light emitting device according to claim 10, wherein the fourth line is supplied with a first pulse signal, and wherein the sixth line is supplied with a second pulse signal. 13. The light emitting device according to claim 10, wherein the fourth line is functionally connected to an inverter. 14. An electronic apparatus comprising the light emitting device according to claim 10 and a housing.
2,600
10,311
10,311
15,664,630
2,643
The present disclosure relates to a communication method and system for converging a 5th-Generation (5G) communication system for supporting higher data rates beyond a 4th-Generation (4G) system with a technology for Internet of Things (IoT). The present disclosure may be applied to intelligent services based on the 5G communication technology and the IoT-related technology, such as smart home, smart building, smart city, smart car, connected car, health care, digital education, smart retail, security and safety services. An information format and apparatus used by a base station to make a scheduling decision when the base station allocates resources to a terminal in a mobile communication system are provided. Operations of a terminal to report a maximum transmission power accurately to the base station in a scheduling process are also provided. A method for calculating a maximum transmit power in a constant manner regardless of a channel status is also provided.
1. A method for reporting power headroom (PH) by a terminal in a wireless communication system, the method comprising: detecting a PH report (PHR) triggering event; identifying whether a serving cell with an uplink is configured and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured; and transmitting, to a base station, a PHR based on a result of the identification. 2. The method of claim 1, wherein the transmitting of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, transmitting the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 3. The method of claim 2, wherein the second PH includes a type 2 PH associated with information on the transmit power for the simultaneous transmission of PUCCH and PUSCH. 4. The method of claim 2, wherein the PCMAX value is determined based on a terminal power class and a maximum uplink transmit power allowed for the serving cell. 5. The method of claim 1, wherein the transmitting of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmitting the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 6. The method of claim 1, wherein the transmitting of the PHR comprises, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmitting the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal. 7. 
The method of claim 1, further comprising: receiving, from the base station, first control information configuring an extended PHR; and receiving, from the base station, second control information configuring the simultaneous PUCCH and PUSCH transmission. 8. A method for receiving power headroom (PH) by a base station in a wireless communication system, the method comprising: identifying whether a serving cell with an uplink is configured to a terminal and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured to the terminal; and receiving, from the terminal, a PH report (PHR) based on a result of the identification. 9. The method of claim 8, wherein the receiving of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, receiving the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 10. The method of claim 8, wherein the receiving of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, receiving the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 11. The method of claim 8, wherein the receiving of the PHR comprises, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, receiving the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal. 12. 
A terminal for reporting power headroom (PH) in a wireless communication system, the terminal comprising: a transceiver configured to transmit and receive signals; and a controller coupled with the transceiver and configured to: detect a PH report (PHR) triggering event, identify whether a serving cell with an uplink is configured and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured, and transmit, to a base station, a PHR based on a result of the identification. 13. The terminal of claim 12, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, transmit the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 14. The terminal of claim 13, wherein the PCMAX value is determined based on a terminal power class and a maximum uplink transmit power allowed for the serving cell. 15. The terminal of claim 13, wherein the second PH includes a type 2 PH associated with information on the transmit power for the simultaneous transmission of PUCCH and PUSCH. 16. The terminal of claim 12, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmit the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 17. The terminal of claim 12, wherein the controller is further configured to, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmit the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal. 18. 
The terminal of claim 12, wherein the controller is further configured to: receive, from the base station, first control information configuring an extended PHR, and receive, from the base station, second control information configuring the simultaneous PUCCH and PUSCH transmission. 19. A base station for receiving power headroom (PH) in a wireless communication system, the base station comprising: a transceiver configured to transmit and receive signals; and a controller coupled with the transceiver and configured to: identify whether a serving cell with an uplink is configured to a terminal and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured to the terminal, and receive, from the terminal, a PH report (PHR) based on a result of the identification. 20. The base station of claim 19, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, receive the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 21. The base station of claim 19, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, receive the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 22. The base station of claim 19, wherein the controller is further configured to, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, receive the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal.
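The claims above define a three-way decision on what the power headroom report carries, depending on whether an uplink serving cell is configured and whether simultaneous PUCCH and PUSCH transmission is configured. A minimal sketch of that decision, following only the claim text (field names such as `type1_ph` and `pcmax_info` are illustrative, not from the patent or the 3GPP specifications):

```python
def phr_fields(uplink_configured: bool, simul_pucch_pusch: bool) -> list:
    """Return the fields the PHR carries, per claims 2, 5 and 6 (illustrative names)."""
    if uplink_configured and simul_pucch_pusch:
        # Claim 2: Type 1 PH, PCMAX information, and a second (Type 2) PH for
        # the simultaneous PUCCH and PUSCH transmission.
        return ["type1_ph", "pcmax_info", "type2_ph"]
    if uplink_configured:
        # Claim 5: Type 1 PH plus the corresponding PCMAX information, no Type 2 PH.
        return ["type1_ph", "pcmax_info"]
    # Claim 6: only the Type 1 PH based on the terminal's PCMAX value.
    return ["type1_ph"]

assert phr_fields(True, True) == ["type1_ph", "pcmax_info", "type2_ph"]
assert phr_fields(True, False) == ["type1_ph", "pcmax_info"]
assert phr_fields(False, False) == ["type1_ph"]
```

The same branching appears three times in the record (method, terminal, and base station claims); only the verb changes between transmitting and receiving the report.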
The present disclosure relates to a communication method and system for converging a 5th-Generation (5G) communication system for supporting higher data rates beyond a 4th-Generation (4G) system with a technology for Internet of Things (IoT). The present disclosure may be applied to intelligent services based on the 5G communication technology and the IoT-related technology, such as smart home, smart building, smart city, smart car, connected car, health care, digital education, smart retail, security and safety services. An information format and apparatus used by a base station to make a scheduling decision when the base station allocates resources to a terminal in a mobile communication system are provided. Operations of a terminal to report a maximum transmission power accurately to the base station in a scheduling process are also provided. A method for calculating a maximum transmit power in a constant manner regardless of a channel status is also provided. 1. A method for reporting power headroom (PH) by a terminal in a wireless communication system, the method comprising: detecting a PH report (PHR) triggering event; identifying whether a serving cell with an uplink is configured and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured; and transmitting, to a base station, a PHR based on a result of the identification. 2. The method of claim 1, wherein the transmitting of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, transmitting the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 3. The method of claim 2, wherein the second PH includes a type 2 PH associated with information on the transmit power for the simultaneous transmission of PUCCH and PUSCH. 4. 
The method of claim 2, wherein the PCMAX value is determined based on a terminal power class and a maximum uplink transmit power allowed for the serving cell. 5. The method of claim 1, wherein the transmitting of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmitting the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 6. The method of claim 1, wherein the transmitting of the PHR comprises, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmitting the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal. 7. The method of claim 1, further comprising: receiving, from the base station, first control information configuring an extended PHR; and receiving, from the base station, second control information configuring the simultaneous PUCCH and PUSCH transmission. 8. A method for receiving power headroom (PH) by a base station in a wireless communication system, the method comprising: identifying whether a serving cell with an uplink is configured to a terminal and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured to the terminal; and receiving, from the terminal, a PH report (PHR) based on a result of the identification. 9. The method of claim 8, wherein the receiving of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, receiving the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 10. 
The method of claim 8, wherein the receiving of the PHR comprises, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, receiving the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 11. The method of claim 8, wherein the receiving of the PHR comprises, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, receiving the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal. 12. A terminal for reporting power headroom (PH) in a wireless communication system, the terminal comprising: a transceiver configured to transmit and receive signals; and a controller coupled with the transceiver and configured to: detect a PH report (PHR) triggering event, identify whether a serving cell with an uplink is configured and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured, and transmit, to a base station, a PHR based on a result of the identification. 13. The terminal of claim 12, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, transmit the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 14. The terminal of claim 13, wherein the PCMAX value is determined based on a terminal power class and a maximum uplink transmit power allowed for the serving cell. 15. The terminal of claim 13, wherein the second PH includes a type 2 PH associated with information on the transmit power for the simultaneous transmission of PUCCH and PUSCH. 16. 
The terminal of claim 12, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmit the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 17. The terminal of claim 12, wherein the controller is further configured to, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, transmit the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal. 18. The terminal of claim 12, wherein the controller is further configured to: receive, from the base station, first control information configuring an extended PHR, and receive, from the base station, second control information configuring the simultaneous PUCCH and PUSCH transmission. 19. A base station for receiving power headroom (PH) in a wireless communication system, the base station comprising: a transceiver configured to transmit and receive signals; and a controller coupled with the transceiver and configured to: identify whether a serving cell with an uplink is configured to a terminal and a simultaneous physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmission is configured to the terminal, and receive, from the terminal, a PH report (PHR) based on a result of the identification. 20. The base station of claim 19, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is configured, receive the PHR including a first PH based on a maximum transmission power (PCMAX) value of the terminal, PCMAX information corresponding to the PCMAX value, and a second PH. 21. 
The base station of claim 19, wherein the controller is further configured to, if the serving cell with the uplink is configured and the simultaneous PUCCH and PUSCH transmission is not configured, receive the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal and PCMAX information corresponding to the PCMAX value. 22. The base station of claim 19, wherein the controller is further configured to, if the serving cell with the uplink is not configured and the simultaneous PUCCH and PUSCH transmission is not configured, receive the PHR including a first PH of the terminal based on a maximum transmission power (PCMAX) value of the terminal.
2,600
10,312
10,312
15,300,542
2,632
A distribution point unit using discrete multi-tone technology, the distribution point unit being configured for connection to a wired shared medium associated with an available spectrum, the wired shared medium connecting the distribution point unit with a plurality of users, the distribution point unit including an assigning unit configured for assigning a first portion of the available spectrum to a first user of the plurality of users and a second portion of the available spectrum to a second user of the plurality of users; a sending and receiving unit configured for encoding and decoding digital data, using discrete multi-tone technology, and configured for sending and receiving encoded digital data over the assigned first portion to/from the first user and over the assigned second portion to/from the second user.
1. A distribution point unit using discrete multi-tone technology, said distribution point unit being configured for connection to a wired shared medium associated with an available spectrum, said wired shared medium connecting said distribution point unit with a plurality of users, said distribution point unit comprising: an assigning unit configured for assigning a first portion of the available spectrum to a first user of said plurality of users and a second portion of the available spectrum to a second user of said plurality of users; a sending and receiving unit configured for encoding and decoding digital data, using discrete multi-tone technology, and configured for sending and receiving encoded digital data over the assigned first portion to/from the first user and over the assigned second portion to/from the second user; wherein the assigning unit is configured for initially assigning a first portion and a second portion of a predetermined initialization band of the available spectrum to a first user and a second user, respectively, and for subsequently modifying the assigned first and second portion to another first and second portion outside the predetermined initialization band. 2. The distribution point unit according to claim 1, wherein the assigned first portion of the available spectrum comprises at least a first downstream sub portion for downstream digital data traffic and a first upstream sub portion for upstream digital data traffic, and wherein the assigned second portion of the available spectrum comprises at least a second downstream sub portion for downstream digital data traffic and a second upstream sub portion for upstream digital data traffic; wherein said first downstream sub portion does not overlap with said second downstream sub portion and wherein said first upstream sub portion does not overlap with said second upstream sub portion. 3. 
The distribution point unit according to claim 1, wherein the assigned first portion does not overlap with the assigned second portion; and wherein the assigned first portion and assigned second portion are used for downstream and upstream digital data traffic. 4. The distribution point unit according to claim 1, wherein the assigning unit is configured for setting a gain of at least one carrier included in the assigned first portion to a first predetermined value, and for setting a gain of at least one carrier outside the assigned first portion to a second predetermined value, in order to indicate the range of the assigned first portion to the first user. 5. The distribution point unit according to claim 1, wherein the assigning unit is configured for receiving from at least one user of said plurality of users a unique user identification; and wherein said assigning unit is configured for performing said initial assigning by providing a robust management channel (RMC) symbol to said user, said RMC symbol containing a user identifier based on said unique user identification, which user identifier is recognizable by said at least one user, such that said at least one user is assigned a predefined portion of the spectrum. 6. The distribution point unit according to claim 1, wherein the assigning unit is configured for modifying the assigned first portion and/or the assigned second portion; wherein the assigning unit is preferably configured for collecting input data, which preferably comprises signal-to-noise ratio parameters and/or data rate demands, from the first and the second user of the plurality of users, and for modifying the assigned first and/or second portion of the spectrum based on the collected input data. 7. 
The distribution point unit according to claim 1, wherein the assigning unit is configured for assigning in a first step an intermediate band of the available spectrum to said first user and said second user and for assigning in a second step a first portion of said intermediate band to the first user and a second portion of said intermediate band to the second user. 8. The distribution point unit according to claim 1, wherein the assigning unit is configured for selecting a set of pilot tones and for sending said set of pilot tones to said plurality of users. 9. A method for using discrete multi-tone technology in connection with a wired shared medium associated with an available spectrum, which is connected with a plurality of users, comprising: assigning a first portion of the available spectrum to a first user of said plurality of users and a second portion of the available spectrum to a second user of said plurality of users; encoding and decoding digital data, using discrete multi-tone technology; sending and receiving encoded digital data over the first portion to/from the first user and over the second portion to/from the second user; wherein said assigning comprises initially assigning a first portion and a second portion of a predetermined initialization band of the spectrum to a first user and a second user, respectively; and subsequently modifying the assigned first and second portion to another first and second portion outside the predetermined initialization band. 10. 
The method according to claim 9, wherein the assigned first portion of the available spectrum comprises at least a first downstream sub portion for downstream digital data traffic and a first upstream sub portion for upstream digital data traffic, and wherein the assigned second portion of the available spectrum comprises at least a second downstream sub portion for downstream digital data traffic and a second upstream sub portion for upstream digital data traffic; wherein said first downstream sub portion does not overlap with said second downstream sub portion and wherein said first upstream sub portion does not overlap with said second upstream sub portion. 11. The method according to claim 9, further comprising collecting input data from at least the first and the second user of the plurality of users and modifying the first portion and/or the second portion based on the collected input data. 12. A customer premises equipment for being connected through a wired shared medium with a distribution point unit, said customer premises equipment being configured for being assigned a portion of the available spectrum by said distribution point unit; encoding and decoding digital data, using discrete multi-tone technology; and sending and receiving encoded digital data over the assigned portion to/from the distribution point unit; and being configured for being initially assigned a portion of a predetermined initialization band of the available spectrum; and being subsequently assigned another portion outside the predetermined initialization band. 13. 
The customer premises equipment according to claim 12, further being configured for receiving from the distribution point unit a set of pilot tones, for selecting at least one pilot tone of said set; and/or wherein the customer premises equipment is configured for being assigned a portion of the available spectrum by: receiving gain adjuster data of a carrier from the DPU, placing said carrier in a monitored tone set if said received gain adjuster data fulfills a predetermined criterion indicating that the carrier is within the portion to be assigned; building up an equalizer for said carrier; and requesting bitloading for said carrier. 14. (canceled) 15. A digital data storage medium encoding a machine-executable program of instructions to perform any one of the steps of the method of claim 9.
A distribution point unit using discrete multi-tone technology, the distribution point unit being configured for connection to a wired shared medium associated with an available spectrum, the wired shared medium connecting the distribution point unit with a plurality of users, the distribution point unit including an assigning unit configured for assigning a first portion of the available spectrum to a first user of the plurality of users and a second portion of the available spectrum to a second user of the plurality of users; a sending and receiving unit configured for encoding and decoding digital data, using discrete multi-tone technology, and configured for sending and receiving encoded digital data over the assigned first portion to/from the first user and over the assigned second portion to/from the second user.1. A distribution point unit using discrete multi-tone technology, said distribution point unit being configured for connection to a wired shared medium associated with an available spectrum, said wired shared medium connecting said distribution point unit with a plurality of users, said distribution point unit comprising: an assigning unit configured for assigning a first portion of the available spectrum to a first user of said plurality of users and a second portion of the available spectrum to a second user of said plurality of users; a sending and receiving unit configured for encoding and decoding digital data, using discrete multi-tone technology, and configured for sending and receiving encoded digital data over the assigned first portion to/from the first user and over the assigned second portion to/from the second user; wherein the assigning unit is configured for initially assigning a first portion and a second portion of a predetermined initialization band of the available spectrum to a first user and a second user, respectively, and for subsequently modifying the assigned first and second portion to another first and second portion 
outside the predetermined initialization band. 2. The distribution point unit according to claim 1, wherein the assigned first portion of the available spectrum comprises at least a first downstream sub portion for downstream digital data traffic and a first upstream sub portion for upstream digital data traffic, and wherein the assigned second portion of the available spectrum comprises at least a second downstream sub portion for downstream digital data traffic and a second upstream sub portion for upstream digital data traffic; wherein said first downstream sub portion does not overlap with said second downstream sub portion and wherein said first upstream sub portion does not overlap with said second upstream sub portion. 3. The distribution point unit according to claim 1, wherein the assigned first portion does not overlap with the assigned second portion; and wherein the assigned first portion and assigned second portion are used for downstream and upstream digital data traffic. 4. The distribution point unit according to claim 1, wherein the assigning unit is configured for setting a gain of at least one carrier included in the assigned first portion to a first predetermined value, and for setting a gain of at least one carrier outside the assigned first portion to a second predetermined value, in order to indicate the range of the assigned first portion to the first user. 5. The distribution point unit according to claim 1, wherein the assigning unit is configured for receiving from at least one user of said plurality of users a unique user identification; and wherein said assigning unit is configured for performing said initial assigning by providing a robust management channel (RMC) symbol to said user, said RMC symbol containing a user identifier based on said unique user identification, which user identifier is recognizable by said at least one user, such that said at least one user is assigned a predefined portion of the spectrum. 6. 
The distribution point unit according to claim 1, wherein the assigning unit is configured for modifying the assigned first portion and/or the assigned second portion; wherein the assigning unit is preferably configured for collecting input data, which preferably comprises signal-to-noise ratio parameters and/or data rate demands, from the first and the second user of the plurality of users, and for modifying the assigned first and/or second portion of the spectrum based on the collected input data. 7. The distribution point unit according to claim 1, wherein the assigning unit is configured for assigning in a first step an intermediate band of the available spectrum to said first user and said second user and for assigning in a second step a first portion of said intermediate band to the first user and a second portion of said intermediate band to the second user. 8. The distribution point unit according to claim 1, wherein the assigning unit is configured for selecting a set of pilot tones, for sending said set of pilot tones to said plurality of users. 9. 
A method for using discrete multi-tone technology in connection with a wired shared medium associated with an available spectrum, which is connected with a plurality of users, comprising: assigning a first portion of the available spectrum to a first user of said plurality of users and a second portion of the available spectrum to a second user of said plurality of users; encoding and decoding digital data, using discrete multi-tone technology; sending and receiving encoded digital data over the first portion to/from the first user and over the second portion to/from the second user; wherein said assigning comprises initially assigning a first portion and a second portion of a predetermined initialization band of the spectrum to a first user and a second user, respectively; and subsequently modifying the assigned first and second portion to another first and second portion outside the predetermined initialization band. 10. The method according to claim 9, wherein the assigned first portion of the available spectrum comprises at least a first downstream sub portion for downstream digital data traffic and a first upstream sub portion for upstream digital data traffic, and wherein the assigned second portion of the available spectrum comprises at least a second downstream sub portion for downstream digital data traffic and a second upstream sub portion for upstream digital data traffic; wherein said first downstream sub portion does not overlap with said second downstream sub portion and wherein said first upstream sub portion does not overlap with said second upstream sub portion. 11. The method according to claim 9, further comprising collecting input data from at least the first and the second user of the plurality of users and modifying the first portion and/or the second portion based on the collected input data. 12. 
A customer premises equipment for being connected through a wired shared medium with a distribution point unit, said customer premises equipment being configured for being assigned a portion of the available spectrum by said distribution point unit; encoding and decoding digital data, using discrete multi-tone technology; and sending and receiving encoded digital data over the assigned portion to/from the distribution point unit; and being configured for being initially assigned a portion of a predetermined initialization band of the available spectrum; and being subsequently assigned another portion outside the predetermined initialization band. 13. The customer premises equipment according to claim 12, further being configured for receiving from the distribution point unit a set of pilot tones, for selecting at least one pilot tone of said set; and/or wherein the customer premises equipment is configured for being assigned a portion of the available spectrum by: receiving gain adjuster data of a carrier from the DPU, placing said carrier in a monitored tone set if said received gain adjuster data fulfills a predetermined criterion indicating that the carrier is within the portion to be assigned; building up an equalizer for said carrier; and requesting bitloading for said carrier. 14. (canceled) 15. A digital data storage medium encoding a machine-executable program of instructions to perform any one of the steps of the method of claim 9.
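The two-phase assignment claimed above (an initial slice of a predetermined initialization band, later moved to a non-overlapping portion outside that band) can be sketched as follows. This is a minimal illustration, not the patented method: the band edges, carrier counts, user names, and the even-split policy are all assumptions.

```python
# Hypothetical two-phase spectrum assignment: each user first gets a slice of
# a predetermined initialization band, then is moved to a non-overlapping
# portion of the spectrum outside that band. All constants are illustrative.

INIT_BAND = (0, 100)       # carrier-index range reserved for initialization (assumed)
FULL_SPECTRUM = (0, 1000)  # full available spectrum in carrier indices (assumed)

def initial_assignment(users):
    """Phase 1: split the initialization band evenly among the users."""
    lo, hi = INIT_BAND
    width = (hi - lo) // len(users)
    return {u: (lo + i * width, lo + (i + 1) * width)
            for i, u in enumerate(users)}

def final_assignment(users):
    """Phase 2: move each user to a non-overlapping portion outside the init band."""
    lo, hi = INIT_BAND[1], FULL_SPECTRUM[1]
    width = (hi - lo) // len(users)
    return {u: (lo + i * width, lo + (i + 1) * width)
            for i, u in enumerate(users)}

users = ["user1", "user2"]
phase1 = initial_assignment(users)   # e.g. user1 -> (0, 50), user2 -> (50, 100)
phase2 = final_assignment(users)     # portions entirely outside (0, 100)
```

Consecutive half-open ranges guarantee the non-overlap required by the claims; a real assigning unit would instead size portions from collected input data such as signal-to-noise ratios and data-rate demands (claim 6).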
2,600
10,313
10,313
16,009,696
2,611
A method includes: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including images of a scene and depth data about the scene; identifying, using the depth data, first image content in the images associated with a depth value that satisfies a criterion; and generating modified 3D information by applying first shading regarding the identified first image content. The modified 3D information can be provided to a second 3D system. The scene can contain an object in the images, and generating the modified 3D information can include determining a surface normal for second image content of the object, and applying second shading regarding the second image content based on the determined surface normal. A portion of the object can have a greater depth value than another portion, and second shading can be applied regarding a portion of the images where the second portion is located.
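The depth-criterion shading described in the abstract can be sketched in a few lines. This is a simplified illustration under assumed values: the depth threshold, pixel layout, and black-out policy are illustrative, not taken from the application.

```python
# Minimal sketch of depth-based shading: image content whose depth value
# exceeds a predefined threshold (here treated as background) is rendered
# as black. Threshold and data layout are assumptions for illustration.

PREDEFINED_DEPTH = 5.0  # metres; assumed criterion

def shade_by_depth(pixels, depths, threshold=PREDEFINED_DEPTH):
    """Return pixels with content beyond the predefined depth set to black."""
    return [(0, 0, 0) if d > threshold else p
            for p, d in zip(pixels, depths)]

pixels = [(200, 180, 160), (90, 90, 90), (255, 0, 0)]
depths = [1.2, 7.5, 4.9]             # per-pixel depth data (assumed)
shaded = shade_by_depth(pixels, depths)  # middle pixel becomes (0, 0, 0)
```

With the inverse criterion of claim 6 (content closer than a predefined depth), the comparison would simply flip.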
1. A method comprising: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including images of a scene and depth data about the scene, the images generated by cameras capturing respective views of the scene; identifying, using the depth data, first image content in the images associated with a depth value that satisfies a criterion; generating modified 3D information by applying first shading regarding the identified first image content; making, using the modified 3D information, a 3D presentation that includes at least a portion of the respective views of the scene; and identifying a hole in at least one of the images, wherein generating the modified 3D information comprises applying second shading regarding the hole. 2. The method of claim 1, wherein the criterion includes that the first image content is beyond a predefined depth in the scene. 3. The method of claim 2, wherein applying the first shading comprises causing the first image content to be rendered as black. 4. The method of claim 3, wherein use of the predefined depth, and applying the first shading, comprises causing a background of the images to be rendered as black. 5. The method of claim 1, wherein the first shading is dependent on a depth value of the first image content. 6. The method of claim 1, wherein the criterion includes that the first image content is closer than a predefined depth in the scene. 7. The method of claim 1, wherein the scene contains an object in the images, and wherein generating the modified 3D information further comprises determining a surface normal for second image content of the object, and applying second shading regarding the second image content based on the determined surface normal. 8. The method of claim 7, wherein applying the second shading comprises determining a dot product between the surface normal and a camera vector, and selecting the second shading based on the determined dot product. 9. 
The method of claim 7, wherein applying the second shading comprises fading the second image content to black based on the second image content facing away in the images. 10. A method comprising: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including images of a scene and depth data about the scene, the images generated by cameras capturing respective views of the scene; identifying, using the depth data, first image content in the images associated with a depth value that satisfies a criterion; generating modified 3D information by applying first shading regarding the identified first image content; and making, using the modified 3D information, a 3D presentation that includes at least a portion of the respective views of the scene; wherein the scene contains an object in the images and a first portion of the object has a greater depth value in the depth data than a second portion of the object, and wherein generating the modified 3D information further comprises applying second shading regarding a portion of the images where second image content corresponding to the second portion is located. 11. The method of claim 10, wherein applying the second shading comprises selecting the portion of the images based on a portion of a display for presentation of the images. 12. The method of claim 11, wherein the object comprises a person, the first portion of the object comprises a face of the person, the second portion of the object comprises a torso of the person, and the portion of the display comprises a bottom of the display. 13. (canceled) 14. The method of claim 1, wherein generating the modified 3D information further comprises hiding a depth error in the 3D information. 15. The method of claim 14, wherein the depth data is based on infrared (IR) signals returned from the scene, and wherein generating the modified 3D information comprises applying second shading proportional to a strength of the IR signals. 16. 
The method of claim 18, further comprising stereoscopically presenting the modified 3D information at the second 3D system, wherein the first image content has the first shading. 17. The method of claim 16, wherein stereoscopically presenting the modified 3D information comprises additively rendering the images. 18. The method of claim 1, further comprising providing the modified 3D information to a second 3D system. 19. A system comprising: cameras; a depth sensor; and a three-dimensional (3D) content module having a processor executing instructions stored in a memory, the instructions causing the processor to identify, using depth data included in 3D information, first image content in images of a scene included in the 3D information, the images generated by the cameras capturing respective views of the scene, the first image content identified as being associated with a depth value that satisfies a criterion, to generate modified 3D information by applying first shading regarding the identified first image content, and to make, using the modified 3D information, a 3D presentation that includes at least a portion of the respective views of the scene, wherein the scene contains an object in the images, and wherein generating the modified 3D information further comprises determining a surface normal for second image content of the object, and applying second shading regarding the second image content based on the determined surface normal. 20. (canceled) 21. The system of claim 19, wherein the scene contains an object in the images and a first portion of the object has a greater depth value than a second portion of the object, and wherein generating the modified 3D information further comprises applying second shading regarding a portion of the images where second image content corresponding to the second portion is located.
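The surface-normal shading of claims 7-9 (selecting shading from the dot product between a surface normal and a camera vector, fading away-facing content to black) can be illustrated as below. The vector conventions and intensity scaling are assumptions for the sketch, not the claimed implementation.

```python
# Hedged sketch of dot-product shading: the dot product between a unit
# surface normal and a unit camera vector scales pixel intensity, so
# content facing away from the camera fades to black. Conventions assumed.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(pixel, normal, camera_vec):
    """Scale a pixel by max(0, normal . camera_vec); back-facing -> black."""
    factor = max(0.0, dot(normal, camera_vec))
    return tuple(int(c * factor) for c in pixel)

camera = (0.0, 0.0, 1.0)                                  # camera along +z (assumed)
facing = shade((200, 100, 50), (0.0, 0.0, 1.0), camera)   # unchanged
away = shade((200, 100, 50), (0.0, 0.0, -1.0), camera)    # fades to black
```

Clamping the dot product at zero is what produces the "fading to black" behavior of claim 9 for surfaces facing away in the images.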
A method includes: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including images of a scene and depth data about the scene; identifying, using the depth data, first image content in the images associated with a depth value that satisfies a criterion; and generating modified 3D information by applying first shading regarding the identified first image content. The modified 3D information can be provided to a second 3D system. The scene can contain an object in the images, and generating the modified 3D information can include determining a surface normal for second image content of the object, and applying second shading regarding the second image content based on the determined surface normal. A portion of the object can have a greater depth value than another portion, and second shading can be applied regarding a portion of the images where the second portion is located.1. A method comprising: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including images of a scene and depth data about the scene, the images generated by cameras capturing respective views of the scene; identifying, using the depth data, first image content in the images associated with a depth value that satisfies a criterion; generating modified 3D information by applying first shading regarding the identified first image content; making, using the modified 3D information, a 3D presentation that includes at least a portion of the respective views of the scene; and identifying a hole in at least one of the images, wherein generating the modified 3D information comprises applying second shading regarding the hole. 2. The method of claim 1, wherein the criterion includes that the first image content is beyond a predefined depth in the scene. 3. The method of claim 2, wherein applying the first shading comprises causing the first image content to be rendered as black. 4. 
The method of claim 3, wherein use of the predefined depth, and applying the first shading, comprises causing a background of the images to be rendered as black. 5. The method of claim 1, wherein the first shading is dependent on a depth value of the first image content. 6. The method of claim 1, wherein the criterion includes that the first image content is closer than a predefined depth in the scene. 7. The method of claim 1, wherein the scene contains an object in the images, and wherein generating the modified 3D information further comprises determining a surface normal for second image content of the object, and applying second shading regarding the second image content based on the determined surface normal. 8. The method of claim 7, wherein applying the second shading comprises determining a dot product between the surface normal and a camera vector, and selecting the second shading based on the determined dot product. 9. The method of claim 7, wherein applying the second shading comprises fading the second image content to black based on the second image content facing away in the images. 10. 
A method comprising: receiving three-dimensional (3D) information generated by a first 3D system, the 3D information including images of a scene and depth data about the scene, the images generated by cameras capturing respective views of the scene; identifying, using the depth data, first image content in the images associated with a depth value that satisfies a criterion; generating modified 3D information by applying first shading regarding the identified first image content; and making, using the modified 3D information, a 3D presentation that includes at least a portion of the respective views of the scene; wherein the scene contains an object in the images and a first portion of the object has a greater depth value in the depth data than a second portion of the object, and wherein generating the modified 3D information further comprises applying second shading regarding a portion of the images where second image content corresponding to the second portion is located. 11. The method of claim 10, wherein applying the second shading comprises selecting the portion of the images based on a portion of a display for presentation of the images. 12. The method of claim 11, wherein the object comprises a person, the first portion of the object comprises a face of the person, the second portion of the object comprises a torso of the person, and the portion of the display comprises a bottom of the display. 13. (canceled) 14. The method of claim 1, wherein generating the modified 3D information further comprises hiding a depth error in the 3D information. 15. The method of claim 14, wherein the depth data is based on infrared (IR) signals returned from the scene, and wherein generating the modified 3D information comprises applying second shading proportional to a strength of the IR signals. 16. The method of claim 18, further comprising stereoscopically presenting the modified 3D information at the second 3D system, wherein the first image content has the first shading. 
17. The method of claim 16, wherein stereoscopically presenting the modified 3D information comprises additively rendering the images. 18. The method of claim 1, further comprising providing the modified 3D information to a second 3D system. 19. A system comprising: cameras; a depth sensor; and a three-dimensional (3D) content module having a processor executing instructions stored in a memory, the instructions causing the processor to identify, using depth data included in 3D information, first image content in images of a scene included in the 3D information, the images generated by the cameras capturing respective views of the scene, the first image content identified as being associated with a depth value that satisfies a criterion, to generate modified 3D information by applying first shading regarding the identified first image content, and to make, using the modified 3D information, a 3D presentation that includes at least a portion of the respective views of the scene, wherein the scene contains an object in the images, and wherein generating the modified 3D information further comprises determining a surface normal for second image content of the object, and applying second shading regarding the second image content based on the determined surface normal. 20. (canceled) 21. The system of claim 19, wherein the scene contains an object in the images and a first portion of the object has a greater depth value than a second portion of the object, and wherein generating the modified 3D information further comprises applying second shading regarding a portion of the images where second image content corresponding to the second portion is located.
2,600
10,314
10,314
13,710,347
2,694
A user can provide input to a device in a 2-dimensional manner (such as by entering the data using a stylus or finger to apply pressure to a touchscreen) or in a 3-dimensional manner (such as by moving the device in 3-dimensional space). While the user input is being received, biometric data in the form of accelerometer data is collected. The accelerometer data indicates an amount of force applied in each of one or more directions over time. The collected accelerometer data thus provides an indication of the manner in which the device was moved, whether intentionally or unintentionally, while the user input was being received. The collected accelerometer data can be used in various manners, such as to verify the user input, as a signature of the user, as an authentication value to access a service or device, and so forth.
1. A method comprising: receiving a user input at a device; collecting, at the device, accelerometer data that identifies movement of the device while receiving the user input at the device; and storing the accelerometer data as associated with the user input. 2. A method as recited in claim 1, receiving the user input comprising receiving a signature of the user. 3. A method as recited in claim 2, the receiving the signature comprising receiving the signature in 2-dimensional space via a touchscreen of the device, and the collecting accelerometer data comprising collecting accelerometer data identifying movement of the device in 3-dimensional space. 4. A method as recited in claim 1, the collecting accelerometer data comprising collecting the accelerometer data at fixed intervals. 5. A method as recited in claim 1, the collecting accelerometer data comprising collecting the accelerometer data in response to a force along one or more axes of the device changing by at least a threshold amount. 6. A method as recited in claim 1, the storing the accelerometer data comprising embedding the accelerometer data in a document. 7. A method as recited in claim 6, further comprising embedding user input data for the user input in the document. 8. A method as recited in claim 1, the receiving the user input comprising receiving a user input that is an authentication value to access a service, the method further comprising providing the accelerometer data to a verification system to determine whether the user is permitted to access the service. 9. A method as recited in claim 1, the collecting accelerometer data that identifies movement of the device comprising collecting accelerometer data that identifies an amount of force applied along each of multiple axes of the device. 10. A method as recited in claim 1, the receiving the user input comprising receiving the user input in 2-dimensional space via the device. 11. 
A method as recited in claim 10, the collecting accelerometer data comprising collecting accelerometer data identifying inadvertent movement of the device in 3-dimensional space. 12. A method comprising: identifying movement, in 3-dimensional space, of a device by a user; collecting, at the device, accelerometer data that identifies the movement of the device in 3-dimensional space; and storing the collected accelerometer data as a signature of the user. 13. A method as recited in claim 12, the identifying movement of the device comprising identifying a tracing in 3-dimensional space of a pre-defined shape to be used as the signature of the user. 14. A method as recited in claim 12, the storing comprising storing the collected device movement data for use as the signature of the user on multiple documents. 15. A method as recited in claim 12, further comprising providing the collected accelerometer data to a verification system for verification of the signature. 16. A method as recited in claim 12, further comprising identifying additional movement of the device in 3-dimensional space; checking whether the additional movement of the device is a tracing of a pre-defined shape; and allowing access to a document to be signed by the user only if the additional movement of the device is a tracing of the pre-defined shape. 17. A device comprising: a user input module configured to receive a user input to the device; one or more accelerometers configured to generate accelerometer data that identifies movement of the device while receiving the user input; and an accelerometer data collection module configured to record the accelerometer data as associated with the user input. 18. A device as recited in claim 17, further comprising a verification module configured to use the accelerometer data as an authentication value to log into the device, and to permit access to log into the device based on whether verification data for the accelerometer data is verified. 19. 
A device as recited in claim 17, further comprising a verification module configured to use the accelerometer data as an authentication value to access a document, and to permit access to the document based on whether verification data for the accelerometer data is verified. 20. A device as recited in claim 17, further comprising an interface configured to provide, to an online service, the accelerometer data as an authentication value to access the online service. 21. A device as recited in claim 17, further comprising an accelerometer data store, the accelerometer data collection module being configured to record the accelerometer data as associated with the user input by storing the accelerometer data and the associated user input in the accelerometer data store. 22. A device as recited in claim 17, the one or more accelerometers being further configured to generate the accelerometer data that identifies movement of the device in 3-dimensional space. 23. A device as recited in claim 17, the one or more accelerometers being configured to generate as accelerometer data an amount of force applied along each of one or more axes of the device at particular points in time.
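The threshold-triggered collection of claim 5 (recording accelerometer data only when the force along one or more axes changes by at least a threshold amount) can be sketched as follows. The threshold value and the sample stream are assumptions for illustration.

```python
# Illustrative sketch of threshold-triggered accelerometer collection:
# a sample is recorded only when the force along at least one axis has
# changed by the threshold amount since the last recorded sample.
# The threshold and units are assumptions, not from the application.

THRESHOLD = 0.5  # assumed minimum per-axis change, in g

def collect(samples, threshold=THRESHOLD):
    """Record (x, y, z) samples whose per-axis change exceeds the threshold."""
    recorded = []
    last = None
    for s in samples:
        if last is None or any(abs(a - b) >= threshold for a, b in zip(s, last)):
            recorded.append(s)
            last = s
    return recorded

stream = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.9, 0.0, 1.0), (0.9, 0.1, 1.0)]
recorded = collect(stream)  # small jitters are skipped, large changes kept
```

Claim 4's alternative, fixed-interval collection, would instead record every Nth sample regardless of magnitude; the recorded sequence is what gets stored as associated with the user input.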
A user can provide input to a device in a 2-dimensional manner (such as by entering the data using a stylus or finger to apply pressure to a touchscreen) or in a 3-dimensional manner (such as by moving the device in 3-dimensional space). While the user input is being received, biometric data in the form of accelerometer data is collected. The accelerometer data indicates an amount of force applied in each of one or more directions over time. The collected accelerometer data thus provides an indication of the manner in which the device was moved, whether intentionally or unintentionally, while the user input was being received. The collected accelerometer data can be used in various manners, such as to verify the user input, as a signature of the user, as an authentication value to access a service or device, and so forth.1. A method comprising: receiving a user input at a device; collecting, at the device, accelerometer data that identifies movement of the device while receiving the user input at the device; and storing the accelerometer data as associated with the user input. 2. A method as recited in claim 1, receiving the user input comprising receiving a signature of the user. 3. A method as recited in claim 2, the receiving the signature comprising receiving the signature in 2-dimensional space via a touchscreen of the device, and the collecting accelerometer data comprising collecting accelerometer data identifying movement of the device in 3-dimensional space. 4. A method as recited in claim 1, the collecting accelerometer data comprising collecting the accelerometer data at fixed intervals. 5. A method as recited in claim 1, the collecting accelerometer data comprising collecting the accelerometer data in response to a force along one or more axes of the device changing by at least a threshold amount. 6. A method as recited in claim 1, the storing the accelerometer data comprising embedding the accelerometer data in a document. 7. 
A method as recited in claim 6, further comprising embedding user input data for the user input in the document. 8. A method as recited in claim 1, the receiving the user input comprising receiving a user input that is an authentication value to access a service, the method further comprising providing the accelerometer data to a verification system to determine whether the user is permitted to access the service. 9. A method as recited in claim 1, the collecting accelerometer data that identifies movement of the device comprising collecting accelerometer data that identifies an amount of force applied along each of multiple axes of the device. 10. A method as recited in claim 1, the receiving the user input comprising receiving the user input in 2-dimensional space via the device. 11. A method as recited in claim 10, the collecting accelerometer data comprising collecting accelerometer data identifying inadvertent movement of the device in 3-dimensional space. 12. A method comprising: identifying movement, in 3-dimensional space, of a device by a user; collecting, at the device, accelerometer data that identifies the movement of the device in 3-dimensional space; and storing the collected accelerometer data as a signature of the user. 13. A method as recited in claim 12, the identifying movement of the device comprising identifying a tracing in 3-dimensional space of a pre-defined shape to be used as the signature of the user. 14. A method as recited in claim 12, the storing comprising storing the collected device movement data for use as the signature of the user on multiple documents. 15. A method as recited in claim 12, further comprising providing the collected accelerometer data to a verification system for verification of the signature. 16. 
A method as recited in claim 12, further comprising identifying additional movement of the device in 3-dimensional space; checking whether the additional movement of the device is a tracing of a pre-defined shape; and allowing access to a document to be signed by the user only if the additional movement of the device is a tracing of the pre-defined shape. 17. A device comprising: a user input module configured to receive a user input to the device; one or more accelerometers configured to generate accelerometer data that identifies movement of the device while receiving the user input; and an accelerometer data collection module configured to record the accelerometer data as associated with the user input. 18. A device as recited in claim 17, further comprising a verification module configured to use the accelerometer data as an authentication value to log into the device, and to permit access to log into the device based on whether verification data for the accelerometer data is verified. 19. A device as recited in claim 17, further comprising a verification module configured to use the accelerometer data as an authentication value to access a document, and to permit access to the document based on whether verification data for the accelerometer data is verified. 20. A device as recited in claim 17, further comprising an interface configured to provide, to an online service, the accelerometer data as an authentication value to access the online service. 21. A device as recited in claim 17, further comprising an accelerometer data store, the accelerometer data collection module being configured to record the accelerometer data as associated with the user input by storing the accelerometer data and the associated user input in the accelerometer data store. 22. A device as recited in claim 17, the one or more accelerometers being further configured to generate the accelerometer data that identifies movement of the device in 3-dimensional space. 23. 
A device as recited in claim 17, the one or more accelerometers being configured to generate as accelerometer data an amount of force applied along each of one or more axes of the device at particular points in time.
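The accelerometer-signature scheme these claims describe — record per-axis force samples while a user input is received, store the recording as a signature, and later verify a fresh recording against it — can be sketched as follows. The class, the mean-distance comparison, and the fixed threshold are illustrative assumptions, not the patent's actual implementation:

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

Sample = Tuple[float, float, float]  # force along the x, y, z axes (claim 9)

@dataclass
class AccelerometerSignature:
    """Per-axis force samples recorded while a user input is received."""
    samples: List[Sample] = field(default_factory=list)

    def record(self, x: float, y: float, z: float) -> None:
        # On a device this would be called at fixed intervals (claim 4) or
        # when force changes by at least a threshold amount (claim 5).
        self.samples.append((x, y, z))

def distance(a: AccelerometerSignature, b: AccelerometerSignature) -> float:
    """Mean Euclidean distance between two recordings, truncated to the
    shorter one; infinity if either recording is empty."""
    n = min(len(a.samples), len(b.samples))
    if n == 0:
        return float("inf")
    return sum(math.dist(p, q) for p, q in zip(a.samples[:n], b.samples[:n])) / n

def verify(candidate: AccelerometerSignature,
           enrolled: AccelerometerSignature,
           threshold: float = 0.5) -> bool:
    """Accept the candidate recording if it is close to the enrolled one."""
    return distance(candidate, enrolled) <= threshold
```

A verification module in the sense of claims 17–19 would run `verify` on a fresh recording against the stored signature before permitting access to the device, document, or online service.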
2,600
10,315
10,315
14,818,371
2,677
An image forming apparatus includes a display having a function of editing input image data and a function of displaying page images, in reading order, immediately after reading. When a first type of instruction, changing the shape of an image formed on a recording medium, is given, the edit is not reflected on the document display mode screen image; when a second type of instruction, editing page by page the recording medium having images formed thereon, is given, the edit is reflected on the document display mode screen image. By switching between the finish preview screen image and the document display mode screen image, the edited image and the image immediately after reading can be compared easily.
1. (canceled) 2. An image forming apparatus, comprising: a display unit for displaying page images generated from input image data; a preview display unit for displaying a preview image of an image formed on a recording medium based on said input image data; and an editing instructing unit for giving an editing instruction on said preview image; wherein said editing instruction on said preview image includes a first type of editing instruction which is not reflected on said page images, and a second type of editing instruction which is reflected on said page images. 3. The image forming apparatus according to claim 2, wherein said second type of editing instruction is an editing instruction that influences at least one of arrangement and a direction of a display of pages of said page images. 4. The image forming apparatus according to claim 2, wherein said first type of editing instruction is an editing instruction that does not influence either arrangement or a direction of a display of pages of said page images. 5. The image forming apparatus according to claim 3, wherein said second type of editing instruction is at least one of an editing instruction to change order of arrangement of pages of said page images displayed on said display unit, an editing instruction to delete a designated one of said page images, and an editing instruction to rotate a designated one of said page images. 6. The image forming apparatus according to claim 2, further comprising a display switching unit for switching a display by said display unit and a display by said preview display unit, in response to a user instruction. 7. 
In an image forming apparatus including a display unit for displaying an image, a method of displaying information, comprising the steps of: displaying page images generated from input image data; displaying a preview image of an image formed on a recording medium based on said input image data; and giving an editing instruction on said preview image; wherein said editing instruction on said preview image includes a first type of editing instruction which is not reflected on said page images, and a second type of editing instruction which is reflected on said page images. 8. An image forming apparatus, comprising: an input unit for inputting image data, an editing instructing unit for receiving an editing instruction on said image data input by said input unit, an icon display unit for displaying an icon group including a plurality of icons for giving said editing instruction; and a text display unit for displaying a text group including text information corresponding to said plurality of icons, in response to an operation made on said icon group; wherein if a prescribed time period passes without any operation to said icon group or said text group after said text group is displayed in response to said operation, or if an operation toward said icon group is made on said text group, said text group is erased. 9. 
In an image forming apparatus including a display unit for displaying an image, a method of displaying information, comprising the steps of: inputting image data; receiving an editing instruction on said image data; displaying an icon group including a plurality of icons on said display unit for giving said editing instruction; displaying a text group including text information corresponding to said plurality of icons, in response to an operation made on said icon group; and if a prescribed time period passes without any operation to said icon group or said text group after said text group is displayed in response to said operation, or if an operation toward said icon group is made on said text group, erasing said text group.
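The split between the two instruction types (claims 2–5 above) amounts to a dispatch rule: every edit is applied to the preview image, but only edits that influence page arrangement or direction are mirrored onto the page-image (document display) view. A minimal sketch, with edit names and the page representation chosen for illustration rather than taken from the claims:

```python
# Second-type edits (claim 5): reorder, delete, or rotate pages — these
# change arrangement/direction and are reflected on the page images.
# Anything else (e.g. a shape change) is first-type and preview-only.
SECOND_TYPE = {"reorder", "delete", "rotate"}

def apply_edit(kind, args, preview, page_images):
    """Apply an edit to the preview; mirror it only if it is second-type."""
    _apply(preview, kind, args)
    if kind in SECOND_TYPE:
        _apply(page_images, kind, args)

def _apply(pages, kind, args):
    if kind == "delete":
        del pages[args]
    elif kind == "rotate":
        pages[args] = ("rot", pages[args])
    elif kind == "reorder":
        i, j = args
        pages[i], pages[j] = pages[j], pages[i]
    elif kind == "shape":                      # first-type example
        pages[args] = ("shaped", pages[args])
```

With this rule, a shape change leaves the document display untouched, while deleting a page removes it from both views — which is what makes the before/after comparison by view-switching possible.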
2,600
10,316
10,316
14,090,862
2,626
One or more embodiments of techniques or systems for search context association are provided herein. Search context association can include search filtering and/or search expansion. For example, when a query is submitted, data or data sets can be filtered to narrow a search or expanded such that additional data sets or queries are included. Data or data sets can be aggregated, filtered, or expanded based on a context of the query. A context can include user characteristics, environmental factors, social media factors, route based characteristics, or destination based characteristics. As an example, when a new product (e.g., a new mobile phone) is released, limited quantities may be available. Users may be directed to different retailers or stores based on inventory levels, length of lines (which may be determined using social media among other things), distance, and the like.
1. A method for search context association, comprising: receiving a query associated with a user; determining a context for the query, wherein the context of the query is associated with one or more user characteristics, one or more environmental factors, one or more social media factors, one or more route based characteristics, or one or more destination based characteristics; and aggregating or filtering one or more data sets in response to the query based on the context of the query, wherein the receiving, the determining, or the aggregating is implemented via a processing unit. 2. The method of claim 1, comprising rendering a response to the query based on the context of the query. 3. The method of claim 1, wherein one or more of the user characteristics comprises a location of the user and the query is associated with one or more potential destinations associated with one or more corresponding destination locations. 4. The method of claim 3, wherein one or more of the environmental factors is indicative of a weather pattern at the location of the user, one or more of the destination locations, or along one or more routes from the location of the user to one or more of the potential destinations. 5. The method of claim 3, wherein one or more of the route based characteristics is indicative of a traffic pattern, estimated travel time, or distance associated with one or more routes between the location of the user and one or more of the potential destinations. 6. The method of claim 1, wherein the query for the user is automatically generated. 7. The method of claim 1, wherein the query for the user is generated by the user. 8. The method of claim 1, comprising filtering one or more data sets based on the context of the query. 9. The method of claim 1, comprising searching for one or more additional data sets based on the context of the query. 10. 
The method of claim 1, wherein one or more of the data sets is indicative of one or more potential destinations associated with the query. 11. A system for search context association, comprising: a search component for receiving a query associated with one or more consumers; a context component for determining a context for the query, wherein the context of the query is associated with one or more consumer characteristics of one or more of the consumers; and a data engine for aggregating or filtering one or more data sets in response to the query based on the context of the query, wherein the search component, the context component, or the data engine is implemented via a processing unit. 12. The system of claim 11, wherein the data engine aggregates one or more of the data sets based on one or more retailer characteristics. 13. The system of claim 12, wherein one or more of the retailer characteristics comprises one or more offers from one or more retailers. 14. The system of claim 12, wherein the data engine filters one or more of the data sets based on one or more of the retailer characteristics or one or more of the consumer characteristics. 15. The system of claim 11, wherein one or more of the consumer characteristics comprises one or more offers received by one or more of the consumers. 16. The system of claim 11, wherein the context component determines the context for the query based on a physical location of one or more of the consumers. 17. A method for search context association, comprising: inferring a context for a search based on one or more user characteristics of a user; generating a query for the search based on the inferred context; and aggregating or filtering one or more data sets in response to the query based on the inferred context, wherein the inferring, the generating, or the aggregating is implemented via a processing unit. 18. The method of claim 17, comprising rendering one or more of the data sets based on the inferred context. 19. 
The method of claim 17, comprising filtering one or more of the data sets based on the inferred context. 20. The method of claim 17, wherein one or more of the user characteristics comprises a search history for the user.
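The aggregate-or-filter step of claims 1 and 8–9 — narrow candidate data sets using the query's context (location, route-based and destination-based characteristics) and rank what remains — can be sketched as successive filters over destination records. All field names and context keys below are illustrative assumptions:

```python
from math import hypot

def aggregate_results(query, data_sets, context):
    """Filter candidate data sets by the context of the query, then rank
    by distance from the user. Field names ("name", "location",
    "inventory") and context keys are illustrative, not from the claims."""
    ux, uy = context.get("user_location", (0.0, 0.0))
    dist = lambda d: hypot(d["location"][0] - ux, d["location"][1] - uy)
    # name match against the query text
    results = [d for d in data_sets if query.lower() in d["name"].lower()]
    if "max_distance" in context:          # route-based characteristic
        results = [d for d in results if dist(d) <= context["max_distance"]]
    if context.get("require_inventory"):   # destination-based characteristic
        results = [d for d in results if d.get("inventory", 0) > 0]
    return sorted(results, key=dist)
```

This mirrors the abstract's new-phone scenario: stores that match the query but are out of stock or too far away drop out, and the remaining retailers are ordered by distance.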
2,600
10,317
10,317
15,434,272
2,672
A printing system is disclosed. The printing system includes a printer to print image data to a medium and a print controller including a halftone calibration module to dynamically generate calibrated halftones to compensate for optical density changes that occur at the printer.
1. A printing system comprising: a print controller including: interpreter module to receive print job data and render the print job data into image data; a halftone calibration module to dynamically generate calibrated halftones to compensate for optical density changes that occur at the printing system, including generating a first calibrated halftone, generating a second calibrated halftone upon detecting the optical density changes and replacing the first calibrated halftone with the second calibrated halftone; and a halftoning module to perform halftoning on the image data using the first and second calibrated halftones. 2. (canceled) 3. The printing system of claim 1, wherein the halftone calibration module generating the calibrated halftones comprises receiving an un-calibrated halftone, transforming threshold values in the un-calibrated halftone via an inverse transfer function to generate calibrated halftone threshold values; and generating one or more calibrated halftones based on the calibrated halftone threshold values. 4. The printing system of claim 3, further comprising a measurement module to obtain measurement data from the image data printed to the medium to detect the optical density changes and transmit the measurement data to the print controller. 5. The printing system of claim 4, wherein the halftone calibration module computes a first inverse transfer function to achieve a target response based on first measurement data. 6. The printing system of claim 5, wherein the halftone calibration module generates the first calibrated halftone based on the first inverse transfer function. 7. The printing system of claim 1, further comprising a printer to print image data to a medium. 8. The printing system of claim 7, wherein the halftone calibration module computes a second inverse transfer function to achieve a target response based on second measurement data during implementation of the first calibrated halftone to perform halftoning. 9. 
The printing system of claim 8, wherein the halftone calibration module generates the second calibrated halftone based on the second inverse transfer function. 10. The printing system of claim 1, wherein the halftone calibration module generates a calibrated halftone using a multi-bit threshold array (MTA). 11. The printing system of claim 5, wherein the measurement module comprises an edge sensor, wherein the threshold values in the uncalibrated halftone are transformed via the first inverse transfer function. 12. The printing system of claim 5, wherein the measurement module comprises a camera system, wherein the threshold values in the uncalibrated halftone corresponding to the measurement data are transformed via the first inverse transfer function. 13. A non-transitory machine-readable medium including data that, when accessed by a machine, cause the machine to: receive print job data; render the print job data into image data; and dynamically generate calibrated halftones to compensate for optical density changes that occur at a printer, including, generating a first calibrated halftone; generating a second calibrated halftone upon detecting the optical density changes and replacing the first calibrated halftone with the second calibrated halftone; and halftoning the image data using the first and second calibrated halftones. 14. (canceled) 15. The machine-readable medium of claim 13, wherein generating the calibrated halftones comprises receiving an un-calibrated halftone, transforming threshold values in the un-calibrated halftone via an inverse transfer function to generate calibrated halftone threshold values, and generating one or more calibrated halftones based on the calibrated halftone threshold values. 16. 
The machine-readable medium of claim 15, comprising a machine-readable medium including data that, when accessed by a machine, further cause the machine to receive measurement data from image data printed to a print medium to detect the optical density changes. 17. The machine-readable medium of claim 16, comprising a machine-readable medium including data that, when accessed by a machine, further cause the machine to compute a first inverse transfer function to achieve a target response based on first measurement data; generate the first calibrated halftone based on the first inverse transfer function; and perform halftoning on the image data using the first calibrated halftone. 18. The machine-readable medium of claim 17, comprising a machine-readable medium including data that, when accessed by a machine, further cause the machine to: compute a second inverse transfer function to achieve a target response based on second measurement data; generate the second calibrated halftone based on the second inverse transfer function; and perform halftoning on the image data using the second calibrated halftone. 19. A printing system comprising: a print controller to receive print job data and render the print job data into image data, dynamically generate calibrated halftones to compensate for optical density changes that occur at the printer, including receiving an un-calibrated halftone, transforming threshold values in the un-calibrated halftone via an inverse transfer function to generate calibrated halftone threshold values; and generating one or more calibrated halftones based on the calibrated halftone threshold values and perform halftoning on the image data using the calibrated halftones. 20. 
The printing system of claim 19, wherein the halftone calibration module dynamically generating the calibrated halftones comprises generating a first calibrated halftone, generating a second calibrated halftone upon detecting the optical density changes and replacing the first calibrated halftone with the second calibrated halftone. 21. The printing system of claim 19, further comprising a printer to print image data to a medium.
A printing system is disclosed. The printing system includes a printer to print image data to a medium and a print controller including a halftone calibration module to dynamically generate calibrated halftones to compensate for optical density changes that occur at the printer.1. A printing system comprising: a print controller including: interpreter module to receive print job data and render the print job data into image data; a halftone calibration module to dynamically generate calibrated halftones to compensate for optical density changes that occur at the printing system, including generating a first calibrated halftone, generating a second calibrated halftone upon detecting the optical density changes and replacing the first calibrated halftone with the second calibrated halftone; and a halftoning module to perform halftoning on the image data using the first and second calibrated halftones. 2. (canceled) 3. The printing system of claim 1, wherein the halftone calibration module generating the calibrated halftones comprises receiving an un-calibrated halftone, transforming threshold values in the un-calibrated halftone via an inverse transfer function to generate calibrated halftone threshold values; and generating one or more calibrated halftones based on the calibrated halftone threshold values. 4. The printing system of claim 3, further comprising a measurement module to obtain measurement data from the image data printed to the medium to detect the optical density changes and transmit the measurement data to the print controller. 5. The printing system of claim 4, wherein the halftone calibration module computes a first inverse transfer function to achieve a target response based on first measurement data. 6. The printing system of claim 5, wherein the halftone calibration module generates the first calibrated halftone based on the first inverse transfer function. 7. 
The printing system of claim 1, further comprising a printer to print image data to a medium. 8. The printing system of claim 7, wherein the halftone calibration module computes a second inverse transfer function to achieve a target response based on second measurement data during implementation of the first calibrated halftone to perform halftoning. 9. The printing system of claim 8, wherein the halftone calibration module generates the second calibrated halftone based on the second inverse transfer function. 10. The printing system of claim 1, wherein the halftone calibration module generates a calibrated halftone using a multi-bit threshold array (MTA). 11. The printing system of claim 5, wherein the measurement module comprises an edge sensor, wherein the threshold values in the uncalibrated halftone are transformed via the first inverse transfer function. 12. The printing system of claim 5, wherein the measurement module comprises a camera system, wherein the threshold values in the uncalibrated halftone corresponding to the measurement data are transformed via the first inverse transfer function. 13. A non-transitory machine-readable medium including data that, when accessed by a machine, cause the machine to: receive print job data; render the print job data into image data; and dynamically generate calibrated halftones to compensate for optical density changes that occur at a printer, including, generating a first calibrated halftone; generating a second calibrated halftone upon detecting the optical density changes and replacing the first calibrated halftone with the second calibrated halftone; and halftoning the image data using the first and second calibrated halftones. 14. (canceled) 15. 
The machine-readable medium of claim 13, wherein generating the calibrated halftones comprises receiving an un-calibrated halftone, transforming threshold values in the un-calibrated halftone via an inverse transfer function to generate calibrated halftone threshold values, and generating one or more calibrated halftones based on the calibrated halftone threshold values. 16. The machine-readable medium of claim 15, comprising a machine-readable medium including data that, when accessed by a machine, further cause the machine to receive measurement data from image data printed to a print medium to detect the optical density changes. 17. The machine-readable medium of claim 16, comprising a machine-readable medium including data that, when accessed by a machine, further cause the machine to compute a first inverse transfer function to achieve a target response based on first measurement data; generate the first calibrated halftone based on the first inverse transfer function; and perform halftoning on the image data using the first calibrated halftone. 18. The machine-readable medium of claim 17, comprising a machine-readable medium including data that, when accessed by a machine, further cause the machine to: compute a second inverse transfer function to achieve a target response based on second measurement data; generate the second calibrated halftone based on the second inverse transfer function; and perform halftoning on the image data using the second calibrated halftone. 19. 
A printing system comprising: a print controller to receive print job data and render the print job data into image data, dynamically generate calibrated halftones to compensate for optical density changes that occur at the printer, including receiving an un-calibrated halftone, transforming threshold values in the un-calibrated halftone via an inverse transfer function to generate calibrated halftone threshold values; and generating one or more calibrated halftones based on the calibrated halftone threshold values and perform halftoning on the image data using the calibrated halftones. 20. The printing system of claim 19, wherein the halftone calibration module dynamically generating the calibrated halftones comprises generating a first calibrated halftone, generating a second calibrated halftone upon detecting the optical density changes and replacing the first calibrated halftone with the second calibrated halftone. 21. The printing system of claim 19, further comprising a printer to print image data to a medium.
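The calibration step these claims describe (claim 3: transform each threshold of an un-calibrated halftone through an inverse transfer function) can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the `inverse_tf` gain model and the 8-bit threshold range are assumptions made here for the example.

```python
def calibrate_halftone(thresholds, inverse_tf):
    # Transform every threshold value of the un-calibrated halftone
    # screen through the inverse transfer function (claim 3's step).
    return [[inverse_tf(t) for t in row] for row in thresholds]

# Hypothetical measured response: the printer renders ~10% too dark,
# so the inverse transfer function raises (lightens) each threshold.
def inverse_tf(level, gain=0.9):
    return min(255, round(level / gain))

uncalibrated = [[0, 128], [192, 255]]
calibrated = calibrate_halftone(uncalibrated, inverse_tf)
# calibrated == [[0, 142], [213, 255]]
```

When new measurement data arrives (claims 5-9), a second inverse transfer function would be fitted to it and the same call repeated, replacing the first calibrated halftone with the second.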
2,600
10,318
10,318
15,300,556
2,632
A new power and area efficient ADC-digital co-design approach is introduced to IF digital beam forming that combines continuous-time band-pass ΔΣ modulators and bit-stream processing. An array of compact (0.03 mm2), low-power (13.1 mW) delta sigma modulators directly digitizes 260 MHz IF signals from eight input elements. Digital beam forming is directly performed on the over-sampled, undecimated low-resolution outputs of the delta sigma modulator array. The unique combination of delta sigma modulators and bit stream processing has several advantages.
1. A method for digital beamforming, comprising: receiving, by an array of sigma delta modulators, a plurality of analog radio frequency (RF) signals from an RF front-end; converting, by the array of sigma delta modulators, each of the analog RF signals into a corresponding digital signal using sigma-delta modulation; bit-stream processing, by a bit stream processor, the digital signals received directly from the array of sigma delta modulators, the bit-stream processing includes down mixing each digital signal using a first multiplication operation and phase shifting each multiplied digital signal by weighting the respective multiplied digital signal using a second multiplication operation; and summing, by the bit stream processor, each of the bit-stream processed digital signals to form a resultant signal. 2. The method of claim 1 wherein each sigma delta modulator in the array of sigma delta modulators is further defined as a continuous-time band-pass sigma delta modulator. 3. The method of claim 1 further comprises down mixing each digital signal at one quarter sampling rate of the sigma delta modulators. 4. The method of claim 1 wherein down mixing further comprises multiplying each digital signal by a multiplier, the multiplier being selected from a group of three or more values. 5. The method of claim 4 wherein phase shifting further comprises multiplying each digital signal by a multiplier, the multiplier being selected from a group of five or more values. 6. The method of claim 5 wherein phase shifting includes multiplying a value of a given digital signal by two by left shifting the value of the given digital signal. 7. The method of claim 6 further comprises multiplying each digital signal by a multiplier using a multiplexer and phase shifting each multiplied signal using a multiplexer. 8. The method of claim 7 further comprises decimating the resultant signal using a filter. 9. 
A method for digital beamforming, comprising: receiving, by an array of sigma delta modulators, a plurality of intermediate frequency (IF) analog signals from an RF front-end; converting, by the array of sigma delta modulators, each of the IF analog signals into a corresponding digital signal using sigma-delta modulation; down mixing, by a first set of multiplexers, each digital signal by multiplying the respective digital signal by a multiplier, where the multiplier is selected from a group of one, zero or minus one; weighting, by a second set of multiplexers, each down mixed digital signal with a weight, where the weight is selected from a group of two, one, zero, minus one or minus two; and summing, by a bit stream processor, each of the weighted digital signals together to form a resultant digital signal. 10. The method of claim 9 wherein each sigma delta modulator in the array of sigma delta modulators is further defined as a continuous-time band-pass sigma delta modulator. 11. The method of claim 10 wherein values output by each sigma delta modulator in the array of sigma delta modulators are represented by five signal levels. 12. The method of claim 11 further comprises down mixing each digital signal to generate a corresponding in-phase signal and a corresponding quadrature signal. 13. The method of claim 12 further comprises weighting the in-phase signal and the quadrature signal with a weight and selecting one of the weighted in-phase signal and the weighted quadrature signal using a multiplexer. 14. The method of claim 12 wherein weighting each down mixed signal includes multiplying by two by left shifting the values of a given down mixed digital signal. 15. The method of claim 10 further comprises down mixing each digital signal at one quarter of the sampling rate of the sigma delta modulators. 16. The method of claim 10 further comprises decimating the resultant signal using a filter. 17. 
A digital beamformer, comprising: an array of sigma delta modulators, each sigma delta modulator configured to receive an intermediate frequency (IF) analog signal and operates to convert the IF analog signal to a corresponding digital signal; a first set of multiplexers configured to receive the digital signals from the array of sigma delta modulators, each multiplexer in the first set of multiplexers operates to down mix one of the digital signals using a multiplication operation; a second set of multiplexers configured to receive the down mixed digital signals from the first set of multiplexers and operates to phase shift each down mixed digital signal using a multiplication operation; a set of additive mixers configured to receive the phase shifted digital signals from the second set of multiplexers and operates to add the phase shifted signals together to form a resultant digital signal. 18. The digital beamformer of claim 17 wherein each sigma delta modulator in the array of sigma delta modulators is further defined as a continuous-time band-pass sigma delta modulator. 19. The digital beamformer of claim 17 wherein, for each digital signal received from the array of sigma delta modulators, the first set of multiplexers includes one multiplexer that outputs an in-phase signal component for a given digital signal and another multiplexer that outputs a quadrature signal component for the given digital signal. 20. The digital beamformer of claim 17 further comprises one or more decimator filters, where the number of decimator filters equals the number of resultant signals.
A new power and area efficient ADC-digital co-design approach is introduced to IF digital beam forming that combines continuous-time band-pass ΔΣ modulators and bit-stream processing. An array of compact (0.03 mm2), low-power (13.1 mW) delta sigma modulators directly digitizes 260 MHz IF signals from eight input elements. Digital beam forming is directly performed on the over-sampled, undecimated low-resolution outputs of the delta sigma modulator array. The unique combination of delta sigma modulators and bit stream processing has several advantages.1. A method for digital beamforming, comprising: receiving, by an array of sigma delta modulators, a plurality of analog radio frequency (RF) signals from an RF front-end; converting, by the array of sigma delta modulators, each of the analog RF signals into a corresponding digital signal using sigma-delta modulation; bit-stream processing, by a bit stream processor, the digital signals received directly from the array of sigma delta modulators, the bit-stream processing includes down mixing each digital signal using a first multiplication operation and phase shifting each multiplied digital signal by weighting the respective multiplied digital signal using a second multiplication operation; and summing, by the bit stream processor, each of the bit-stream processed digital signals to form a resultant signal. 2. The method of claim 1 wherein each sigma delta modulator in the array of sigma delta modulators is further defined as a continuous-time band-pass sigma delta modulator. 3. The method of claim 1 further comprises down mixing each digital signal at one quarter sampling rate of the sigma delta modulators. 4. The method of claim 1 wherein down mixing further comprises multiplying each digital signal by a multiplier, the multiplier being selected from a group of three or more values. 5. 
The method of claim 4 wherein phase shifting further comprises multiplying each digital signal by a multiplier, the multiplier being selected from a group of five or more values. 6. The method of claim 5 wherein phase shifting includes multiplying a value of a given digital signal by two by left shifting the value of the given digital signal. 7. The method of claim 6 further comprises multiplying each digital signal by a multiplier using a multiplexer and phase shifting each multiplied signal using a multiplexer. 8. The method of claim 7 further comprises decimating the resultant signal using a filter. 9. A method for digital beamforming, comprising: receiving, by an array of sigma delta modulators, a plurality of intermediate frequency (IF) analog signals from an RF front-end; converting, by the array of sigma delta modulators, each of the IF analog signals into a corresponding digital signal using sigma-delta modulation; down mixing, by a first set of multiplexers, each digital signal by multiplying the respective digital signal by a multiplier, where the multiplier is selected from a group of one, zero or minus one; weighting, by a second set of multiplexers, each down mixed digital signal with a weight, where the weight is selected from a group of two, one, zero, minus one or minus two; and summing, by a bit stream processor, each of the weighted digital signals together to form a resultant digital signal. 10. The method of claim 9 wherein each sigma delta modulator in the array of sigma delta modulators is further defined as a continuous-time band-pass sigma delta modulator. 11. The method of claim 10 wherein values output by each sigma delta modulator in the array of sigma delta modulators is represented by five signal levels. 12. The method of claim 11 further comprises down mixing each digital signal to generate a corresponding in-phase signal and a corresponding quadrature signal. 13. 
The method of claim 12 further comprises weighting the in-phase signal and the quadrature signal with a weight and selecting one of the weighted in-phase signal and the weighted quadrature signal using a multiplexer. 14. The method of claim 12 wherein weighting each down mixed signals includes multiplying by two by left shifting the values of a given down mixed digital signal. 15. The method of claim 10 further comprises down mixing each digital signal at one quarter of sampling rate of the sigma delta modulators. 16. The method of claim 10 further comprises decimating the resultant signal using a filter. 17. A digital beamformer, comprising: an array of sigma delta modulators, each sigma delta modulator configured to receive an intermediate frequency (IF) analog signal and operates to convert the IF analog signal to a corresponding digital signal; a first set of multiplexers configured to receive the digital signals from the array of sigma delta modulators, each multiplexer in the first set of multiplexers operates to down mix one of the digital signals using a multiplication operation; a second set of multiplexers configured to receive the down mixed digital signals from the first set of multiplexers and operates to phase shift each down mixed digital signal using a multiplication operations; a set of additive mixers configured to receive the phase shifted digital signals from the second set of multiplexers and operates to add the phase shifted signals together to form a resultant digital signal. 18. The digital beamformer of claim 17 wherein each sigma delta modulator in the array of sigma delta modulators is further defined as a continuous-time band-pass sigma delta modulator. 19. 
The digital beamformer of claim 17 wherein, for each digital signal received from the array of sigma delta modulators, the first set of multiplexers includes one multiplexer that outputs an in-phase signal component for a given digital signal and another multiplexer that outputs a quadrature signal component for the given digital signal. 20. The digital beamformer of claim 17 further comprises one or more decimator filters, where the number of decimator filters equals the number of resultant signals.
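The fs/4 down-mix and coarse weighting described in claims 9-17 can be sketched compactly: a local oscillator at one quarter of the modulator sample rate takes only the values 1, 0 and -1, and the phase-shift weights come from {2, 1, 0, -1, -2}, so multiplication by ±2 reduces to a 1-bit shift. A rough sketch follows; the oscillator sequences, weights and channel data are illustrative choices, not the fabricated design:

```python
def fs4_downmix(samples):
    # Quarter-rate local oscillator: cos -> [1, 0, -1, 0] and
    # -sin -> [0, -1, 0, 1]; only multipliers 1, 0, -1 are needed.
    lo_i = (1, 0, -1, 0)
    lo_q = (0, -1, 0, 1)
    i = [s * lo_i[n % 4] for n, s in enumerate(samples)]
    q = [s * lo_q[n % 4] for n, s in enumerate(samples)]
    return i, q

def apply_weight(samples, w):
    # Phase-shift weight from {2, 1, 0, -1, -2}; |w| == 2 reduces to
    # a 1-bit left shift of the (integer) bit-stream value.
    assert w in (2, 1, 0, -1, -2)
    if abs(w) == 2:
        doubled = [s << 1 for s in samples]
        return doubled if w > 0 else [-s for s in doubled]
    return [s * w for s in samples]

def beamform(channels, weights):
    # Additive mixer: sum the weighted I components across all input
    # elements (the Q path would be handled identically).
    acc = [0] * len(channels[0])
    for ch, w in zip(channels, weights):
        i, _q = fs4_downmix(ch)
        for n, v in enumerate(apply_weight(i, w)):
            acc[n] += v
    return acc

result = beamform([[1, 1, -1, -1], [1, -1, 1, -1]], [2, -1])
# result == [1, 0, 3, 0]
```

Because the operands are confined to these small value sets, each "multiplication" is realizable with a multiplexer, which is the area and power advantage the abstract alludes to.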
2,600
10,319
10,319
14,830,133
2,619
There is provided an image processing apparatus including: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on the basis of predetermined relations reflecting vertex movement information representing the movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information.
1. An image processing apparatus comprising: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on a basis of predetermined relations reflecting vertex movement information representing a movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information. 2. The image processing apparatus according to claim 1, wherein the predetermined relations reflect a spring model having a virtual spring provided for each of the sides connecting the vertexes; and the image processing apparatus further comprises a physical simulation part configured to perform physical simulation based on the spring model and on the vertex movement information. 3. The information processing apparatus according to claim 2, further comprising a spring model updating part configured to update a natural length of each of the springs at a predetermined timing. 4. The information processing apparatus according to claim 2, further comprising a superposed image information generating part configured to generate superposed image information for displaying a wire model representing the vertexes and the sides of the polygon model in a manner superposed on the image information, on a basis of the polygon model information. 5. 
The information processing apparatus according to claim 4, wherein the superposed image information generating part generates the superposed image information for displaying in a superposed manner the wire model representing a portion of the vertexes and of the sides included in the polygon model. 6. The information processing apparatus according to claim 5, wherein the physical simulation part performs the physical simulation using the portion of the vertexes and of the sides. 7. An image processing method comprising: acquiring two-dimensional image information including a texture image; acquiring polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; updating the position information about at least one and other vertexes included in the polygon model information, on the basis of predetermined relations reflecting vertex movement information representing the movement of at least one of the vertexes; and mapping the texture image on the polygon model based on the updated polygon model information. 8. 
An image processing program for causing a computer to function as an apparatus comprising: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on a basis of predetermined relations reflecting vertex movement information representing a movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information.
There is provided an image processing apparatus including: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on the basis of predetermined relations reflecting vertex movement information representing the movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information.1. An image processing apparatus comprising: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on a basis of predetermined relations reflecting vertex movement information representing a movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information. 2. 
The image processing apparatus according to claim 1, wherein the predetermined relations reflect a spring model having a virtual spring provided for each of the sides connecting the vertexes; and the image processing apparatus further comprises a physical simulation part configured to perform physical simulation based on the spring model and on the vertex movement information. 3. The information processing apparatus according to claim 2, further comprising a spring model updating part configured to update a natural length of each of the springs at a predetermined timing. 4. The information processing apparatus according to claim 2, further comprising a superposed image information generating part configured to generate superposed image information for displaying a wire model representing the vertexes and the sides of the polygon model in a manner superposed on the image information, on a basis of the polygon model information. 5. The information processing apparatus according to claim 4, wherein the superposed image information generating part generates the superposed image information for displaying in a superposed manner the wire model representing a portion of the vertexes and of the sides included in the polygon model. 6. The information processing apparatus according to claim 5, wherein the physical simulation part performs the physical simulation using the portion of the vertexes and of the sides. 7. 
An image processing method comprising: acquiring two-dimensional image information including a texture image; acquiring polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; updating the position information about at least one and other vertexes included in the polygon model information, on the basis of predetermined relations reflecting vertex movement information representing the movement of at least one of the vertexes; and mapping the texture image on the polygon model based on the updated polygon model information. 8. An image processing program for causing a computer to function as an apparatus comprising: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on a basis of predetermined relations reflecting vertex movement information representing a movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information.
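The spring-model update of claim 2 — a virtual spring on each side connecting the vertexes, driven by the user's vertex movement — can be sketched as one explicit-Euler relaxation step. This is a simplified illustration; the stiffness `k`, time step `dt` and the pinning of moved vertices are assumptions for the example, not the apparatus's actual simulation.

```python
import math

def spring_step(verts, springs, rest_len, moved, k=0.5, dt=0.1):
    # verts: vertex index -> (x, y); springs: list of index pairs;
    # rest_len: natural length per spring (claim 3's updatable value);
    # moved: vertices pinned by the drag (the vertex movement info).
    force = {i: [0.0, 0.0] for i in verts}
    for (a, b), l0 in zip(springs, rest_len):
        ax, ay = verts[a]
        bx, by = verts[b]
        d = math.hypot(bx - ax, by - ay) or 1e-9
        f = k * (d - l0)                    # Hooke's law along the spring
        ux, uy = (bx - ax) / d, (by - ay) / d
        force[a][0] += f * ux; force[a][1] += f * uy
        force[b][0] -= f * ux; force[b][1] -= f * uy
    out = {}
    for i, (x, y) in verts.items():
        if i in moved:                      # pinned: follow the drag
            out[i] = moved[i]
        else:                               # free: explicit-Euler step
            out[i] = (x + dt * force[i][0], y + dt * force[i][1])
    return out

verts = {0: (0.0, 0.0), 1: (2.0, 0.0)}
new_verts = spring_step(verts, [(0, 1)], [1.0], moved={0: (0.0, 0.0)})
# vertex 1 is pulled toward vertex 0: (1.95, 0.0)
```

The texture is then re-mapped onto the polygon model using the updated positions, so the image deforms smoothly as the dragged vertex drags its neighbors along.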
2,600
10,320
10,320
15,764,778
2,612
A vehicular display device configured to display a guide route to a destination of a host vehicle includes a navigation device configured to calculate the guide route, a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height, and a display configured to display an image drawn by the display controller in a display area.
1. A vehicular display device configured to display a guide route to a destination of a host vehicle, the vehicular display device comprising: a navigation device configured to calculate the guide route; a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height; and a display configured to display an image drawn by the display controller in a display area provided to overlap a position of a windshield, wherein the display controller is configured to display the arrow such that the one surface appears to be superimposed on a ground. 2. The vehicular display device according to claim 1, wherein the display controller is configured to draw the arrow in a three-dimensional arch shape which is raised in a center portion in a width direction. 3. 
A vehicular display device configured to display a guide route to a destination of a host vehicle, the vehicular display device comprising: a navigation device configured to calculate the guide route; a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height; and a display configured to display an image drawn by the display controller in a display area provided to overlap a position of a windshield, wherein the display controller is configured to draw the arrow in a three-dimensional ridge shape which is bent at a center in a width direction. 4. 
A vehicular display device configured to display a guide route to a destination of a host vehicle, the vehicular display device comprising: a navigation device configured to calculate the guide route; a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height; and a display configured to display an image drawn by the display controller in a display area provided to overlap a position of a windshield, wherein the display controller is configured to draw the arrow in a three-dimensional cockscomb shape only a center portion of which in a width direction is protruded to have a height.
A vehicular display device configured to display a guide route to a destination of a host vehicle includes a navigation device configured to calculate the guide route, a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height, and a display configured to display an image drawn by the display controller in a display area.1. A vehicular display device configured to display a guide route to a destination of a host vehicle, the vehicular display device comprising: a navigation device configured to calculate the guide route; a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height; and a display configured to display an image drawn by the display controller in a display area provided to overlap a position of a windshield, wherein the display controller is configured to display the arrow such that the one surface appears to be superimposed on a ground. 2. 
The vehicular display device according to claim 1, wherein the display controller is configured to draw the arrow in a three-dimensional arch shape which is raised in a center portion in a width direction. 3. A vehicular display device configured to display a guide route to a destination of a host vehicle, the vehicular display device comprising: a navigation device configured to calculate the guide route; a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height; and a display configured to display an image drawn by the display controller in a display area provided to overlap a position of a windshield, wherein the display controller is configured to draw the arrow in a three-dimensional ridge shape which is bent at a center in a width direction. 4. 
A vehicular display device configured to display a guide route to a destination of a host vehicle, the vehicular display device comprising: a navigation device configured to calculate the guide route; a display controller configured to draw an arrow having a shaft with an arrowhead at one end of the shaft, as a guide figure for guidance along the guide route calculated by the navigation device, such that the arrow is shaped as a three-dimensional object in which one surface of the arrow is a continuous flat surface from the shaft to the arrowhead and the other surface of the arrow includes a first portion with a first width having a first height, and a second portion with a second width smaller than the first width and having a second height smaller than the first height; and a display configured to display an image drawn by the display controller in a display area provided to overlap a position of a windshield, wherein the display controller is configured to draw the arrow in a three-dimensional cockscomb shape only a center portion of which in a width direction is protruded to have a height.
2,600
10,321
10,321
14,961,693
2,657
The present invention provides a system and method for representing quasi-periodic (“qp”) waveforms comprising, representing a plurality of limited decompositions of the qp waveform, wherein each decomposition includes a first and second amplitude value and at least one time value. In some embodiments, each of the decompositions is phase adjusted such that the arithmetic sum of the plurality of limited decompositions reconstructs the qp waveform. These decompositions are stored into a data structure having a plurality of attributes. Optionally, these attributes are used to reconstruct the qp waveform, or patterns or features of the qp wave can be determined by using various pattern-recognition techniques. Some embodiments provide a system that uses software, embedded hardware or firmware to carry out the above-described method. Some embodiments use a computer-readable medium to store the data structure and/or instructions to execute the method.
1. An apparatus comprising: a computer having a storage unit; a receiver operatively coupled to the computer and configured to obtain a digitized signal having a series of digital values of a quasi-periodic waveform and to store the series of digital values in the storage unit, wherein the quasi-periodic waveform includes a cardiac signal; a state generator operatively coupled to generate a series of states defined by phase relationships between a plurality of frequency components of the cardiac signal based on the stored series of digital values; and an automatic segment generator configured to automatically output representations of segments of the cardiac signal based on the series of states. 2. A computer-implemented method comprising: obtaining a digitized signal having a series of digital values of a quasi-periodic waveform, wherein the quasi-periodic waveform includes a cardiac signal; generating a series of states defined by phase relationships between a plurality of frequency components of the cardiac signal based on the series of digital values; and automatically generating and outputting representations of segments of the cardiac signal based on the series of states. 3. The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations. 4. 
A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions when executed on a suitable information processor, perform a method comprising: obtaining a digitized signal having a series of digital values of a quasi-periodic waveform, wherein the quasi-periodic waveform includes a cardiac signal; generating a series of states defined by phase relationships between a plurality of frequency components of the cardiac signal based on the series of digital values; and automatically generating and outputting representations of segments of the cardiac signal based on the series of states. 5. The computer-readable medium of claim 4, further comprising instructions such that the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations. 6. The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting representations of segments of the cardiac signal includes creating graphical representations of the segments of the cardiac signal. 7. 
The computer-readable medium of claim 4, further comprising instructions such that the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein each of the plurality of fractional phase representations includes a plurality of fractional phases, wherein each fractional phase is associated with a phase label and includes one or more values representing at least one of an abscissa and an ordinate for each of one or more local cycles of the cardiac signal. 8. The computer-readable medium of claim 4, further comprising instructions such that the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein the series of digital values are complex numbers, and wherein each one of the plurality of fractional-phase representations is associated with an angular range of a complex argument of the cardiac signal. 9. 
The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting of the representations of segments of the cardiac signal includes graphically presenting the cardiac signal with points marked on the cardiac signal, and wherein the points are defined by fractional-phase transition points of each of the plurality of frequency components. 10. The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components, and wherein each of the plurality of points is labeled with a vector representation of one of the series of states. 11. The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked by vertical lines on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components. 12. The apparatus of claim 1, wherein the automatic segment generator creates graphical representations of the segments of the cardiac signal. 13. The apparatus of claim 1, wherein the automatic segment generator creates matrix-table representations of the segments of the cardiac signal. 14. The computer-implemented method of claim 2, wherein the automatically generating and outputting representations of segments of the cardiac signal includes creating graphical representations of the segments of the cardiac signal. 15. 
The computer-implemented method of claim 2, wherein the automatically generating and outputting representations of the segments of the cardiac signal includes creating matrix-table representations of the segments of the cardiac signal. 16. The computer-implemented method of claim 2, wherein the automatically generating and outputting representations of segments of the cardiac signal includes creating a composite representation of each respective segment of the cardiac signal in each cell of a matrix-table graphical representation of the segments of the cardiac signal. 17. The computer-implemented method of claim 2, wherein the generating of the series of states includes generating a resolution of sixteen (16) states. 18. The computer-implemented method of claim 2, wherein the generating of the series of states includes generating a resolution of sixty-four (64) states. 19. The computer-implemented method of claim 2, wherein the generating of the series of states includes generating a resolution of two-hundred-and-fifty-six (256) states. 20. 
The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein each of the plurality of fractional phase representations includes a plurality of fractional phases, and wherein each fractional phase is associated with a phase label and includes one or more values representing at least one of an abscissa and an ordinate for each of at least one local cycle of the cardiac signal. 21. The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components, generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands, and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations; and wherein the method further includes: determining a phase-adjustment value for a respective one of the plurality of fractional-phase representations such that a linear combination of values derived from the respective fractional-phase representation with values derived from one or more other fractional-phase representations obtained from other frequency bands substantially 
reconstructs the quasi-periodic waveform. 22. The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional-phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein the series of digital values include complex numbers, and wherein each one of the plurality of fractional-phase representations is associated with an angular range of a complex argument of the cardiac signal. 23. The computer-implemented method of claim 2, wherein the automatically generating and outputting of the representations of segments of the cardiac signal includes graphically presenting the cardiac signal with points marked on the cardiac signal, and wherein the points are defined by fractional-phase transition points of each of the plurality of frequency components. 24. The computer-implemented method of claim 2, wherein the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components, and wherein each of the plurality of points is labeled with a vector representation of one of the series of states. 25. 
The computer-implemented method of claim 2, wherein the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked by vertical lines on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components. 26. The computer-implemented method of claim 2, wherein the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with end points of the segments being defined by quarter-phase transition points on each of a plurality of frequency components.
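The claims above outline a concrete pipeline: band-filter the cardiac signal into frequency components, label each band's instantaneous phase with a fractional (here quarter) phase, and pack the per-band labels into a single state, so that 2, 3, or 4 bands yield the claimed 16, 64, or 256 state resolutions (4^n states for n bands). A hedged numpy sketch of that pipeline; the function names, FFT-mask band filtering, and analytic-signal phase extraction are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def analytic(x):
    # Analytic signal via FFT: keep DC and Nyquist, double positive frequencies,
    # zero negative frequencies (the standard Hilbert-transform construction).
    n = len(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(np.fft.fft(x) * h)

def quarter_phase(x):
    # Quarter-phase label 0..3 from the quadrant of the instantaneous phase.
    ang = np.angle(analytic(x))
    return ((ang + np.pi) // (np.pi / 2)).astype(int) % 4

def state_sequence(signal, bands, fs):
    """Band-filter the signal, take each band's quarter-phase, and pack the
    per-band labels into one state per sample (4**len(bands) possible states,
    e.g. 16 states for two bands)."""
    n = len(signal)
    freqs = np.abs(np.fft.fftfreq(n, 1 / fs))
    X = np.fft.fft(signal)
    state = np.zeros(n, dtype=int)
    for lo, hi in bands:
        # Crude band-pass: zero every FFT bin outside [lo, hi).
        band = np.real(np.fft.ifft(X * ((freqs >= lo) & (freqs < hi))))
        state = state * 4 + quarter_phase(band)
    return state
```

Segment boundaries in the claims' sense would then fall at the samples where this state sequence changes value, i.e. at the fractional-phase transition points of some frequency component.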
The present invention provides a system and method for representing quasi-periodic (“qp”) waveforms comprising, representing a plurality of limited decompositions of the qp waveform, wherein each decomposition includes a first and second amplitude value and at least one time value. In some embodiments, each of the decompositions is phase adjusted such that the arithmetic sum of the plurality of limited decompositions reconstructs the qp waveform. These decompositions are stored into a data structure having a plurality of attributes. Optionally, these attributes are used to reconstruct the qp waveform, or patterns or features of the qp wave can be determined by using various pattern-recognition techniques. Some embodiments provide a system that uses software, embedded hardware or firmware to carry out the above-described method. Some embodiments use a computer-readable medium to store the data structure and/or instructions to execute the method. 1. An apparatus comprising: a computer having a storage unit; a receiver operatively coupled to the computer and configured to obtain a digitized signal having a series of digital values of a quasi-periodic waveform and to store the series of digital values in the storage unit, wherein the quasi-periodic waveform includes a cardiac signal; a state generator operatively coupled to generate a series of states defined by phase relationships between a plurality of frequency components of the cardiac signal based on the stored series of digital values; and an automatic segment generator configured to automatically output representations of segments of the cardiac signal based on the series of states. 2. 
A computer-implemented method comprising: obtaining a digitized signal having a series of digital values of a quasi-periodic waveform, wherein the quasi-periodic waveform includes a cardiac signal; generating a series of states defined by phase relationships between a plurality of frequency components of the cardiac signal based on the series of digital values; and automatically generating and outputting representations of segments of the cardiac signal based on the series of states. 3. The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations. 4. A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions when executed on a suitable information processor, perform a method comprising: obtaining a digitized signal having a series of digital values of a quasi-periodic waveform, wherein the quasi-periodic waveform includes a cardiac signal; generating a series of states defined by phase relationships between a plurality of frequency components of the cardiac signal based on the series of digital values; and automatically generating and outputting representations of segments of the cardiac signal based on the series of states. 5. 
The computer-readable medium of claim 4, further comprising instructions such that the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations. 6. The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting representations of segments of the cardiac signal includes creating graphical representations of the segments of the cardiac signal. 7. The computer-readable medium of claim 4, further comprising instructions such that the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein each of the plurality of fractional phase representations includes a plurality of fractional phases, wherein each fractional phase is associated with a phase label and includes one or more values representing at least one of an abscissa and an ordinate for each of one or more local cycles of the cardiac signal. 8. 
The computer-readable medium of claim 4, further comprising instructions such that the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein the series of digital values are complex numbers, and wherein each one of the plurality of fractional-phase representations is associated with an angular range of a complex argument of the cardiac signal. 9. The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting of the representations of segments of the cardiac signal includes graphically presenting the cardiac signal with points marked on the cardiac signal, and wherein the points are defined by fractional-phase transition points of each of the plurality of frequency components. 10. The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components, and wherein each of the plurality of points is labeled with a vector representation of one of the series of states. 11. 
The computer-readable medium of claim 4, further comprising instructions such that the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked by vertical lines on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components. 12. The apparatus of claim 1, wherein the automatic segment generator creates graphical representations of the segments of the cardiac signal. 13. The apparatus of claim 1, wherein the automatic segment generator creates matrix-table representations of the segments of the cardiac signal. 14. The computer-implemented method of claim 2, wherein the automatically generating and outputting representations of segments of the cardiac signal includes creating graphical representations of the segments of the cardiac signal. 15. The computer-implemented method of claim 2, wherein the automatically generating and outputting representations of the segments of the cardiac signal includes creating matrix-table representations of the segments of the cardiac signal. 16. The computer-implemented method of claim 2, wherein the automatically generating and outputting representations of segments of the cardiac signal includes creating a composite representation of each respective segment of the cardiac signal in each cell of a matrix-table graphical representation of the segments of the cardiac signal. 17. The computer-implemented method of claim 2, wherein the generating of the series of states includes generating a resolution of sixteen (16) states. 18. The computer-implemented method of claim 2, wherein the generating of the series of states includes generating a resolution of sixty-four (64) states. 19. 
The computer-implemented method of claim 2, wherein the generating of the series of states includes generating a resolution of two-hundred-and-fifty-six (256) states. 20. The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein each of the plurality of fractional phase representations includes a plurality of fractional phases, and wherein each fractional phase is associated with a phase label and includes one or more values representing at least one of an abscissa and an ordinate for each of at least one local cycle of the cardiac signal. 21. 
The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components, generating a plurality of fractional phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands, and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations; and wherein the method further includes: determining a phase-adjustment value for a respective one of the plurality of fractional-phase representations such that a linear combination of values derived from the respective fractional-phase representation with values derived from one or more other fractional-phase representations obtained from other frequency bands substantially reconstructs the quasi-periodic waveform. 22. The computer-implemented method of claim 2, wherein the generating of the series of states further includes: frequency filtering the cardiac signal into a plurality of frequency bands of a plurality of different frequencies, wherein each one of the plurality of frequency bands corresponds to one of the plurality of frequency components; generating a plurality of fractional-phase representations, wherein each one of the plurality of fractional-phase representations corresponds to one of the plurality of frequency bands; and determining the series of states based on a sequence of phases in the plurality of fractional-phase representations, wherein the series of digital values include complex numbers, and wherein each one of the plurality of fractional-phase representations is associated with an angular range of a complex argument of the cardiac signal. 23. 
The computer-implemented method of claim 2, wherein the automatically generating and outputting of the representations of segments of the cardiac signal includes graphically presenting the cardiac signal with points marked on the cardiac signal, and wherein the points are defined by fractional-phase transition points of each of the plurality of frequency components. 24. The computer-implemented method of claim 2, wherein the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components, and wherein each of the plurality of points is labeled with a vector representation of one of the series of states. 25. The computer-implemented method of claim 2, wherein the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with a plurality of points marked by vertical lines on the cardiac signal, and wherein the plurality of points are defined by fractional-phase transitions of each of a plurality of the frequency components. 26. The computer-implemented method of claim 2, wherein the automatically generating and outputting of representations of segments of the cardiac signal includes graphically presenting the cardiac signal with end points of the segments being defined by quarter-phase transition points on each of a plurality of frequency components.
2,600
10,322
10,322
15,189,371
2,612
A display system is provided for a vehicle equipped with a camera for supplying streamed video images of a scene rearward of the vehicle. The display system includes an image processing unit for receiving the streamed video images and processing the streamed video images, and a display for displaying the processed streamed video images. To perform processing of the streamed video images, the image processing unit is configured to: detect amplitude-modulated light sources in the streamed video images, classify the detected amplitude-modulated light sources into one of several possible classifications, select the streamed video images in which an amplitude-modulated light source is detected that flickers based upon the classification of the amplitude-modulated light source, and modify the selected streamed video images to correct for flicker of any amplitude-modulated light sources in the selected streamed video images.
1. A display system for a vehicle equipped with a camera for supplying streamed video images of a scene rearward of the vehicle, the display system comprising: an image processing unit for receiving the streamed video images and processing the streamed video images; and a display for displaying the processed streamed video images, wherein to perform processing of the streamed video images, the image processing unit is configured to: detect amplitude-modulated light sources in the streamed video images, classify the detected amplitude-modulated light sources into one of several possible classifications, select the streamed video images in which an amplitude-modulated light source is detected that flickers based upon the classification of the amplitude-modulated light source, and modify the selected streamed video images to correct for flicker of any amplitude-modulated light sources in the selected streamed video images. 2. The display system of claim 1, wherein the image processing unit modifies the selected streamed video images such that the pixels representing each of the detected amplitude-modulated light sources are maintained at a state so that when the processed streamed video images are displayed, each of the detected amplitude-modulated light sources that is represented by the pixels appears to have no perceivable flicker. 3. The display system of claim 2, wherein each of the detected amplitude-modulated light sources are maintained by substituting low pixel values from off periods with higher pixel values from on periods. 4. The display system of claim 1, wherein the image processing unit is further configured to track the detected amplitude-modulated light sources through image frames of the streamed video images. 5. 
The display system of claim 4, wherein the image processing unit modifies the selected streamed video images such that the pixels representing each of the detected amplitude-modulated light sources are maintained at a state so that when the processed streamed video images are displayed, each of the detected amplitude-modulated light sources that is represented by the pixels appears to have no perceivable flicker and appears at the expected locations in the images based upon the tracking of each of the detected amplitude-modulated light sources. 6. The display system of claim 1, wherein the image processing unit does not modify the streamed video images to correct for flicker from light sources classified as a turn signal or emergency vehicle light. 7. The display system of claim 1, wherein the image processing unit classifies the detected amplitude-modulated light sources into at least two classes where a first class of detected amplitude-modulated light sources has a flicker not perceivable by a human when viewed directly by the human, and a second class of detected amplitude-modulated light sources has a flicker that is perceivable by a human when viewed directly by the human. 8. The display system of claim 7, wherein the streamed video images in which an amplitude-modulated light source is detected that is classified in the first class is modified by substituting pixels representing each of the detected amplitude-modulated light sources that is classified in the first class such that the pixels representing each of the detected amplitude-modulated light sources are always at a state so that when the processed streamed video images are displayed, the detected amplitude-modulated light source that is classified in the first class appears to have no perceivable flicker. 9. 
The display system of claim 7, wherein the image processing unit classifies the detected amplitude-modulated light sources into the first class when a frequency of the flicker in the light sources is above a threshold frequency and classifies the detected amplitude-modulated light sources into the second class when a frequency of the flicker in the light sources is below the threshold frequency. 10. A rearview assembly for mounting to the vehicle, the rearview assembly comprising the display system of claim 1. 11. A display system comprising: an image processing unit for receiving streamed video images and processing the streamed video images; and a display for displaying the processed streamed video images, wherein to perform processing of the streamed video images, the image processing unit is configured to: detect amplitude-modulated light sources in the streamed video images, classify the detected amplitude-modulated light sources into at least two classes where a first class of detected amplitude-modulated light sources has a flicker not perceivable by a human when viewed directly by the human, and a second class of detected amplitude-modulated light sources has a flicker that is perceivable by a human when viewed directly by the human, track the detected amplitude-modulated light sources through image frames of the streamed video images, and modify the streamed video images in which an amplitude-modulated light source is detected that is classified in the first class by substituting pixels representing each of the detected amplitude-modulated light sources that is classified in the first class such that the pixels representing the detected amplitude-modulated light source are always at a state so that when the processed streamed video images are displayed, each of the detected amplitude-modulated light sources that is classified in the first class appears to have no perceivable flicker. 12. 
The display system of claim 11, wherein the image processing unit classifies the detected amplitude-modulated light sources into the first class when a frequency of the flicker in the light sources is above a threshold frequency and classifies the detected amplitude-modulated light sources into the second class when a frequency of the flicker in the light sources is below the threshold frequency. 13. The display system of claim 12, wherein each of the detected amplitude-modulated light sources is maintained by substituting low pixel values from off periods with higher pixel values from on periods. 14. The display system of claim 11, wherein the image processing unit does not modify the streamed video images to correct for flicker from light sources classified in the second class. 15. The display system of claim 14, wherein light sources classified in the second class include turn signals and emergency vehicle lights. 16. A rearview assembly for mounting to the vehicle, the rearview assembly comprising the display system of claim 11. 17. 
A method of processing streamed video images comprising: detecting amplitude-modulated light sources in the streamed video images; classifying the detected amplitude-modulated light sources into at least two classes where a first class of detected amplitude-modulated light sources has a flicker not perceivable by a human when viewed directly by the human, and a second class of detected amplitude-modulated light sources has a flicker that is perceivable by a human when viewed directly by the human; tracking the detected amplitude-modulated light sources through image frames of the streamed video images; and modifying the streamed video images in which an amplitude-modulated light source is detected that is classified in the first class by substituting pixels representing each of the detected amplitude-modulated light sources that is classified in the first class such that the pixels representing the detected amplitude-modulated light source are always at a state so that when the processed streamed video images are displayed, each of the detected amplitude-modulated light sources that is classified in the first class appears to have no perceivable flicker. 18. The method of claim 17, wherein the detected amplitude-modulated light sources are classified into the first class when a frequency of the flicker in the light sources is above a threshold frequency and are classified into the second class when a frequency of the flicker in the light sources is below the threshold frequency. 19. The method of claim 17, wherein the light sources classified in the second class are not corrected for flicker. 20. The method of claim 19, wherein light sources classified in the second class include turn signals and emergency vehicle lights.
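The flicker correction described in these claims — classify an amplitude-modulated light source by its flicker frequency against a threshold, then hold first-class sources at their on-period pixel values — can be sketched as follows. This is a minimal illustration only: the 30 Hz threshold, function names, and scalar pixel values are assumptions, not taken from the claims.

```python
# Sketch of the claimed flicker correction. First-class sources flicker too
# fast for a human to perceive directly (e.g. PWM-driven LED tail lamps that
# beat against the camera frame rate); second-class sources flicker
# intentionally (turn signals, emergency lights) and are left untouched.

FLICKER_THRESHOLD_HZ = 30.0  # assumed threshold, not specified in the claims

def classify(flicker_hz):
    """Classify a detected amplitude-modulated light source by flicker rate."""
    return "first" if flicker_hz > FLICKER_THRESHOLD_HZ else "second"

def correct_pixel(pixel_value, last_on_value, source_class):
    """Substitute a low off-period pixel value with the higher on-period
    value for first-class sources, so the displayed source never flickers."""
    if source_class == "first" and pixel_value < last_on_value:
        return last_on_value
    return pixel_value
```

In a real pipeline `correct_pixel` would be applied to every tracked pixel of each first-class source across frames, with `last_on_value` updated whenever the source is observed in its on period.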
A display system is provided for a vehicle equipped with a camera for supplying streamed video images of a scene rearward of the vehicle. The display system includes an image processing unit for receiving the streamed video images and processing the streamed video images, and a display for displaying the processed streamed video images. To perform processing of the streamed video images, the image processing unit is configured to: detect amplitude-modulated light sources in the streamed video images, classify the detected amplitude-modulated light sources into one of several possible classifications, select the streamed video images in which an amplitude-modulated light source is detected that flickers based upon the classification of the amplitude-modulated light source, and modify the selected streamed video images to correct for flicker of any amplitude-modulated light sources in the selected streamed video images.
2,600
10,323
10,323
15,255,541
2,699
In a communication system, a reception section of an image forming device receives image data from a mobile terminal. The image forming device prints an image based on the received image data. The image forming device transmits an address of the reception section to the mobile terminal and receives image data transmitted with the address by the mobile terminal in wireless communication. The mobile terminal accepts a selection of a wireless printing application program from plural application programs and displays, side by side, thumbnails of plural items of image data stored in a storage section when the wireless printing application program is started. The mobile terminal further accepts a selection of image data from the plural displayed thumbnails and wirelessly transmits the selected image data with the address to the image forming device.
1. A communication method between a mobile terminal and an image forming device, the image forming device receiving image data from the mobile terminal and printing an image based on received image data, comprising: performing a first process of executing a wireless connection between the mobile terminal and the image forming device based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device; performing a second process of executing a series of procedures of a wireless connection between the mobile terminal and the image forming device, a transmission of selected image data from the mobile terminal to the image forming device, and a printing of an image based on received image data by the image forming device, based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device after an image based on the selected image data is displayed; and switching between the first process and the second process by receiving an operation of a user. 2. The communication method according to claim 1, wherein the mobile terminal is approached to the image forming device in a state where the image based on the image data selected by the user is displayed, in the second process. 3. The communication method according to claim 1, wherein a selection of image data by the user can be received after switching to the second process. 4. 
An image forming device receiving image data from a mobile terminal and printing an image based on received image data, wherein a first process of executing a wireless connection with the mobile terminal is performed based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device; and wherein a second process of executing a series of procedures of a wireless connection with the mobile terminal, a reception of selected image data from the mobile terminal, and a printing of an image based on received image data is performed, based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device after an image based on the selected image data is displayed. 5. A mobile terminal transmitting image data to an image forming device that prints an image based on received image data, wherein a first process of executing a wireless connection with the image forming device is performed based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device; wherein a second process of executing a series of procedures of a wireless connection with the image forming device, a transmission of selected image data to the image forming device, and a printing of an image based on received image data by the image forming device is performed, based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device after an image based on the selected image data is displayed; and wherein the first process and the second process are switched with each other by receiving an operation of a user. 6. 
A non-transitory recording medium that records a program executable by a mobile terminal transmitting image data to an image forming device that prints an image based on received image data, comprising: performing a first process of executing a wireless connection between the mobile terminal and the image forming device based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device; performing a second process of executing a series of procedures of a wireless connection between the mobile terminal and the image forming device, a transmission of selected image data from the mobile terminal to the image forming device, and a printing of an image based on received image data by the image forming device, based on an establishment of a non-contact communication by approaching the mobile terminal to the image forming device after an image based on the selected image data is displayed; and switching between the first process and the second process by receiving an operation of a user.
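The claims describe two user-switchable tap behaviors: in the first process, bringing the terminal near the printer only establishes a wireless connection; in the second process, a tap made while a selected image is displayed also transmits that image and triggers printing. A minimal state sketch (all class and method names are invented for illustration, not from any real printing API):

```python
# Sketch of the two NFC-tap behaviors in the claims. A user operation
# switches between the first process (tap = connect only) and the second
# process (tap while an image is displayed = connect + send + print).

class MobileTerminal:
    def __init__(self):
        self.mode = "first"          # switched by an explicit user operation
        self.displayed_image = None  # image currently shown on screen
        self.log = []                # records what each tap caused

    def switch_mode(self):
        """User operation that toggles between the two processes."""
        self.mode = "second" if self.mode == "first" else "first"

    def on_nfc_tap(self):
        """Triggered by establishment of non-contact communication."""
        self.log.append("wireless connection established")
        if self.mode == "second" and self.displayed_image is not None:
            self.log.append(f"sent {self.displayed_image}")
            self.log.append("printed")
```

A tap in the first mode logs only the connection; after switching modes and displaying an image, the same tap runs the full connect-send-print sequence.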
2,600
10,324
10,324
14,673,927
2,613
A system and method provides for generating an output display on a display device based on at least one ambient condition. The method and system includes a display device or a component associated with a display device that provides for detecting an ambient condition using one or more environmental sensors. The method and system includes analyzing the ambient condition to determine ambient condition factors and retrieving visual display content from at least one visual content database using the ambient condition factors. The method and system then provides the visual display content to the display device so the output display complements the ambient condition.
1. A method for generating an output display on a display device based on at least one ambient condition, the method comprising: detecting an ambient condition using an environmental sensor; analyzing the ambient condition to determine ambient condition factors; using the ambient condition factors, retrieving visual display content from at least one visual content database; and providing the visual display content to the display device so the output display complements the ambient condition. 2. The method of claim 1 further comprising: engaging in network communication with the visual content database; and downloading the visual display content to a local memory device for providing the visual display content to the output display. 3. The method of claim 1, wherein the environmental sensor includes a light sensor for detecting a brightness level as an ambient condition and the visual display content reflects the brightness level. 4. The method of claim 1, wherein the environmental sensor includes an audio sensor detecting ambient noise. 5. The method of claim 4 further comprising: providing an audio output via at least one audio device, the audio output based on the detected ambient noise. 6. The method of claim 5 further comprising: analyzing the ambient noise to detect at least one ambient condition; accessing an audio database having audio content stored therein and selecting audio content based on the at least one ambient condition; and providing the audio content for the audio output. 7. The method of claim 1, wherein the environmental sensor includes a motion detector detecting motion about the display device, the method further comprising: detecting a commotion level using the motion detector; and retrieving the visual display content based at least in part on the commotion level. 8. 
The method of claim 1 further comprising: detecting at least one connected computing device; accessing a user profile from the at least one connected computing device; referencing at least one social media network using the user profile to detect character data; and retrieving visual display content based at least on the character data. 9. The method of claim 1 further comprising: recognizing a person within a proximity to the display device; accessing at least one image from an online data storage location; and providing the at least one image as output on the display device. 10. The method of claim 1, wherein the at least one image is retrieved from a social media web location. 11. A system for generating an output display on a display device based on at least one ambient condition, the system comprising: at least one environmental sensor detecting an ambient condition; a processing device, in response to executable instructions, operative to analyze the ambient condition to determine ambient condition factors; at least one visual content database having visual content stored therein, the processing device further operative to retrieve visual display content therefrom; and the display device operative to provide the visual display content so the output display complements the ambient condition. 12. The system of claim 11 further comprising: a network communication device for communicating with the visual content database; and a local memory device receiving a download of the visual display content for providing the visual display content to the output display. 13. The system of claim 11, wherein the environmental sensor includes a light sensor for detecting a brightness level as an ambient condition and the visual display content reflects the brightness level. 14. The system of claim 11, wherein the environmental sensor includes an audio sensor detecting ambient noise. 15. 
The system of claim 14 further comprising: at least one audio device providing an audio output, the audio output based on the detected ambient noise. 16. The system of claim 15 further comprising: the processing device further operative to analyze the ambient noise to detect at least one ambient condition; and an audio database having audio content stored therein, the processing device operative to access the audio database and select audio content based on the at least one ambient condition. 17. The system of claim 11, wherein the environmental sensor includes a motion detector detecting motion about the display device, the system further comprising: the motion detector detecting a commotion level; and the processing device further operative to retrieve the visual display content based at least in part on the commotion level. 18. The system of claim 11 further comprising: at least one environmental sensor operative to detect at least one connected computing device; and the processing device operative to: access a user profile from the at least one connected computing device; reference at least one social media network using the user profile to detect character data; and retrieve visual display content based at least on the character data. 19. The system of claim 11 further comprising: at least one environmental sensor operative to recognize a person within a proximity to the display device; and the processing device further operative to: access at least one image from an online data storage location; and provide the at least one image as output on the display device. 20. The system of claim 11, wherein the at least one image is retrieved from a social media web location.
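The core loop in these claims — detect an ambient condition, reduce it to factors, and retrieve complementary content from a database — can be sketched as below. The factor names, numeric thresholds, and content table are invented for illustration; the claims do not specify them.

```python
# Sketch of the claimed ambient-condition pipeline: raw sensor readings are
# reduced to coarse ambient-condition factors, which key a lookup into a
# visual content database. Thresholds and filenames are assumptions.

CONTENT_DB = {
    ("bright", "quiet"): "daylight_landscape.png",
    ("bright", "noisy"): "vivid_abstract.png",
    ("dim", "quiet"): "night_sky.png",
    ("dim", "noisy"): "warm_pattern.png",
}

def ambient_factors(lux, noise_db):
    """Analyze sensor readings into ambient condition factors."""
    light = "bright" if lux >= 200 else "dim"
    sound = "noisy" if noise_db >= 60 else "quiet"
    return light, sound

def select_content(lux, noise_db):
    """Retrieve visual content that complements the detected conditions."""
    return CONTENT_DB[ambient_factors(lux, noise_db)]
```

In the claimed system the lookup would run against a remote visual content database over a network connection, with the result cached in a local memory device before display.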
A system and method provides for generating an output display on a display device based on at least one ambient condition. The method and system includes a display device or a component associated with a display device that provides for detecting an ambient condition using one or more environmental sensors. The method and system includes analyzing the ambient condition to determine ambient condition factors and retrieving visual display content from at least one visual content database using the ambient condition factors. Therein, the method and system provides the visual display content to the display device so the output display compliments the ambient condition.1. A method for generating an output display on a display device based on at least one ambient condition, the method comprising: detecting an ambient condition using an environmental sensor; analyzing the ambient condition to determine ambient condition factors; using the ambient condition factors, retrieving visual display content from at least one visual content database; and providing the visual display content to the display device so the output display compliments the ambient condition. 2. The method of claim 1 further comprising: engaging in network communication with the visual content database; and downloading the visual display content to a local memory device for providing the visual display content to the output display. 3. The method of claim 1, wherein the environmental sensor includes a light sensor for detecting a brightness level as an ambient condition and the visual display content reflects the bright level. 4. The method of claim 1, wherein the environmental sensor includes an audio sensor detecting ambient noise. 5. The method of claim 4 further comprising: providing an audio output via at least one audio device, the audio output based on the detected ambient noise. 6. 
The method of claim 5 further comprising: analyzing the ambient noise to detect at least one ambient condition; accessing an audio database having audio content stored therein and selecting audio content based on the at least one ambient condition; and providing the audio content for the audio output. 7. The method of claim 1, wherein the environmental sensor includes a motion detector detecting motion about the display device, the method further comprising: detecting a commotion level using the motion detector; and retrieving the visual display content based at least in part on the commotion level. 8. The method of claim 1 further comprising: detecting at least one connected computing device; accessing a user profile from the at least one connected computing device; referencing at least one social media network using the user profile to detect character data; and retrieving visual display content based at least on the character data. 9. The method of claim 1 further comprising: recognizing a person within a proximity to the display device; accessing at least one image from an online data storage location; and providing the at least one image as output on the display device. 10. The method of claim 1, wherein the at least one image is retrieved from a social media web location. 11. A system for generating an output display on a display device based on at least one ambient condition, the system comprising: at least one environmental sensor detecting an ambient condition; a processing device, in response to executable instructions, operative to analyze the ambient condition to determine ambient condition factors; at least one visual content database having visual content stored therein, the processing device further operative to retrieve visual display content therefrom; and the display device operative to provide the visual display content so the output display complements the ambient condition. 12. 
The system of claim 11 further comprising: a network communication device for communicating with the visual content database; and a local memory device receiving a download of the visual display content for providing the visual display content to the output display. 13. The system of claim 11, wherein the environmental sensor includes a light sensor for detecting a brightness level as an ambient condition and the visual display content reflects the brightness level. 14. The system of claim 11, wherein the environmental sensor includes an audio sensor detecting ambient noise. 15. The system of claim 14 further comprising: at least one audio device providing an audio output, the audio output based on the detected ambient noise. 16. The system of claim 15 further comprising: the processing device further operative to analyze the ambient noise to detect at least one ambient condition; and an audio database having audio content stored therein, the processing device operative to access the audio database and select audio content based on the at least one ambient condition. 17. The system of claim 11, wherein the environmental sensor includes a motion detector detecting motion about the display device, the system further comprising: the motion detector detecting a commotion level; and the processing device further operative to retrieve the visual display content based at least in part on the commotion level. 18. The system of claim 11 further comprising: at least one environmental sensor operative to detect at least one connected computing device; and the processing device operative to: access a user profile from the at least one connected computing device; reference at least one social media network using the user profile to detect character data; and retrieve visual display content based at least on the character data. 19. 
The system of claim 11 further comprising: at least one environmental sensor operative to recognize a person within a proximity to the display device; and the processing device further operative to: access at least one image from an online data storage location; and provide the at least one image as output on the display device. 20. The system of claim 11, wherein the at least one image is retrieved from a social media web location.
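The ambient-display claims above describe a simple pipeline: detect an ambient condition with a sensor, analyze it into ambient condition factors, retrieve matching visual content from a content database, and provide it for display. A minimal sketch of that flow, with entirely hypothetical sensor readings, factor names, and database entries (the patent does not specify any of these), might look like:

```python
# Hypothetical sketch of the claimed ambient-display pipeline.
# Sensor keys, factor names, and database contents are illustrative only.

def analyze(ambient):
    """Turn raw sensor readings into ambient condition factors."""
    factors = []
    # Light sensor: classify the brightness level.
    factors.append("dim" if ambient.get("brightness", 0) < 30 else "bright")
    # Audio sensor: flag significant ambient noise.
    if ambient.get("noise_db", 0) > 70:
        factors.append("noisy")
    return factors

# Stand-in for the visual content database, keyed by factor.
VISUAL_DB = {
    "dim": "low-light artwork",
    "bright": "vivid artwork",
    "noisy": "calming artwork",
}

def select_content(ambient):
    """Retrieve visual display content that complements the ambient condition."""
    return [VISUAL_DB[f] for f in analyze(ambient) if f in VISUAL_DB]

print(select_content({"brightness": 10, "noise_db": 80}))
# → ['low-light artwork', 'calming artwork']
```

In a real deployment the `VISUAL_DB` lookup would be a network query against the remote content database, with results cached in the local memory device as claim 2/claim 12 describe.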
2,600
10,325
10,325
14,495,926
2,619
Systems and methods may provide for conducting a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch. If the one or more domain points are not shared between multiple region sets of the tessellated patch, an intra-region cache may be automatically interrogated for non-shared shading data. If the one or more domain points are shared between multiple region sets of the tessellated patch, an inter-region cache may be automatically interrogated for shared shading data. In one example, one or more references to the shared shading data are generated and associated with the one or more domain points when cache hits occur in the inter-region cache.
1. A computing system comprising: a data interface including one or more of a network controller, a memory controller or a bus, the data interface to obtain an untessellated patch and one or more tessellation factors associated with a three dimensional (3D) scene; a tessellator to generate a tessellated patch and one or more domain points based on the untessellated patch and the one or more tessellation factors; and a domain shader including: an intra-region cache, an inter-region cache, and a cache controller coupled to the intra-region cache and the inter-region cache, the cache controller to conduct a region determination of whether the one or more domain points are shared between multiple region sets of the tessellated patch, interrogate the intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch, and interrogate the inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 2. The system of claim 1, wherein the domain shader further includes an accelerator to generate one or more references to the shared shading data when cache hits occur in the inter-region cache and associate the one or more references with the one or more domain points. 3. The system of claim 1, wherein the domain shader further includes an accelerator to generate one or more references to the non-shared shading data when cache hits occur in the intra-region cache and associate the one or more references with the one or more domain points. 4. The system of claim 1, wherein the domain shader further includes shading logic to shade the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 5. 
The system of claim 1, wherein the tessellator is to associate one or more tags with the one or more domain points, and wherein the domain shader further includes a tag handler to identify the one or more tags associated with the one or more domain points, wherein the region determination is to be conducted based on the one or more tags. 6. The system of claim 1, wherein the inter-region cache is sized to hold approximately twice a maximum number of domain points along a region edge in the tessellated patch. 7. The system of claim 1, wherein the tessellator includes a region sequencer to maximize a likelihood of shared domain points being encountered across regions of the tessellated patch. 8. A method of operating a domain shader, comprising: conducting a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch; interrogating an intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch; and interrogating an inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 9. The method of claim 8, further including: generating one or more references to the shared shading data when cache hits occur in the inter-region cache; and associating the one or more references with the one or more domain points. 10. The method of claim 8, further including: generating one or more references to the non-shared shading data when cache hits occur in the intra-region cache; and associating the one or more references with the one or more domain points. 11. The method of claim 8, further including shading the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 12. 
The method of claim 8, further including: receiving the one or more domain points from a tessellator; and identifying one or more tags associated with the one or more domain points, wherein the region determination is conducted based on the one or more tags. 13. At least one computer readable storage medium comprising a set of instructions which, when executed by a computing platform, cause the computing platform to: conduct a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch; interrogate an intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch; and interrogate an inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 14. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to: generate one or more references to the shared shading data when cache hits occur in the inter-region cache; and associate the one or more references with the one or more domain points. 15. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to: generate one or more references to the non-shared shading data when cache hits occur in the intra-region cache; and associate the one or more references with the one or more domain points. 16. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to shade the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 17. 
The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to: receive the one or more domain points from a tessellator; and identify one or more tags associated with the one or more domain points, wherein the region determination is to be conducted based on the one or more tags. 18. A domain shader comprising: an intra-region cache; an inter-region cache; and a cache controller coupled to the intra-region cache and the inter-region cache, the cache controller to conduct a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch, interrogate the intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch, and interrogate the inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 19. The domain shader of claim 18, further including an accelerator to generate one or more references to the shared shading data when cache hits occur in the inter-region cache and associate the one or more references with the one or more domain points. 20. The domain shader of claim 18, further including an accelerator to generate one or more references to the non-shared shading data when cache hits occur in the intra-region cache and associate the one or more references with the one or more domain points. 21. The domain shader of claim 18, further including shading logic to shade the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 22. 
The domain shader of claim 18, further including a tag handler to receive the one or more domain points from a tessellator and identify one or more tags associated with the one or more domain points, wherein the region determination is to be conducted based on the one or more tags. 23. The domain shader of claim 18, wherein the inter-region cache is sized to hold approximately twice a maximum number of domain points along a region edge in the tessellated patch.
Systems and methods may provide for conducting a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch. If the one or more domain points are not shared between multiple region sets of the tessellated patch, an intra-region cache may be automatically interrogated for non-shared shading data. If the one or more domain points are shared between multiple region sets of the tessellated patch, an inter-region cache may be automatically interrogated for shared shading data. In one example, one or more references to the shared shading data are generated and associated with the one or more domain points when cache hits occur in the inter-region cache.1. A computing system comprising: a data interface including one or more of a network controller, a memory controller or a bus, the data interface to obtain an untessellated patch and one or more tessellation factors associated with a three dimensional (3D) scene; a tessellator to generate a tessellated patch and one or more domain points based on the untessellated patch and the one or more tessellation factors; and a domain shader including: an intra-region cache, an inter-region cache, and a cache controller coupled to the intra-region cache and the inter-region cache, the cache controller to conduct a region determination of whether the one or more domain points are shared between multiple region sets of the tessellated patch, interrogate the intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch, and interrogate the inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 2. 
The system of claim 1, wherein the domain shader further includes an accelerator to generate one or more references to the shared shading data when cache hits occur in the inter-region cache and associate the one or more references with the one or more domain points. 3. The system of claim 1, wherein the domain shader further includes an accelerator to generate one or more references to the non-shared shading data when cache hits occur in the intra-region cache and associate the one or more references with the one or more domain points. 4. The system of claim 1, wherein the domain shader further includes shading logic to shade the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 5. The system of claim 1, wherein the tessellator is to associate one or more tags with the one or more domain points, and wherein the domain shader further includes a tag handler to identify the one or more tags associated with the one or more domain points, wherein the region determination is to be conducted based on the one or more tags. 6. The system of claim 1, wherein the inter-region cache is sized to hold approximately twice a maximum number of domain points along a region edge in the tessellated patch. 7. The system of claim 1, wherein the tessellator includes a region sequencer to maximize a likelihood of shared domain points being encountered across regions of the tessellated patch. 8. 
A method of operating a domain shader, comprising: conducting a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch; interrogating an intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch; and interrogating an inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 9. The method of claim 8, further including: generating one or more references to the shared shading data when cache hits occur in the inter-region cache; and associating the one or more references with the one or more domain points. 10. The method of claim 8, further including: generating one or more references to the non-shared shading data when cache hits occur in the intra-region cache; and associating the one or more references with the one or more domain points. 11. The method of claim 8, further including shading the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 12. The method of claim 8, further including: receiving the one or more domain points from a tessellator; and identifying one or more tags associated with the one or more domain points, wherein the region determination is conducted based on the one or more tags. 13. 
At least one computer readable storage medium comprising a set of instructions which, when executed by a computing platform, cause the computing platform to: conduct a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch; interrogate an intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch; and interrogate an inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 14. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to: generate one or more references to the shared shading data when cache hits occur in the inter-region cache; and associate the one or more references with the one or more domain points. 15. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to: generate one or more references to the non-shared shading data when cache hits occur in the intra-region cache; and associate the one or more references with the one or more domain points. 16. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to shade the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 17. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause a computing system to: receive the one or more domain points from a tessellator; and identify one or more tags associated with the one or more domain points, wherein the region determination is to be conducted based on the one or more tags. 18. 
A domain shader comprising: an intra-region cache; an inter-region cache; and a cache controller coupled to the intra-region cache and the inter-region cache, the cache controller to conduct a region determination of whether one or more domain points associated with a tessellated patch are shared between multiple region sets of the tessellated patch, interrogate the intra-region cache for non-shared shading data if the one or more domain points are not shared between multiple region sets of the tessellated patch, and interrogate the inter-region cache for shared shading data if the one or more domain points are shared between multiple region sets of the tessellated patch. 19. The domain shader of claim 18, further including an accelerator to generate one or more references to the shared shading data when cache hits occur in the inter-region cache and associate the one or more references with the one or more domain points. 20. The domain shader of claim 18, further including an accelerator to generate one or more references to the non-shared shading data when cache hits occur in the intra-region cache and associate the one or more references with the one or more domain points. 21. The domain shader of claim 18, further including shading logic to shade the one or more domain points when cache hits do not occur in either the inter-region cache or the intra-region cache. 22. The domain shader of claim 18, further including a tag handler to receive the one or more domain points from a tessellator and identify one or more tags associated with the one or more domain points, wherein the region determination is to be conducted based on the one or more tags. 23. The domain shader of claim 18, wherein the inter-region cache is sized to hold approximately twice a maximum number of domain points along a region edge in the tessellated patch.
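The cache policy these claims describe is a two-way dispatch: a region determination routes each domain point either to an inter-region cache (points shared between region sets) or to an intra-region cache (points private to one region), and only a miss in the selected cache falls through to full domain shading. A software sketch of that policy, with illustrative names (the patent describes hardware, not this Python model), might be:

```python
# Illustrative software model of the claimed cache dispatch: the region
# determination picks a cache, a hit reuses stored shading data, and a
# miss falls through to the (expensive) domain shading step.

def shade(point):
    """Placeholder for full domain shading of a point."""
    return ("shaded", point)

class DomainShaderCache:
    def __init__(self):
        self.intra = {}  # non-shared shading data, one region set
        self.inter = {}  # shading data shared across region sets
        self.shader_invocations = 0

    def lookup(self, point, shared):
        # Region determination: shared points use the inter-region cache.
        cache = self.inter if shared else self.intra
        if point in cache:            # cache hit: return a reference
            return cache[point]
        self.shader_invocations += 1  # cache miss: shade and fill
        cache[point] = shade(point)
        return cache[point]

c = DomainShaderCache()
c.lookup((0, 0), shared=True)   # miss: shades the point
c.lookup((0, 0), shared=True)   # hit in the inter-region cache
c.lookup((1, 0), shared=False)  # miss in the intra-region cache
print(c.shader_invocations)     # → 2
```

Claim 6's sizing rule (inter-region cache holding roughly twice the maximum number of domain points along a region edge) reflects that only edge points are shared between adjacent regions, so the shared cache can stay small relative to the patch.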
2,600
10,326
10,326
14,985,355
2,647
A multistep guided system for mobile devices that facilitates the creation and dissemination of multistep guided activities from a source computer/device to a plurality of other recipient mobile devices, wherein the multistep guided activities are disseminated to the recipient mobile devices in a form that is compatible with the capabilities of the respective recipient mobile devices. The multistep guided system comprises the source computer/device, the plurality of other recipient mobile devices and a server.
1. A method comprising: accessing, by at least one processor, an electronic questionnaire comprising a task tree, wherein the task tree comprises a plurality of tasks that are mapped to one or more subtasks; providing, to a client device associated with a user, a task from the plurality of tasks, wherein the task comprises a first question of the electronic questionnaire; receiving, from the client device, an indication of user input with respect to the task; based on the indication of user input with respect to the task, determining, by the at least one processor, whether to provide the user a subtask that is mapped to the task within the task tree; and providing, to the client device, the subtask based on a determination that the subtask is to be provided to the user, wherein the subtask comprises a second question of the electronic questionnaire. 2. The method of claim 1, further comprising refraining from providing the subtask to the client device based on a determination that the subtask is not to be provided to the user. 3. The method of claim 2, wherein determining whether to provide the user a subtask that is mapped to the task within the task tree comprises identifying a user response to the first question within the received indication of user input. 4. The method of claim 3, wherein determining whether to provide the user a subtask that is mapped to the task within the task tree further comprises determining the second question is applicable based on the user response. 5. The method of claim 3, wherein determining whether to provide the user a subtask that is mapped to the task within the task tree further comprises determining the second question is inapplicable based on the user response. 6. The method of claim 1, further comprising receiving the task tree from a computing device associated with a questionnaire manager, wherein the questionnaire manager provides input to define the mapping between the plurality of tasks and the one or more subtasks. 7. 
The method of claim 6, further comprising providing a webpage to allow the questionnaire manager to create the questionnaire, wherein the input to define the mapping between the plurality of tasks and the one or more subtasks is provided via the webpage. 8. The method of claim 1, wherein the first question within the task comprises a first level of detail about a topic, and wherein the second question within the subtask comprises a second level of detail about the topic. 9. The method of claim 1, wherein providing the task from the plurality of tasks comprises sending a webpage to the client device for presentation to the user via a browser on the client device. 10. The method of claim 9, wherein providing the subtask based on a determination that the subtask is to be provided to the user comprises sending a second webpage to the client device for presentation to the user via a browser on the client device. 11. A system, comprising: at least one processor; and at least one non-transitory computer readable storage medium storing instructions thereon that, when executed by the at least one processor, cause the system to: access an electronic questionnaire comprising a task tree, wherein the task tree comprises a plurality of tasks that are mapped to one or more subtasks; provide, to a client device associated with a user, a task from the plurality of tasks, wherein the task comprises a first question of the electronic questionnaire; receive, from the client device, an indication of user input with respect to the task; based on the indication of user input with respect to the task, determine whether to provide the user a subtask that is mapped to the task within the task tree; and provide, to the client device, the subtask based on a determination that the subtask is to be provided to the user, wherein the subtask comprises a second question of the electronic questionnaire. 12. 
The system of claim 11, further comprising instructions that, when executed by the at least one processor, cause the system to receive, from the client device, an indication of user input with respect to the subtask. 13. The system of claim 12, further comprising instructions that, when executed by the at least one processor, cause the system to provide an additional subtask to the client device based on identifying a response to the second question within the indication of user input with respect to the subtask, wherein the additional subtask is mapped to the subtask and task within the task tree. 14. The system of claim 12, further comprising instructions that, when executed by the at least one processor, cause the system to provide an additional task to the client device based on identifying a response to the second question within the indication of user input with respect to the subtask, wherein the additional task is not mapped to the subtask. 15. The system of claim 11, wherein each task of the plurality of tasks within the task tree comprises a question for the electronic questionnaire. 16. The system of claim 15, wherein each subtask of the one or more subtasks within the task tree comprises an optional question that is conditionally provided based on user input. 17. 
A non-transitory computer readable storage media storing instructions thereon that, when executed by a processor, cause a computer system to: access an electronic questionnaire comprising a task tree, wherein the task tree comprises a plurality of tasks that are mapped to one or more subtasks; provide, to a client device associated with a user, a task from the plurality of tasks, wherein the task comprises a first question of the electronic questionnaire; receive, from the client device, an indication of user input with respect to the task; based on the indication of user input with respect to the task, determine whether to provide the user a subtask that is mapped to the task within the task tree; and provide, to the client device, the subtask based on a determination that the subtask is to be provided to the user, wherein the subtask comprises a second question of the electronic questionnaire. 18. The non-transitory computer readable storage media of claim 17, wherein determining whether to provide the user the subtask that is mapped to the task within the task tree comprises determining whether the indication of user input is associated with the subtask. 19. The non-transitory computer readable storage media of claim 17, wherein determining whether to provide the user the subtask that is mapped to the task within the task tree comprises identifying a response to the first question of the electronic questionnaire based on the indication of user input. 20. The non-transitory computer readable storage media of claim 17, wherein determining whether to provide the user the subtask comprises selecting between the subtask and an additional subtask, wherein the subtask and the additional subtask are mapped to the task within the task tree.
A multistep guided system for mobile devices that facilitates the creation and dissemination of multistep guided activities from a source computer/device to a plurality of other recipient mobile devices, wherein the multistep guided activities are disseminated to the recipient mobile devices in a form that is compatible with the capabilities of the respective recipient mobile devices. The multistep guided system comprises the source computer/device, the plurality of other recipient mobile devices and a server.1. A method comprising: accessing, by at least one processor, an electronic questionnaire comprising a task tree, wherein the task tree comprises a plurality of tasks that are mapped to one or more subtasks; providing, to a client device associated with a user, a task from the plurality of tasks, wherein the task comprises a first question of the electronic questionnaire; receiving, from the client device, an indication of user input with respect to the task; based on the indication of user input with respect to the task, determining, by the at least one processor, whether to provide the user a subtask that is mapped to the task within the task tree; and providing, to the client device, the subtask based on a determination that the subtask is to be provided to the user, wherein the subtask comprises a second question of the electronic questionnaire. 2. The method of claim 1, further comprising refraining from providing the subtask to the client device based on a determination that the subtask is not to be provided to the user. 3. The method of claim 2, wherein determining whether to provide the user a subtask that is mapped to the task within the task tree comprises identifying a user response to the first question within the received indication of user input. 4. 
The method of claim 3, wherein determining whether to provide the user a subtask that is mapped to the task within the task tree further comprises determining the second question is applicable based on the user response. 5. The method of claim 3, wherein determining whether to provide the user a subtask that is mapped to the task within the task tree further comprises determining the second question is inapplicable based on the user response. 6. The method of claim 1, further comprising receiving the task tree from a computing device associated with a questionnaire manager, wherein the questionnaire manager provides input to define the mapping between the plurality of tasks and the one or more subtasks. 7. The method of claim 6, further comprising providing a webpage to allow the questionnaire manager to create the questionnaire, wherein the input to define the mapping between the plurality of tasks and the one or more subtasks is provided via the webpage. 8. The method of claim 1, wherein the first question within the task comprises a first level of detail about a topic, and wherein the second question within the subtask comprises a second level of detail about the topic. 9. The method of claim 1, wherein providing the task from the plurality of tasks comprises sending a webpage to the client device for presentation to the user via a browser on the client device. 10. The method of claim 9, wherein providing the subtask based on a determination that the subtask is to be provided to the user comprises sending a second webpage to the client device for presentation to the user via a browser on the client device. 11. 
A system, comprising: at least one processor; and at least one non-transitory computer readable storage medium storing instructions thereon that, when executed by the at least one processor, cause the system to: access an electronic questionnaire comprising a task tree, wherein the task tree comprises a plurality of tasks that are mapped to one or more subtasks; provide, to a client device associated with a user, a task from the plurality of tasks, wherein the task comprises a first question of the electronic questionnaire; receive, from the client device, an indication of user input with respect to the task; based on the indication of user input with respect to the task, determine whether to provide the user a subtask that is mapped to the task within the task tree; and provide, to the client device, the subtask based on a determination that the subtask is to be provided to the user, wherein the subtask comprises a second question of the electronic questionnaire. 12. The system of claim 11, further comprising instructions that, when executed by the at least one processor, cause the system to receive, from the client device, an indication of user input with respect to the subtask. 13. The system of claim 12, further comprising instructions that, when executed by the at least one processor, cause the system to provide an additional subtask to the client device based on identifying a response to the second question within the indication of user input with respect to the subtask, wherein the additional subtask is mapped to the subtask and task within the task tree. 14. The system of claim 12, further comprising instructions that, when executed by the at least one processor, cause the system to provide an additional task to the client device based on identifying a response to the second question within the indication of user input with respect to the subtask, wherein the additional task is not mapped to the subtask. 15. 
The system of claim 11, wherein each task of the plurality of tasks within the task tree comprises a question for the electronic questionnaire. 16. The system of claim 15, wherein each subtask of the one or more subtasks within the task tree comprises an optional question that is conditionally provided based on user input. 17. A non-transitory computer readable storage media storing instructions thereon that, when executed by a processor, cause a computer system to: access an electronic questionnaire comprising a task tree, wherein the task tree comprises a plurality of tasks that are mapped to one or more subtasks; provide, to a client device associated with a user, a task from the plurality of tasks, wherein the task comprises a first question of the electronic questionnaire; receive, from the client device, an indication of user input with respect to the task; based on the indication of user input with respect to the task, determine whether to provide the user a subtask that is mapped to the task within the task tree; and provide, to the client device, the subtask based on a determination that the subtask is to be provided to the user, wherein the subtask comprises a second question of the electronic questionnaire. 18. The non-transitory computer readable storage media of claim 17, wherein determining whether to provide the user the subtask that is mapped to the task within the task tree comprises determining whether the indication of user input is associated with the subtask. 19. The non-transitory computer readable storage media of claim 17, wherein determining whether to provide the user the subtask that is mapped to the task within the task tree comprises identifying a response to the first question of the electronic questionnaire based on the indication of user input. 20. 
The non-transitory computer readable storage media of claim 17, wherein determining whether to provide the user the subtask comprises selecting between the subtask and an additional subtask, wherein the subtask and the additional subtask are mapped to the task within the task tree.
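The claimed mechanism is a questionnaire whose follow-up questions are gated on earlier answers: each task carries a question, subtasks mapped to it carry optional follow-up questions, and a subtask is provided only when the user's response makes it applicable. A minimal sketch in Python, with illustrative class and function names not taken from the claims:

```python
# Hypothetical sketch of a task tree: tasks hold questions, and each
# subtask (follow-up question) is keyed by the answer that unlocks it.

class Task:
    def __init__(self, question, subtasks=None):
        self.question = question
        # Map a triggering user response to its subtask (claims 3-5).
        self.subtasks = subtasks or {}

def next_questions(task, answer):
    """Return follow-up questions unlocked by this answer, or [] if none."""
    subtask = task.subtasks.get(answer)
    return [subtask.question] if subtask else []

# Two-level tree: a "yes" to the first question unlocks a second question
# with more detail on the same topic (claim 8).
detail = Task("How many hours per week do you exercise?")
root = Task("Do you exercise regularly?", subtasks={"yes": detail})

print(next_questions(root, "yes"))  # subtask question is provided
print(next_questions(root, "no"))   # subtask withheld (claim 2)
```

In a deployment matching claims 9-10, each returned question would be rendered as a fresh webpage sent to the client's browser rather than printed.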
2,600
10,327
10,327
15,411,394
2,619
A map application is provided that allows a user to create layers that can be used to modify a map. A layer may include features such as points of interest, routes, and polygons that are created or selected by the user. When the user enables a created layer, the generated features are displayed on the map until the user chooses to disable the layer. The features that are displayed in a layer are independent of any features currently being displayed on the map. The features associated with a layer may be static or may be dynamic. Layers may be shared with other users, and any changes made to a layer by an owner or creator of the layer may be pushed or provided to some or all of the users of the layer.
1. A system for generating a layer for a map, and for providing the generated layer, the system comprising: at least one computing device; and a map engine adapted to: receive a selection of a map; receive, from a user computing device, an indication of at least one route on the map, an indication of at least one point of interest on the map, and an indication of at least one polygon on the map; generate a layer for the selected map comprising a plurality of static features and dynamic features, wherein the static features and the dynamic features comprise the at least one route, the at least one point of interest, and the at least one polygon; provide the generated layer to a plurality of other user computing devices; receive an update to one of the dynamic features; change the generated layer responsive to the update; and provide the changed generated layer to the user computing device and to the plurality of other user computing devices. 2. The system of claim 1, wherein the map engine is further adapted to: display the map; receive a selection of the layer; and in response to the selection, display the selected layer on top of the displayed map. 3. The system of claim 2, wherein the map engine adapted to display the map comprises the map engine adapted to display the map including one or more features of the map, and wherein the map engine adapted to display the selected layer on top of the displayed map comprises the map engine adapted to display the selected layer on top of the one or more features of the map. 4. The system of claim 1, wherein the indication of the at least one polygon comprises user input drawing the at least one polygon on a display associated with the at least one computing device. 5. 
The system of claim 1, wherein the map engine adapted to receive the indication of at least one point of interest on the map comprises the map engine adapted to receive results of a query, and to determine the at least one point of interest based on the received results of the query. 6. The system of claim 1, wherein the generated layer is provided to a plurality of users. 7. The system of claim 6, wherein the map engine is further adapted to: receive a change to the generated layer, and in response to the received change, provide an update to each user of the plurality of users that was provided the generated layer. 8. The system of claim 7, wherein providing the update to each user comprises pushing the update to each user. 9. A system for selecting a layer for a map, and for displaying the selected layer on the map, the system comprising: at least one computing device; and a map engine adapted to: receive a selection of a map comprising a first plurality of features; display the first plurality of features of the map; receive a selection of a layer comprising a second plurality of features, wherein the second plurality of features includes at least one dynamic feature received from a user computing device; in response to the selection of the layer: request content for the at least one dynamic feature; receive the content for the at least one dynamic feature; and display the second plurality of features on the map with the first plurality of features, wherein the second plurality of features includes the content for the at least one dynamic feature; provide the layer to a plurality of other user computing devices; receive an update to the at least one dynamic feature; change the layer responsive to the update; and provide the changed layer to the user computing device and to the plurality of other user computing devices. 10. 
The system of claim 9, wherein the map engine is further adapted to: receive the layer from a user; receive a change to at least one feature of the second plurality of features from the user; and in response to the received change, display the second plurality of features on the map with the received change. 11. The system of claim 9, wherein the map engine adapted to request content for the at least one dynamic feature comprises the map engine adapted to run a query associated with the at least one dynamic feature. 12. The system of claim 9, wherein the second plurality of features comprises one or more of a route, a point of interest, and a polygon. 13. The system of claim 9, wherein the map engine is further adapted to: receive a feature; add the received feature to the second plurality of features; and provide the added feature to one or more users associated with the selected layer. 14. The system of claim 13, wherein the received feature comprises a route, and the route is based on user input drawing the route on a display associated with the at least one computing device. 15. A method for generating a layer for a map, and for displaying the layer on the map, the method comprising: receiving, from a user computing device, indications of a plurality of static features and dynamic features on a map by a computing device; generating a layer for the map comprising the plurality of static features and dynamic features by the computing device; providing the generated layer to a plurality of other user computing devices by the computing device; receiving an update to one of the dynamic features by the computing device; changing the generated layer responsive to the update; and in response to the received update, providing the changed generated layer to the user computing device and to the plurality of other user computing devices. 16. 
The method of claim 15, further comprising: displaying the map; receiving a selection of the layer; and in response to the selection, displaying the selected layer on top of the displayed map. 17. The method of claim 15, wherein receiving the indications of the plurality of static features and dynamic features comprises: receiving an indication of at least one route on the map; receiving an indication of at least one point of interest on the map; and receiving an indication of at least one polygon on the map. 18. The method of claim 17, wherein receiving the indication of the at least one polygon comprises receiving user input drawing the at least one polygon on a display associated with the computing device. 19. The method of claim 17, wherein receiving the indication of the at least one point of interest on the map comprises receiving results of a query, and determining the at least one point of interest based on the received results of the query. 20. The method of claim 17, further comprising receiving an annotation for the indication of the at least one polygon, and generating the layer for the map comprising the received annotation.
A map application is provided that allows a user to create layers that can be used to modify a map. A layer may include features such as points of interest, routes, and polygons that are created or selected by the user. When the user enables a created layer, the generated features are displayed on the map until the user chooses to disable the layer. The features that are displayed in a layer are independent of any features currently being displayed on the map. The features associated with a layer may be static or may be dynamic. Layers may be shared with other users, and any changes made to a layer by an owner or creator of the layer may be pushed or provided to some or all of the users of the layer.1. A system for generating a layer for a map, and for providing the generated layer, the system comprising: at least one computing device; and a map engine adapted to: receive a selection of a map; receive, from a user computing device, an indication of at least one route on the map, an indication of at least one point of interest on the map, and an indication of at least one polygon on the map; generate a layer for the selected map comprising a plurality of static features and dynamic features, wherein the static features and the dynamic features comprise the at least one route, the at least one point of interest, and the at least one polygon; provide the generated layer to a plurality of other user computing devices; receive an update to one of the dynamic features; change the generated layer responsive to the update; and provide the changed generated layer to the user computing device and to the plurality of other user computing devices. 2. The system of claim 1, wherein the map engine is further adapted to: display the map; receive a selection of the layer; and in response to the selection, display the selected layer on top of the displayed map. 3. 
The system of claim 2, wherein the map engine adapted to display the map comprises the map engine adapted to display the map including one or more features of the map, and wherein the map engine adapted to display the selected layer on top of the displayed map comprises the map engine adapted to display the selected layer on top of the one or more features of the map. 4. The system of claim 1, wherein the indication of the at least one polygon comprises user input drawing the at least one polygon on a display associated with the at least one computing device. 5. The system of claim 1, wherein the map engine adapted to receive the indication of at least one point of interest on the map comprises the map engine adapted to receive results of a query, and to determine the at least one point of interest based on the received results of the query. 6. The system of claim 1, wherein the generated layer is provided to a plurality of users. 7. The system of claim 6, wherein the map engine is further adapted to: receive a change to the generated layer, and in response to the received change, provide an update to each user of the plurality of users that was provided the generated layer. 8. The system of claim 7, wherein providing the update to each user comprises pushing the update to each user. 9. 
A system for selecting a layer for a map, and for displaying the selected layer on the map, the system comprising: at least one computing device; and a map engine adapted to: receive a selection of a map comprising a first plurality of features; display the first plurality of features of the map; receive a selection of a layer comprising a second plurality of features, wherein the second plurality of features includes at least one dynamic feature received from a user computing device; in response to the selection of the layer: request content for the at least one dynamic feature; receive the content for the at least one dynamic feature; and display the second plurality of features on the map with the first plurality of features, wherein the second plurality of features includes the content for the at least one dynamic feature; provide the layer to a plurality of other user computing devices; receive an update to the at least one dynamic feature; change the layer responsive to the update; and provide the changed layer to the user computing device and to the plurality of other user computing devices. 10. The system of claim 9, wherein the map engine is further adapted to: receive the layer from a user; receive a change to at least one feature of the second plurality of features from the user; and in response to the received change, display the second plurality of features on the map with the received change. 11. The system of claim 9, wherein the map engine adapted to request content for the at least one dynamic feature comprises the map engine adapted to run a query associated with the at least one dynamic feature. 12. The system of claim 9, wherein the second plurality of features comprises one or more of a route, a point of interest, and a polygon. 13. 
The system of claim 9, wherein the map engine is further adapted to: receive a feature; add the received feature to the second plurality of features; and provide the added feature to one or more users associated with the selected layer. 14. The system of claim 13, wherein the received feature comprises a route, and the route is based on user input drawing the route on a display associated with the at least one computing device. 15. A method for generating a layer for a map, and for displaying the layer on the map, the method comprising: receiving, from a user computing device, indications of a plurality of static features and dynamic features on a map by a computing device; generating a layer for the map comprising the plurality of static features and dynamic features by the computing device; providing the generated layer to a plurality of other user computing devices by the computing device; receiving an update to one of the dynamic features by the computing device; changing the generated layer responsive to the update; and in response to the received update, providing the changed generated layer to the user computing device and to the plurality of other user computing devices. 16. The method of claim 15, further comprising: displaying the map; receiving a selection of the layer; and in response to the selection, displaying the selected layer on top of the displayed map. 17. The method of claim 15, wherein receiving the indications of the plurality of static features and dynamic features comprises: receiving an indication of at least one route on the map; receiving an indication of at least one point of interest on the map; and receiving an indication of at least one polygon on the map. 18. The method of claim 17, wherein receiving the indication of the at least one polygon comprises receiving user input drawing the at least one polygon on a display associated with the computing device. 19. 
The method of claim 17, wherein receiving the indication of the at least one point of interest on the map comprises receiving results of a query, and determining the at least one point of interest based on the received results of the query. 20. The method of claim 17, further comprising receiving an annotation for the indication of the at least one polygon, and generating the layer for the map comprising the received annotation.
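The layer model described above bundles static and dynamic features, overlays them on the map independently of the map's own features, and pushes any owner edit to every device the layer was provided to. A small sketch of that publish/update flow, using assumed names not drawn from the claims:

```python
# Hypothetical sketch of a shared map layer: subscribing devices receive
# the layer's features, and an update to a dynamic feature is pushed to
# all subscribers (claims 7-8 and 15).

class Layer:
    def __init__(self, features):
        self.features = dict(features)  # feature name -> value
        self.subscribers = []           # devices the layer was provided to

    def subscribe(self, device):
        self.subscribers.append(device)
        device.update(self.features)

    def update_feature(self, name, value):
        # Change the layer, then push the changed layer to every subscriber.
        self.features[name] = value
        for device in self.subscribers:
            device.update(self.features)

class Device:
    def __init__(self):
        self.visible = {}  # features currently displayed on this device

    def update(self, features):
        self.visible = dict(features)

layer = Layer({"route": "A->B", "traffic": "light"})  # static + dynamic
d1, d2 = Device(), Device()
layer.subscribe(d1)
layer.subscribe(d2)
layer.update_feature("traffic", "heavy")  # dynamic feature changes
print(d1.visible["traffic"], d2.visible["traffic"])
```

A real map engine would render `visible` on top of the base map's own features; here the per-device dictionary simply stands in for that overlay.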
2,600
10,328
10,328
15,408,171
2,657
According to one embodiment, a system for voice assistant tracking and activation includes a tracking component, a wake component, a listening component, and a link component. The tracking component is configured to track availability of a plurality of voice assistant services. The wake component is configured to determine a plurality of wake words, each wake word corresponding to a specific voice assistant service of the plurality of voice assistant services. The listening component is configured to receive audio and detect a first wake word of the plurality of wake words that corresponds to a first voice assistant service of the plurality of voice assistant services. The link component is configured to establish a voice link with the first voice assistant service for voice input by the user.
1. A method comprising: tracking by a computing system integrated in a vehicle the availability of a plurality of voice assistant services within the vehicle; receiving audio from a microphone integrated in the vehicle; determining a plurality of wake words corresponding to the plurality of voice assistant services, the plurality of wake words comprising a first wake word for a first voice assistant service provided by a first device and a second wake word for a second voice assistant service provided by a second device; detecting the first wake word in the audio corresponding to the first voice assistant service of the plurality of voice assistant services; and in response to detecting the first wake word, establishing an audio link between the microphone and the first voice assistant service for voice input by the user; wherein at least one voice assistant service of the plurality of voice assistant services comprises a voice assistant service provided by a computing device not integrated in the vehicle. 2. The method of claim 1, wherein the first wake word comprises a first unique wake word and wherein the second wake word comprises a second unique wake word, wherein determining a plurality of wake words corresponding to the plurality of voice assistant services comprises determining the first unique wake word for an integrated voice assistant service integrated in the vehicle and the second unique wake word for a non-integrated voice assistant service that is not integrated in the vehicle. 3. The method of claim 1, wherein the plurality of voice assistant services comprises one or more of: a voice control application provided by a computing system integrated in the vehicle; a hands-free profile on a mobile computing device not integrated in the vehicle; and a voice assistant application running on a mobile computing device. 4. 
The method of claim 1, wherein tracking availability comprises maintaining a list of available voice assistant services on devices in communication with the computing system integrated in the vehicle. 5. The method of claim 4, further comprising pairing with a computing device and identifying one or more voice assistant services on the device, and adding the one or more voice assistant services to the list of voice assistant services. 6. The method of claim 1, further comprising: detecting the second wake word in the audio corresponding to the second voice assistant service of the plurality of voice assistant services; and in response to detecting the second wake word, establishing an audio link between the microphone and the second voice assistant service for voice input from the user. 7. The method of claim 1, where establishing the voice link comprises providing audio following the wake word to the first voice assistant service and providing audio from the first voice assistant service to one or more speakers integrated into the vehicle. 8. 
A system comprising: a tracking component integrated in a vehicle and configured to track availability of a plurality of voice assistant services; a wake component integrated in the vehicle and configured to determine a plurality of wake words, each wake word corresponding to a specific voice assistant service of the plurality of voice assistant services, wherein the plurality of wake words comprises a first wake word for a first voice assistant service provided by a first device and a second wake word for a second voice assistant service provided by a second device; a listening component integrated in the vehicle and configured to receive audio and detect a first wake word of the plurality of wake words that corresponds to a first voice assistant service of the plurality of voice assistant services; and a link component integrated in the vehicle and configured to establish a voice link with the first voice assistant service for voice input by the user; wherein at least one voice assistant service of the plurality of voice assistant services comprises a voice assistant service provided by a computing device not integrated in the vehicle. 9. The system of claim 8, wherein the first wake word comprises a first unique wake word and wherein the second wake word comprises a second unique wake word. 10. The system of claim 8, wherein the tracking component is configured to track availability of voice assistant services comprising one or more of: a voice control application provided by a computing system integrated in the vehicle; a hands-free profile on a mobile computing device not integrated in the vehicle; and a voice assistant application running on a mobile computing device. 11. The system of claim 8, wherein the tracking component is configured to track availability by maintaining a list of available voice assistant services on devices in communication with the computing system integrated in the vehicle. 12. 
The system of claim 11, wherein the tracking component is configured to pair with a computing device and identify one or more voice assistant services on the device, and add the one or more voice assistant services to the list of voice assistant services. 13. The system of claim 8, wherein the wake component determines a plurality of wake words for a single voice assistant, the link component is configured to, in response to detecting a specific wake word, establish a voice link and provide an indication of the specific wake word to the single voice assistant, and wherein the single voice assistant is configured to activate a different function based on the specific wake word. 14. The system of claim 8, wherein the link component is configured to establish the voice link by providing audio following the wake word to the first voice assistant and providing audio from the first voice assistant to one or more speakers. 15. Computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to: track availability of a plurality of voice assistant services within a vehicle; receive audio from a microphone integrated in the vehicle; determine a plurality of wake words corresponding to the plurality of voice assistant services, the plurality of wake words comprising a first wake word for a first voice assistant service provided by a first device and a second wake word for a second voice assistant service provided by a second device; detect the first wake word in the audio corresponding to the first voice assistant service of the plurality of voice assistant services; and in response to detecting the first wake word, establish an audio link between the microphone and the first voice assistant service for voice input by the user; wherein at least one voice assistant service of the plurality of voice assistant services comprises a voice assistance service provided by a computing device not integrated in the 
vehicle. 16. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to track availability of voice assistant services comprising one or more of: a voice control application provided by a computing system integrated in the vehicle; a hands-free profile on a mobile computing device not integrated in the vehicle; and a voice assistant application running on a mobile computing device. 17. The computer readable storage media of claim 15, wherein the instructions cause the one or more processors to track availability by maintaining a list of available voice assistant services on devices in communication with the computing system integrated in the vehicle. 18. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to pair with a computing device and identify one or more voice assistant services on the device, and add the one or more voice assistant services to the list of voice assistant services. 19. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to: detect the second wake word in the audio corresponding to the second voice assistant service of the plurality of voice assistant services; and in response to detecting the second wake word, establish an audio link between the microphone and the second voice assistant service for voice input from the user. 20. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to establish the voice link by providing audio following the wake word to the first voice assistant and providing audio from the first voice assistant to one or more speakers.
According to one embodiment, a system for voice assistant tracking and activation includes a tracking component, a wake component, a listening component, and a link component. The tracking component is configured to track availability of a plurality of voice assistant services. The wake component is configured to determine a plurality of wake words, each wake word corresponding to a specific voice assistant service of the plurality of voice assistant services. The listening component is configured to receive audio and detect a first wake word of the plurality of wake words that corresponds to a first voice assistant service of the plurality of voice assistant services. The link component is configured to establish a voice link with the first voice assistant service for voice input by the user.1. A method comprising: tracking by a computing system integrated in a vehicle the availability of a plurality of voice assistant services within the vehicle; receiving audio from a microphone integrated in the vehicle; determining a plurality of wake words corresponding to the plurality of voice assistant services, the plurality of wake words comprising a first wake word for a first voice assistant service provided by a first device and a second wake word for a second voice assistant service provided by a second device; detecting the first wake word in the audio corresponding to the first voice assistant service of the plurality of voice assistant services; and in response to detecting the first wake word, establishing an audio link between the microphone and the first voice assistant service for voice input by the user; wherein at least one voice assistant service of the plurality of voice assistant services comprises a voice assistant service provided by a computing device not integrated in the vehicle. 2. 
The method of claim 1, wherein the first wake word comprises a first unique wake word and wherein the second wake word comprises a second unique wake word, wherein determining a plurality of wake words corresponding to the plurality of voice assistant services comprises determining the first unique wake word for an integrated voice assistant service integrated in the vehicle and the second unique wake word for a non-integrated voice assistant service that is not integrated in the vehicle. 3. The method of claim 1, wherein the plurality of voice assistant services comprises one or more of: a voice control application provided by a computing system integrated in the vehicle; a hands-free profile on a mobile computing device not integrated in the vehicle; and a voice assistant application running on a mobile computing device. 4. The method of claim 1, wherein tracking availability comprises maintaining a list of available voice assistant services on devices in communication with the computing system integrated in the vehicle. 5. The method of claim 4, further comprising pairing with a computing device and identifying one or more voice assistant services on the device, and adding the one or more voice assistant services to the list of voice assistant services. 6. The method of claim 1, further comprising: detecting the second wake word in the audio corresponding to the second voice assistant service of the plurality of voice assistant services; and in response to detecting the second wake word, establishing an audio link between the microphone and the second voice assistant service for voice input from the user. 7. The method of claim 1, where establishing the voice link comprises providing audio following the wake word to the first voice assistant service and providing audio from the first voice assistant service to one or more speakers integrated into the vehicle. 8. 
A system comprising: a tracking component integrated in a vehicle and configured to track availability of a plurality of voice assistant services; a wake component integrated in the vehicle and configured to determine a plurality of wake words, each wake word corresponding to a specific voice assistant service of the plurality of voice assistant services, wherein the plurality of wake words comprises a first wake word for a first voice assistant service provided by a first device and a second wake word for a second voice assistant service provided by a second device; a listening component integrated in the vehicle and configured to receive audio and detect a first wake word of the plurality of wake words that corresponds to a first voice assistant service of the plurality of voice assistant services; and a link component integrated in the vehicle and configured to establish a voice link with the first voice assistant service for voice input by the user; wherein at least one voice assistant service of the plurality of voice assistant services comprises a voice assistant service provided by a computing device not integrated in the vehicle. 9. The system of claim 8, wherein the first wake word comprises a first unique wake word and wherein the second wake word comprises a second unique wake word. 10. The system of claim 8, wherein the tracking component is configured to track availability of voice assistant services comprising one or more of: a voice control application provided by a computing system integrated in the vehicle; a hands-free profile on a mobile computing device not integrated in the vehicle; and a voice assistant application running on a mobile computing device. 11. The system of claim 8, wherein the tracking component is configured to track availability by maintaining a list of available voice assistant services on devices in communication with the computing system integrated in the vehicle. 12. 
The system of claim 11, wherein the tracking component is configured to pair with a computing device and identify one or more voice assistant services on the device, and add the one or more voice assistant services to the list of voice assistant services. 13. The system of claim 8, wherein the wake component determines a plurality of wake words for a single voice assistant, the link component is configured to, in response to detecting a specific wake word, establish a voice link and provide an indication of the specific wake word to the single voice assistant, and wherein the single voice assistant is configured to activate a different function based on the specific wake word. 14. The system of claim 8, wherein the link component is configured to establish the voice link by providing audio following the wake word to the first voice assistant and providing audio from the first voice assistant to one or more speakers. 15. Computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to: track availability of a plurality of voice assistant services within a vehicle; receive audio from a microphone integrated in the vehicle; determine a plurality of wake words corresponding to the plurality of voice assistant services, the plurality of wake words comprising a first wake word for a first voice assistant service provided by a first device and a second wake word for a second voice assistant service provided by a second device; detect the first wake word in the audio corresponding to the first voice assistant service of the plurality of voice assistant services; and in response to detecting the first wake word, establish an audio link between the microphone and the first voice assistant service for voice input by the user; wherein at least one voice assistant service of the plurality of voice assistant services comprises a voice assistant service provided by a computing device not integrated in the 
vehicle. 16. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to track availability of voice assistant services comprising one or more of: a voice control application provided by a computing system integrated in the vehicle; a hands-free profile on a mobile computing device not integrated in the vehicle; and a voice assistant application running on a mobile computing device. 17. The computer readable storage media of claim 15, wherein the instructions cause the one or more processors to track availability by maintaining a list of available voice assistant services on devices in communication with the computing system integrated in the vehicle. 18. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to pair with a computing device and identify one or more voice assistant services on the device, and add the one or more voice assistant services to the list of voice assistant services. 19. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to: detect the second wake word in the audio corresponding to the second voice assistant service of the plurality of voice assistant services; and in response to detecting the second wake word, establish an audio link between the microphone and the second voice assistant service for voice input from the user. 20. The computer readable storage media of claim 15, wherein the instructions further cause the one or more processors to establish the voice link by providing audio following the wake word to the first voice assistant and providing audio from the first voice assistant to one or more speakers.
2,600
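The multi-assistant wake-word routing described in the claims above (track available services, assign each a unique wake word, and link the microphone audio to whichever service's wake word is detected) can be sketched roughly as follows. All class and method names here are hypothetical illustrations, not taken from the patent; real wake-word detection would operate on audio, not a text transcript.

```python
# Illustrative sketch: a tracked list of voice assistant services, each with
# a unique wake word, and a router that establishes an "audio link" to the
# service whose wake word appears in the (already transcribed) audio.
class VoiceAssistantService:
    def __init__(self, name, wake_word, integrated):
        self.name = name
        self.wake_word = wake_word.lower()
        self.integrated = integrated  # True if built into the vehicle

class WakeWordRouter:
    def __init__(self):
        self.services = []       # tracked availability list
        self.active_link = None  # currently linked service, if any

    def add_service(self, service):
        # Pairing step: add a discovered service to the tracked list.
        self.services.append(service)

    def wake_words(self):
        # One unique wake word per tracked service.
        return {s.wake_word: s for s in self.services}

    def process_audio(self, transcript):
        # Detect a wake word; on a match, link to that service and
        # forward only the audio *following* the wake word.
        words = transcript.lower().split()
        table = self.wake_words()
        for i, w in enumerate(words):
            if w in table:
                self.active_link = table[w]
                return self.active_link.name, " ".join(words[i + 1:])
        return None

router = WakeWordRouter()
router.add_service(VoiceAssistantService("IntegratedVoice", "carbot", True))
router.add_service(VoiceAssistantService("PhoneAssistant", "phonebot", False))
result = router.process_audio("carbot navigate home")
# result -> ("IntegratedVoice", "navigate home")
```

The key design point from the claims is that routing is decided per utterance: the same microphone feeds all services, but an audio link is established only to the one whose wake word was heard.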
10,329
10,329
15,321,216
2,642
Method for requesting services at a mobile network for a User Equipment (UE), wherein a geographical home zone for the UE is provided in the mobile network in which location based services are offered to the UE, the method comprising the steps of receiving location information regarding the UE, determining whether the UE is located in or outside the geographical home zone based on the received location information, determining that the UE has entered or left the geographical home zone, transmitting, triggered by the determination that the UE has entered or left the geographical home zone, a spatial trigger message for indicating that the UE has respectively entered or left the geographical home zone, and requesting services for the UE at the mobile network corresponding to whether the UE is located in or outside the geographical home zone.
1-18. (canceled) 19. A method for requesting services at a mobile network for one of a plurality of mobile User Equipment (UE), wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the method comprising: receiving, by a location server, location information regarding the UE in the mobile network; determining, by the location server, whether the UE is located in or outside the one or more geographical home zones based on the received location information; determining, by the location server, that the UE has entered or left the one or more geographical home zones; transmitting, by the location server and triggered by the determination that the UE has entered or left the one or more geographical home zones, a spatial trigger message to an application server for indicating that the UE has respectively entered or left the one or more geographical home zones; and requesting, by the application server, services for the UE at the mobile network corresponding to whether the UE is located in or outside the one or more geographical home zones. 20. The method of claim 19, wherein a geographical home zone is any of: at least one cell area covered by a base station of the mobile network; at least one sector of the cell area covered by the base station; at least one sub-sector of the sector of the cell area covered by the base station. 21. The method of claim 19, wherein the location based Home Zone services comprise an increased or decreased data transfer bandwidth setting for the UE. 22. The method of claim 19, wherein the requesting uses an Internet Protocol (IP) Multimedia Subsystem (IMS) Rx interface. 23. 
A method for indicating a location of one of a plurality of mobile User Equipment (UE) in a mobile network, wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the method comprising: receiving, by a location server, location information regarding the UE in the mobile network; determining, by the location server, whether the UE is located in or outside the one or more geographical home zones based on the received location information; determining, by the location server, that the UE has entered or left the one or more geographical home zones; transmitting, by the location server and in response to determining that the UE has entered or left the one or more geographical home zones, a spatial trigger message for indicating that the UE has respectively entered or left the one or more geographical home zones. 24. The method of claim 23, further comprising receiving, by the location server, a subscription message having Home Zone area parameters for subscribing the UE to the location server. 25. The method of claim 23: wherein the location server is configured to maintain a location database, the location database comprising information whether the UE is located in the one or more home zones; wherein the determining that the UE has entered or left the one or more geographical home zones comprises comparing, by the location server, the determined location of the UE with the information in the location database. 26. The method of claim 23, wherein the receiving location information comprises receiving, by the location server, location information in relation to the UE. 27. The method of claim 23, wherein the receiving location information comprises retrieving, by the location server, location information in relation to the UE and received by the location server. 28. 
A location server configured to indicate a location of one of a plurality of mobile User Equipment (UE) in a mobile network, wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the location server comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the location server is operable to: receive location information regarding the UE in the mobile network; determine whether the UE is located in or outside the one or more geographical home zones based on the received location information; determine that the UE has entered or left the one or more geographical home zones; transmit, in response to a determination that the UE has entered or left the one or more geographical home zones, a spatial trigger message to an application server for indicating that the UE has respectively entered or left the one or more geographical home zones. 29. The location server of claim 28, wherein the instructions are such that the location server is operable to receive a subscription message having Home Zone area parameters for subscribing the UE to the location server. 30. The location server of claim 28: wherein the location server is configured to maintain a location database, the location database comprising information whether the UE is located in the one or more home zones; wherein the instructions are such that the location server is operable to compare the determined location of the UE with the information in the location database. 31. The location server of claim 28, wherein the instructions are such that the location server is operable to retrieve location information in relation to the UE and received by the location server. 32. The location server of claim 28, wherein the location based Home Zone services comprise an increased or decreased data transfer bandwidth setting for the UE. 33. 
A non-transitory computer readable recording medium storing a computer program product for indicating a location of one of a plurality of mobile User Equipment (UE) in a mobile network, wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the computer program product comprising software instructions which, when run on processing circuitry of a location server, cause the location server to: receive location information regarding the UE in the mobile network; determine whether the UE is located in or outside the one or more geographical home zones based on the received location information; determine that the UE has entered or left the one or more geographical home zones; transmit, in response to determining that the UE has entered or left the one or more geographical home zones, a spatial trigger message for indicating that the UE has respectively entered or left the one or more geographical home zones. 34. 
A communication system for requesting services at a mobile network for one of a plurality of mobile User Equipment (UE), wherein one or more geographical home zones for a UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the communication system comprising: a location server; the location server comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the location server is operable to: receive location information regarding the UE in the mobile network; determine whether the UE is located in or outside the one or more geographical home zones based on the received location information; determine that the UE has entered or left the one or more geographical home zones; transmit, in response to a determination that the UE has entered or left the one or more geographical home zones, a spatial trigger message to an application server for indicating that the UE has respectively entered or left the one or more geographical home zones; an application server configured to request services for the UE at the mobile network corresponding to whether the UE is located in or outside the one or more geographical home zones.
Method for requesting services at a mobile network for a User Equipment (UE), wherein a geographical home zone for the UE is provided in the mobile network in which location based services are offered to the UE, the method comprising the steps of receiving location information regarding the UE, determining whether the UE is located in or outside the geographical home zone based on the received location information, determining that the UE has entered or left the geographical home zone, transmitting, triggered by the determination that the UE has entered or left the geographical home zone, a spatial trigger message for indicating that the UE has respectively entered or left the geographical home zone, and requesting services for the UE at the mobile network corresponding to whether the UE is located in or outside the geographical home zone.1-18. (canceled) 19. A method for requesting services at a mobile network for one of a plurality of mobile User Equipment (UE), wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the method comprising: receiving, by a location server, location information regarding the UE in the mobile network; determining, by the location server, whether the UE is located in or outside the one or more geographical home zones based on the received location information; determining, by the location server, that the UE has entered or left the one or more geographical home zones; transmitting, by the location server and triggered by the determination that the UE has entered or left the one or more geographical home zones, a spatial trigger message to an application server for indicating that the UE has respectively entered or left the one or more geographical home zones; and requesting, by the application server, services for the UE at the mobile network corresponding to whether the UE is located in or outside the one or more 
geographical home zones. 20. The method of claim 19, wherein a geographical home zone is any of: at least one cell area covered by a base station of the mobile network; at least one sector of the cell area covered by the base station; at least one sub-sector of the sector of the cell area covered by the base station. 21. The method of claim 19, wherein the location based Home Zone services comprise an increased or decreased data transfer bandwidth setting for the UE. 22. The method of claim 19, wherein the requesting uses an Internet Protocol (IP) Multimedia Subsystem (IMS) Rx interface. 23. A method for indicating a location of one of a plurality of mobile User Equipment (UE) in a mobile network, wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the method comprising: receiving, by a location server, location information regarding the UE in the mobile network; determining, by the location server, whether the UE is located in or outside the one or more geographical home zones based on the received location information; determining, by the location server, that the UE has entered or left the one or more geographical home zones; transmitting, by the location server and in response to determining that the UE has entered or left the one or more geographical home zones, a spatial trigger message for indicating that the UE has respectively entered or left the one or more geographical home zones. 24. The method of claim 23, further comprising receiving, by the location server, a subscription message having Home Zone area parameters for subscribing the UE to the location server. 25. 
The method of claim 23: wherein the location server is configured to maintain a location database, the location database comprising information whether the UE is located in the one or more home zones; wherein the determining that the UE has entered or left the one or more geographical home zones comprises comparing, by the location server, the determined location of the UE with the information in the location database. 26. The method of claim 23, wherein the receiving location information comprises receiving, by the location server, location information in relation to the UE. 27. The method of claim 23, wherein the receiving location information comprises retrieving, by the location server, location information in relation to the UE and received by the location server. 28. A location server configured to indicate a location of one of a plurality of mobile User Equipment (UE) in a mobile network, wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the location server comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the location server is operable to: receive location information regarding the UE in the mobile network; determine whether the UE is located in or outside the one or more geographical home zones based on the received location information; determine that the UE has entered or left the one or more geographical home zones; transmit, in response to a determination that the UE has entered or left the one or more geographical home zones, a spatial trigger message to an application server for indicating that the UE has respectively entered or left the one or more geographical home zones. 29. 
The location server of claim 28, wherein the instructions are such that the location server is operable to receive a subscription message having Home Zone area parameters for subscribing the UE to the location server. 30. The location server of claim 28: wherein the location server is configured to maintain a location database, the location database comprising information whether the UE is located in the one or more home zones; wherein the instructions are such that the location server is operable to compare the determined location of the UE with the information in the location database. 31. The location server of claim 28, wherein the instructions are such that the location server is operable to retrieve location information in relation to the UE and received by the location server. 32. The location server of claim 28, wherein the location based Home Zone services comprise an increased or decreased data transfer bandwidth setting for the UE. 33. A non-transitory computer readable recording medium storing a computer program product for indicating a location of one of a plurality of mobile User Equipment (UE) in a mobile network, wherein one or more geographical home zones for the UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the computer program product comprising software instructions which, when run on processing circuitry of a location server, cause the location server to: receive location information regarding the UE in the mobile network; determine whether the UE is located in or outside the one or more geographical home zones based on the received location information; determine that the UE has entered or left the one or more geographical home zones; transmit, in response to determining that the UE has entered or left the one or more geographical home zones, a spatial trigger message for indicating that the UE has respectively entered or left the one or more geographical home zones. 34. 
A communication system for requesting services at a mobile network for one of a plurality of mobile User Equipment (UE), wherein one or more geographical home zones for a UE are provided in the mobile network in which one or more location based Home Zone services are offered to the UE, the communication system comprising: a location server; the location server comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the location server is operable to: receive location information regarding the UE in the mobile network; determine whether the UE is located in or outside the one or more geographical home zones based on the received location information; determine that the UE has entered or left the one or more geographical home zones; transmit, in response to a determination that the UE has entered or left the one or more geographical home zones, a spatial trigger message to an application server for indicating that the UE has respectively entered or left the one or more geographical home zones; an application server configured to request services for the UE at the mobile network corresponding to whether the UE is located in or outside the one or more geographical home zones.
2,600
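The spatial-trigger flow in the record above (location server maintains a per-UE database of in-zone state, compares each position report against the home zone, and emits a trigger message only on an enter/leave transition) can be sketched as follows. The circular-zone geometry and all names are assumptions for illustration; the claims leave the zone shape (cell, sector, sub-sector) and message format open.

```python
# Illustrative sketch: a location server that detects home-zone transitions
# and records the spatial trigger messages it would send to an application
# server. A zone is modeled here as a circle around a center point.
import math

class LocationServer:
    def __init__(self, zone_center, zone_radius):
        self.zone_center = zone_center  # (x, y) of the home zone
        self.zone_radius = zone_radius
        self.location_db = {}           # ue_id -> last known in-zone flag
        self.triggers = []              # spatial trigger messages emitted

    def in_zone(self, position):
        dx = position[0] - self.zone_center[0]
        dy = position[1] - self.zone_center[1]
        return math.hypot(dx, dy) <= self.zone_radius

    def report_location(self, ue_id, position):
        # Receive location info; emit a trigger only on a zone transition,
        # by comparing the new state with the location database entry.
        now_inside = self.in_zone(position)
        was_inside = self.location_db.get(ue_id)
        self.location_db[ue_id] = now_inside
        if was_inside is not None and now_inside != was_inside:
            event = "entered" if now_inside else "left"
            self.triggers.append((ue_id, event))
        return now_inside

server = LocationServer(zone_center=(0.0, 0.0), zone_radius=100.0)
server.report_location("ue1", (10.0, 10.0))   # inside, no prior state
server.report_location("ue1", (200.0, 0.0))   # leaves the zone -> trigger
server.report_location("ue1", (50.0, 0.0))    # re-enters -> trigger
# server.triggers -> [("ue1", "left"), ("ue1", "entered")]
```

The transition check against the stored state is what keeps the server from re-sending triggers while the UE stays inside (or outside) the zone, matching the claim language that transmission is triggered by the determination that the UE *has entered or left* the zone.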
10,330
10,330
14,464,760
2,626
Provided are systems and methods for providing a stabilized color management system in a solid state lighting panel. Methods according to some embodiments include receiving, in the microcontroller, a color management reference value corresponding to a color characteristic of the solid state lighting panel and adjusting a control mode of the microcontroller responsive to the color management reference value.
1. A lighting panel system, comprising: a lighting panel including a plurality of solid state lighting devices; and a multi-mode color management system that is configured to control the lighting panel and that is further configured to selectively operate in a closed loop control mode that performs control operations of the solid state lighting panel based on a feedback signal or an open loop control mode that performs control operations of the solid state lighting panel without the feedback signal, responsive to a dynamic input signal value, wherein the multi-mode color management system comprises a mode selection module that is configured to estimate a color management change value, compare the color management change value to a threshold value, and set a microcontroller to either the closed loop control mode or the open loop control mode dependent on the color management change value. 2. The system of claim 1, wherein the multi-mode color management system comprises a color management unit that is configured to receive sensor input from a plurality of lighting panel sensors, the color management unit configured to generate color management information to control light output of the plurality of solid state lighting devices. 3. The system of claim 1, wherein the microcontroller is configured to receive color management information from a color management unit and the dynamic input signal value from a user input, wherein the dynamic input signal value corresponds to a color characteristic of the lighting panel. 4. The system of claim 3, wherein the color characteristic of the lighting panel comprises a solid state lighting panel luminance output. 5. The system of claim 3, wherein the color characteristic of the lighting panel comprises a solid state lighting panel chromaticity value. 6. 
The system of claim 1, wherein the mode selection module is further configured to set the microcontroller to the closed loop control mode if the color management change value is greater than the threshold value. 7. The system of claim 6, wherein the mode selection module is further configured to set the microcontroller to the open loop control mode if the color management change value is less than the threshold value. 8. The system of claim 1, wherein the color management change value comprises a difference between the dynamic input signal value and a current color management value. 9. The system of claim 1, further comprising an increment module that is configured to estimate a plurality of increment values between the dynamic input signal value and a current color management value. 10. A backlit display device configured to utilize the lighting panel system of claim 1. 11. A lighting panel system, comprising: a lighting panel including a plurality of solid state lighting devices; and a multi-mode color management system that is configured to control the lighting panel and that is further configured to selectively operate in a closed loop control mode that performs control operations of the solid state lighting panel based on a feedback signal or an open loop control mode that performs control operations of the solid state lighting panel without the feedback signal, wherein the multi-mode color management system comprises a mode selection module that is configured to estimate a color management change value, compare the color management change value to a threshold value, and to initially set a microcontroller to operate in the open loop control mode for a change in color that is larger than a color reference value and then to set the microcontroller in the closed loop control mode after the change in color that is larger than the color reference value is performed in the open loop control mode. 12. 
The system of claim 11, wherein the multi-mode color management system comprises a color management unit that is configured to receive sensor input from a plurality of lighting panel sensors, the color management unit configured to generate color management information to control light output of the plurality of solid state lighting devices. 13. The system of claim 11, wherein the microcontroller is configured to receive color management information from a color management unit and the dynamic input signal value from a user input, wherein the dynamic input signal value corresponds to a color characteristic of the lighting panel. 14. The system of claim 13, wherein the color characteristic of the lighting panel comprises a solid state lighting panel luminance output. 15. The system of claim 13, wherein the color characteristic of the lighting panel comprises a solid state lighting panel chromaticity value. 16. The system of claim 11, wherein the mode selection module is further configured to set the microcontroller to the closed loop control mode if the color management change value is greater than the threshold value. 17. The system of claim 16, wherein the mode selection module is further configured to set the microcontroller to the open loop control mode if the color management change value is less than the threshold value. 18. The system of claim 11, wherein the color management change value comprises a difference between the dynamic input signal value and a current color management value. 19. The system of claim 11, further comprising an increment module that is configured to estimate a plurality of increment values between the dynamic input signal value and a current color management value. 20. A backlit display device configured to utilize the lighting panel system of claim 11.
Provided are systems and methods for providing a stabilized color management system in a solid state lighting panel. Methods according to some embodiments include receiving, in the microcontroller, a color management reference value corresponding to a color characteristic of the solid state lighting panel and adjusting a control mode of the microcontroller responsive to the color management reference value.1. A lighting panel system, comprising: a lighting panel including a plurality of solid state lighting devices; and a multi-mode color management system that is configured to control the lighting panel and that is further configured to selectively operate in a closed loop control mode that performs control operations of the solid state lighting panel based on a feedback signal or an open loop control mode that performs control operations of the solid state lighting panel without the feedback signal, responsive to a dynamic input signal value, wherein the multi-mode color management system comprises a mode selection module that is configured to estimate a color management change value, compare the color management change value to a threshold value, and set a microcontroller to either the closed loop control mode or the open loop control mode dependent on the color management change value. 2. The system of claim 1, wherein the multi-mode color management system comprises a color management unit that is configured to receive sensor input from a plurality of lighting panel sensors, the color management unit configured to generate color management information to control light output of the plurality of solid state lighting devices. 3. The system of claim 1, wherein the microcontroller is configured to receive color management information from a color management unit and the dynamic input signal value from a user input, wherein the dynamic input signal value corresponds to a color characteristic of the lighting panel. 4. 
The system of claim 3, wherein the color characteristic of the lighting panel comprises a solid state lighting panel luminance output. 5. The system of claim 3, wherein the color characteristic of the lighting panel comprises a solid state lighting panel chromaticity value. 6. The system of claim 1, wherein the mode selection module is further configured to set the microcontroller to the closed loop control mode if the color management change value is greater than the threshold value. 7. The system of claim 6, wherein the mode selection module is further configured to set the microcontroller to the open loop control mode if the color management change value is less than the threshold value. 8. The system of claim 1, wherein the color management change value comprises a difference between the dynamic input signal value and a current color management value. 9. The system of claim 1, further comprising an increment module that is configured to estimate a plurality of increment values between the dynamic input signal value and a current color management value. 10. A backlit display device configured to utilize the lighting panel system of claim 1. 11. 
A lighting panel system, comprising: a lighting panel including a plurality of solid state lighting devices; and a multi-mode color management system that is configured to control the lighting panel and that is further configured to selectively operate in a closed loop control mode that performs control operations of the solid state lighting panel based on a feedback signal or an open loop control mode that performs control operations of the solid state lighting panel without the feedback signal, wherein the multi-mode color management system comprises a mode selection module that is configured to estimate a color management change value, compare the color management change value to a threshold value, and to initially set a microcontroller to operate in the open loop control mode for a change in color that is larger than a color reference value and then to set the microcontroller in the closed loop control mode after the change in color that is larger than the color reference value is performed in the open loop control mode. 12. The system of claim 11, wherein the multi-mode color management system comprises a color management unit that is configured to receive sensor input from a plurality of lighting panel sensors, the color management unit configured to generate color management information to control light output of the plurality of solid state lighting devices. 13. The system of claim 11, wherein the microcontroller is configured to receive color management information from a color management unit and the dynamic input signal value from a user input, wherein the dynamic input signal value corresponds to a color characteristic of the lighting panel. 14. The system of claim 13, wherein the color characteristic of the lighting panel comprises a solid state lighting panel luminance output. 15. The system of claim 13, wherein the color characteristic of the lighting panel comprises a solid state lighting panel chromaticity value. 16. 
The system of claim 11, wherein the mode selection module is further configured to set the microcontroller to the closed loop control mode if the color management change value is greater than the threshold value. 17. The system of claim 16, wherein the mode selection module is further configured to set the microcontroller to the open loop control mode if the color management change value is less than the threshold value. 18. The system of claim 11, wherein the color management change value comprises a difference between the dynamic input signal value and a current color management value. 19. The system of claim 11, further comprising an increment module that is configured to estimate a plurality of increment values between the dynamic input signal value and a current color management value. 20. A backlit display device configured to utilize the lighting panel system of claim 11.
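The two-phase control rule described in claim 11 (drive the panel open-loop for a large color change, then hand control back to the closed loop once the change has been performed) can be sketched as a simple decision function. All names below are illustrative assumptions; the claims specify only the comparison against a threshold/reference value, not a concrete API.

```python
# Hypothetical sketch of the mode-selection rule in claim 11. The "color
# management change value" (claim 18) is modeled as the difference between the
# dynamic input signal value and the current color management value.

OPEN_LOOP = "open"
CLOSED_LOOP = "closed"

def select_modes(dynamic_input: float, current_value: float, threshold: float):
    """Return the sequence of control modes for a requested color change.

    A large change is first performed open-loop (no sensor feedback), then the
    controller settles into closed-loop control; a small change is tracked
    closed-loop from the start.
    """
    change = abs(dynamic_input - current_value)  # color management change value
    if change > threshold:
        # Claim 11: initially open loop for a large change, then closed loop.
        return [OPEN_LOOP, CLOSED_LOOP]
    return [CLOSED_LOOP]
```

For example, a large luminance step would yield `[OPEN_LOOP, CLOSED_LOOP]`, while a small chromaticity trim would stay closed-loop throughout.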
2,600
10,331
10,331
12,893,350
2,648
Network traffic associated with a communication request within a computing device can be identified. The device can comprise a first and a second communication stack, which can address a first and a second network interface within the computing device. The first network interface can be associated with a mobile broadband network and the second network interface with a computing network. First and second portions of the network traffic associated with the communication request can be programmatically determined to be conveyed to the first and second network interfaces, respectively. The first and second portions of network traffic can be conveyed simultaneously to the mobile broadband network associated with the first network interface and the computing network associated with the second network interface.
1. A method for mobile broadband interface aggregation comprising: identifying a network traffic associated with a communication request within a computing device, wherein the computing device comprises of a first communication stack and a second communication stack each with different network access protocols, wherein the first communication stack addresses a first network interface and the second communication stack addresses a second network interface within the computing device, wherein the first network interface is associated with a mobile broadband network and the second network interface is associated with a computing network, wherein the mobile broadband network is associated with a mobile phone network; programmatically determining a first portion of the network traffic associated with the communication request to be conveyed to the first network interface and a second portion of the network traffic associated with the communication request to be conveyed to second network interface; and simultaneously conveying the first portion of network traffic to the mobile broadband network associated with the first network interface and the second portion of the network traffic associated with the second network interface. 2. The method of claim 1, wherein the mobile phone network conforms to at least one of a Global System for Mobile Communications (GSM) network, Code Division Multiple Access 2000 (CDMA2000) network, 802.11 network, 802.16 network, 802.20 network, and a Wireless Universal Serial Bus (WUSB) network and the computing network is a network conforming to at least one of a Global System for Mobile Communications (GSM) network, Code Division Multiple Access 2000 (CDMA2000) network, 802.11 network, 802.16 network, 802.20 network, and a Wireless Universal Serial Bus (WUSB) network. 3. 
The method of claim 1, wherein the first network interface is a wireless mobile modem associated with a mobile computing device, wherein the mobile computing device is at least one of a mobile phone, a laptop, a netbook, a tablet computer, a portable multi-media device, and a portable digital assistant (PDA). 4. The method of claim 1, wherein the determining is performed by at least one of a load balancing algorithm and a multi-path routing algorithm. 5. The method of claim 1, wherein the first and the second networks is associated with a first and second encryption technologies, wherein the first portion of the network traffic is encrypted with a first encryption technology associated with the first network and the second portion of the network traffic is encrypted with a second encryption technology associated with the second network. 6. The method of claim 1, further comprising: collecting a metric associated with the first and second networks; analyzing the metric to determine a preferred network between the first and second networks; and prioritizing the preferred network for conveying network traffic responsive to the analyzing. 7. The method of claim 1, wherein the network traffic is network traffic received from a proximate computing device. 8. The method of claim 1, wherein the first and second network interface is a fourth generation (4G) and third generation (3G) network interface and the network traffic is at least one of a plurality of data traffic generated by a mobile application executing within the computing device. 9. 
A method for mobile broadband interface aggregation comprising: identifying a plurality of network interfaces within a computing device, wherein the plurality of network interfaces comprises of at least one mobile broadband network interface, wherein the plurality of network interfaces are physically distinct network interfaces, wherein the mobile broadband network is associated with a mobile phone network, wherein at least two of the plurality of network interfaces is associated with a different network access protocol; aggregating the plurality of network interfaces into a logical network interface, wherein the logical interface is associated with at least a network layer, wherein the network layer permits communication with a data link layer and an application layer, wherein the network layer, data link layer, and application layer are layers conforming to an Open Systems Interconnect (OSI) communication model; and conveying a networking traffic to the logical network interface, wherein the logical network interface transmits at least a portion of the network traffic to the at least two network interfaces comprising the plurality of network interfaces. 10. The method of claim 9, further comprising: determining a state change of a network interface comprising the plurality of network interfaces; when the state change results in the network interface becoming unresponsive, automatically disassociating the network interface from the logical network interface; and when the state change results in a network interface becoming responsive, automatically associating the network interface to the logical network interface. 11. The method of claim 9, wherein the plurality of network interfaces comprises of a mobile broadband network interface and at least one of a wireless broadband network interface and a wired broadband network interface. 12. 
The method of claim 9, further comprising: encoding a communication request with a network access protocol destined to an interface having a different network access protocol; translating the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. 13. The method of claim 9, wherein the different access protocol is at least one of a Wireless Application Protocol (WAP) and a Transport Control Protocol/Internet Protocol (TCP/IP). 14. A system for mobile broadband interface aggregation comprising: a processor; a volatile memory; a bus connecting said processor, non-volatile memory, and volatile memory to each other, wherein the volatile memory comprises computer usable program code execute-able by said processor, said computer usable program code comprising: a fusion engine able to route a network traffic associated with a communication stack over a plurality of network links, wherein the plurality of network links comprises of a mobile broadband network associated with a first protocol and at least one of a wireless broadband network interface associated with a second protocol and a wired broadband network interface associated with a third protocol, wherein the mobile broadband network is a network associated with a mobile phone network; and a ruleset configured to selectively convert network access protocols associated with the network traffic protocols between the plurality of network interfaces. 15. 
The system of claim 14, further comprising: a network interface manager configured to manage a plurality of network interfaces associated with the plurality of network links; a data composer able to assemble and disassemble a communication request associated with the network traffic, wherein the network traffic is associated with the plurality of network interfaces; a session handler capable of establishing a communication session between a source and at least one destination entity associated with the network traffic, wherein the communication session comprises of the communication request; a flow controller configured to moderate the transmission speed of the communication session associated with the network traffic transmitted over the plurality of network links; and a routing engine able to convey at least a portion of the communication request to the source entity and the at least one destination entity utilizing the plurality of network links. 16. The system of claim 14, wherein the ruleset is at least one of a user setting, a manufacturer determined setting, and an automatically determined setting. 17. The system of claim 14, wherein the fusion engine is executing within a gateway routing device. 18. The system of claim 14, wherein the fusion engine is a component of a hardware implemented communication stack. 19. The system of claim 14, wherein the fusion engine is a network driver residing within a operating system, wherein the operating system is at least one of a software and a firmware. 20. The system of claim 14, wherein the fusion engine is a component of a hardware abstraction layer. 21. 
A computer program product comprising a tangible, non-transitory computer readable storage medium having computer usable program code embodied therewith, the computer usable program code comprising: computer usable program code stored on a tangible storage medium that when executed by a processor is operable to identify a network traffic associated with a communication request within a computing device, wherein the computing device comprises of a first communication stack and a second communication stack each with different network access protocols, wherein the first communication stack addresses a first network interface and the second communication stack addresses a second network interface within the computing device, wherein the first network interface is associated with a mobile broadband network and the second network interface is associated with a computing network, wherein the mobile broadband network is associated with a mobile phone network; computer usable program code stored on a tangible storage medium that when executed by a processor is operable to determine a first portion of the network traffic associated with the communication request to be conveyed to the first network interface and a second portion of the network traffic associated with the communication request to be conveyed to second network interface; and computer usable program code stored on a tangible storage medium that when executed by a processor is operable to simultaneously convey the first portion of network traffic to the mobile broadband network associated with the first network interface and the second portion of the network traffic associated with the second network interface. 22. 
The computer program product of claim 21, wherein the first and the second networks is associated with a first and second encryption technologies, wherein the first portion of the network traffic is encrypted with a first encryption technology associated with the first network and the second portion of the network traffic is encrypted with a second encryption technology associated with the second network. 23. The computer program product of claim 21, further comprising: computer usable program code stored on a tangible storage medium that when executed by a processor is operable to collect a metric associated with the first and second networks; computer usable program code stored on a tangible storage medium that when executed by a processor is operable to analyze the metric to determine a preferred network between the first and second networks; and computer usable program code stored on a tangible storage medium that when executed by a processor is operable to prioritize the preferred network for conveying network traffic responsive to the analyzing. 24. 
A computer program product comprising a tangible, non-transitory computer readable storage medium having computer usable program code embodied therewith, the computer usable program code comprising: computer usable program code stored on a tangible storage medium that when executed by a processor is operable to identify a plurality of network interfaces within a computing device, wherein the plurality of network interfaces comprises of at least one mobile broadband network interface, wherein the plurality of network interfaces are physically distinct network interfaces, wherein the mobile broadband network is associated with a mobile phone network, wherein at least two of the plurality of network interfaces is associated with a different network access protocol; computer usable program code stored on a tangible storage medium that when executed by a processor is operable to aggregate the plurality of network interfaces into a logical network interface, wherein the logical interface is associated with at least a network layer, wherein the network layer permits communication with a data link layer and an application layer, wherein the network layer, data link layer, and application layer are layers conforming to an Open Systems Interconnect (OSI) communication model; and computer usable program code stored on a tangible storage medium that when executed by a processor is operable to convey a networking traffic to the logical network interface, wherein the logical network interface transmits at least a portion of the network traffic to the at least two network interfaces comprising the plurality of network interfaces. 25. 
The computer program product of claim 24, further comprising: computer usable program code stored on a tangible storage medium that when executed by a processor is operable to determine a state change of a network interface comprising the plurality of network interfaces; computer usable program code stored on a tangible storage medium that when executed by a processor is operable to, when the state change results in the network interface becoming unresponsive, automatically disassociate the network interface from the logical network interface; and computer usable program code stored on a tangible storage medium that when executed by a processor is operable to when the state change results in a network interface becoming responsive, automatically associate the network interface to the logical network interface.
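The splitting step in claim 1 — programmatically determining a first and a second portion of the traffic and conveying them over the two interfaces — can be sketched as a weighted split. The function names, the split ratio, and the interface labels are illustrative assumptions, not taken from the patent; claim 4 only notes that the determination may use a load balancing or multi-path routing algorithm.

```python
# Minimal sketch of the claim-1 traffic split across a mobile broadband
# interface and a second computing-network interface.

def split_traffic(payload: bytes, broadband_share: float = 0.5):
    """Programmatically determine the first and second portions of the traffic."""
    cut = int(len(payload) * broadband_share)
    first_portion = payload[:cut]    # conveyed to the mobile broadband interface
    second_portion = payload[cut:]   # conveyed to the computing-network interface
    return first_portion, second_portion

def convey(payload: bytes):
    # In the claimed method the two portions are conveyed simultaneously; a
    # real driver would issue the sends on separate interfaces, e.g. via
    # threads or async I/O.
    first, second = split_traffic(payload, broadband_share=0.25)
    return {"mobile_broadband": first, "computing_network": second}
```

A load-balancing variant would choose `broadband_share` dynamically from the per-link metrics collected in claim 6.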
2,600
10,332
10,332
15,185,501
2,626
The present invention discloses a flat panel device and a method for operating a flat panel device. The flat panel device has a reset switch. The reset switch is used to disconnect a battery of the flat panel device from a load so as to force the flat panel device to turn off when a touch control panel of the flat panel device is disabled.
1. A flat panel device, comprising: a touch control panel; a battery; and a reset switch for disconnecting the battery from a load of the flat panel device so as to force to turn off the flat panel device when the touch control panel is disabled. 2. The flat panel device of claim 1, further comprising: an enclosure providing a first opening corresponding to the reset switch for a user to toggle the reset switch. 3. The flat panel device of claim 2, further comprising: a groove structure, and wherein the first opening is opened at the groove structure. 4. The flat panel device of claim 3, further comprising: a groove outer cap movably covering the groove structure. 5. The flat panel device of claim 3, further comprising: a transmission line interface to be connected to a power source to charge the battery, wherein the enclosure further provides a second opening corresponding to the transmission line interface; and the second opening is opened at the groove structure. 6. The flat panel device of claim 5, further comprising: a groove outer cap movably covering the groove structure. 7. The flat panel device of claim 1, further comprising: an enclosure; and a battery outer cap assembled to the enclosure to cover the battery and the reset switch disposed within the enclosure. 8. The flat panel device of claim 1, the flat panel device restarting along with re-connection of the disconnected reset switch. 9. The flat panel device of claim 1, further comprising: a power source key, wherein after the re-connection of the disconnected reset switch, the flat panel device restarts along with a press applied to the power source key. 10. 
A method for operating a flat panel device, comprising: disposing a reset switch on a flat panel device to control a conducting condition of a battery of the flat panel device and a load of the flat panel device; and switching the reset switch to disconnect the battery from the load of the flat panel device so as to force to turn off the flat panel device when a touch control panel of the flat panel device is disabled. 11. The method for operating a flat panel device of claim 10, further comprising: after switching the reset switch to disconnect the battery from the load of the flat panel device, re-switching the reset switch to connect the battery to the load of the flat panel device, wherein the flat panel device restarts along with the re-connection of the disconnected reset switch. 12. The method for operating a flat panel device of claim 10, further comprising: after switching the reset switch to disconnect the battery from the load of the flat panel device, re-switching the reset switch to connect the battery to the load of the flat panel device, wherein after the re-connection of the disconnected reset switch, the flat panel device restarts along with a press applied to a power source key of the flat panel device.
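The method of claims 10 and 12 can be modeled as a small state machine: toggling the reset switch breaks the battery-to-load connection and forces the device off regardless of the (disabled) touch panel, and after re-connection a power-key press restarts the device. The class and method names below are illustrative, not from the patent.

```python
# Toy model of the claimed reset-switch behavior (claims 10 and 12).

class FlatPanelDevice:
    def __init__(self):
        self.battery_connected = True   # reset switch in the connected position
        self.powered_on = True
        self.touch_enabled = True

    def toggle_reset_switch(self):
        """Switch the battery-to-load connection on or off."""
        self.battery_connected = not self.battery_connected
        if not self.battery_connected:
            # Forced power-off: works even when the touch panel is disabled.
            self.powered_on = False

    def press_power_key(self):
        """Claim 12: after re-connection, a power-key press restarts the device."""
        if self.battery_connected:
            self.powered_on = True
```

Claim 11 describes the alternative in which the device restarts along with the re-connection itself, without a separate key press.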
The present invention discloses a flat panel device and a method for operating a flat panel device. The flat panel device has a reset switch. The reset switch is used to disconnect a battery of the flat panel device from a load so as to force to turn off the flat panel device when a touch control panel of the flat panel device is disabled.
2,600
10,333
10,333
15,323,004
2,622
A vehicular display apparatus that displays an image on a windshield of a vehicle has an attention target detector configured to detect an attention target to which attention of a driver of the vehicle needs to be drawn, and calculate a distance from the attention target to the vehicle, a display controller configured to perform display control that displays an attention mark on the windshield in a superimposed manner such that from a point of view of the driver, the attention mark is displayed close to the attention target detected by the attention target detector, the attention mark being displayed to draw the attention of the driver to the attention target, and a display modulator configured to change, according to the distance, a highlight level for highlighted display of the attention mark, the display control for which is performed by the display controller.
1. A vehicular display apparatus that displays an image on a windshield of a vehicle, the vehicular display apparatus comprising: an attention target detector configured to detect an attention target to which attention of a driver of the vehicle needs to be drawn, and calculate a distance from the attention target to the vehicle; a display controller configured to perform display control that displays an attention mark on the windshield in a superimposed manner such that from a point of view of the driver, the attention mark is displayed close to the attention target detected by the attention target detector, the attention mark being displayed to draw the attention of the driver to the attention target; and a display modulator configured to change, according to the distance, a highlight level for highlighted display of the attention mark, the display control for which is performed by the display controller, wherein the display controller changes a display size of the attention mark according to the distance. 2. The vehicular display apparatus according to claim 1, wherein the display modulator changes luminance of the attention mark according to the distance. 3. The vehicular display apparatus according to claim 1, wherein the display modulator changes spatial frequency of the attention mark according to the distance. 4. The vehicular display apparatus according to claim 1, wherein the display modulator changes time frequency of the attention mark according to the distance. 5. (canceled) 6. The vehicular display apparatus according to claim 1, wherein the display controller corrects a display position of the attention mark according to a time difference between detection of the attention target and display of the attention mark. 7. 
A vehicular display method performed by a vehicular display apparatus that displays an image on a windshield of a vehicle, the vehicular display method comprising: detecting an attention target to which attention of a driver of the vehicle needs to be drawn, and calculating a distance from the attention target to the vehicle; performing display control for displaying an attention mark on the windshield in a superimposed manner such that from a point of view of the driver, the attention mark is displayed close to the attention target, the attention mark being displayed to draw the attention of the driver to the attention target; changing a highlight level for highlighted display of the attention mark according to the distance; and changing a display size of the attention mark according to the distance.
2,600
10,334
10,334
14,634,687
2,652
A first digital signal processor (DSP) associated with a first user outputs audio to the first user via first headphones, and captures speech from the first user via a first microphone. Similarly, a second DSP associated with a second user outputs audio to the second user via second headphones, and captures speech from the second user via a second microphone. The first DSP is coupled to the second DSP in order to allow the first and second users to share music and communicate with one another. The first user may speak into the first microphone, and the first and second DSPs may then interoperate to output that speech to the second user without substantially disrupting audio output to the second user. Each of the first and second users may also select between first and second audio sources that may be coupled to the first and second DSPs, respectively.
1. A computer-implemented method for generating an audio signal, the method comprising: receiving a first signal from a first audio source; transmitting the first signal to an output element associated with the first user for output; receiving a second signal from an input element associated with a second user; combining the first signal with the second signal to produce a combined signal; and transmitting the combined signal to the output element associated with the first user for output. 2. The computer-implemented method of claim 1, wherein combining the first signal with the second signal comprises: ducking the first signal in response to the second signal to generate a ducked first signal; and adding the ducked first signal to the second signal. 3. The computer-implemented method of claim 2, wherein ducking the first signal comprises adjusting an amplitude that is associated with the first signal and corresponds to a first frequency range based on an amplitude that is associated with the second signal and corresponds to a second frequency range. 4. The computer-implemented method of claim 3, wherein the first frequency range is substantially the same as the second frequency range. 5. The computer-implemented method of claim 1, further comprising pre-processing the second signal to filter out at least a portion of a third signal that is input to the input element associated with the second user by an output element associated with the second user. 6. The computer-implemented method of claim 1, further comprising pre-processing the second signal to filter out at least a portion of a fourth signal that is input to an input element associated with the first user. 7. The computer-implemented method of claim 1, wherein the first audio source is associated with the first user, and further comprising causing the output element associated with the first user to output the first signal to the first user in response to the first user selecting the first audio source. 8. 
The computer-implemented method of claim 1, wherein the first audio source is associated with the second user, and further comprising causing the output element associated with the first user to output the first signal to the first user in response to the first user selecting the first audio source. 9. The computer-implemented method of claim 1, wherein the first audio source is associated with either the first user or the second user, and further comprising causing the output element associated with the first user to output the first signal to the first user in response to determining that a second audio source is currently inactive. 10. A system for generating an audio signal, comprising: a first output element associated with a first user and configured to produce audio signals; a first input element associated with the first user and configured to receive audio signals; a first audio source configured to generate audio signals; a first circuit element coupled to the first output element, the first input element, and the first audio source and configured to: receive a first signal from the first audio source; transmit the first signal to the first output element for output; receive a second signal from a second input element associated with a second user; combine the first signal with the second signal to produce a combined signal; and transmit the combined signal to the first output element for output. 11. The system of claim 10, wherein the first circuit element includes a ducker coupled to the first input element and configured to duck the first signal based on the audio signals received by the first input element. 12. 
The system of claim 10, wherein the first circuit element includes a ducker coupled to the first input element and to the first audio source and configured to duck the first signal based on the second signal to generate a ducked first signal, wherein ducking the first signal comprises adjusting an amplitude that is associated with the first signal and corresponds to a first frequency range based on an amplitude that is associated with the second signal and corresponds to a second frequency range. 13. The system of claim 12, wherein the first circuit element further includes a sum unit coupled to the second input element and configured to combine audio signals, wherein the sum unit adds the ducked first signal to the second signal to generate the combined signal. 14. The system of claim 12, wherein the first circuit element includes a filter coupled to the first input element, wherein the filter is configured to filter out at least a portion of the first signal or the ducked first signal from a third signal received by the first input element to generate a fourth signal, and wherein the filter comprises an adaptive filter or a spectral subtractor. 15. The system of claim 14, wherein the first circuit element further includes a first adaptive echo cancellation unit configured to be coupled to a second adaptive cancellation unit within a second processing unit associated with the second user. 16. The system of claim 15, wherein an output associated with the first adaptive echo cancellation unit is coupled to an input associated with the second adaptive echo cancellation unit, and an output associated with the second adaptive echo cancellation unit is coupled to an input associated with the first adaptive echo cancellation unit. 17. 
The system of claim 16, wherein the first adaptive echo cancellation unit is configured to filter the fourth signal based on a fifth signal output by the second adaptive echo cancellation unit to reduce echo associated the fourth signal, and wherein the fifth signal is received by the second input element. 18. The system of claim 10, wherein the first circuit element further includes a routing circuit configured to: couple the first audio source to the first output element and to a second output element when operating in a first configuration; and couple a second audio source to the first output element when operating in a second configuration. 19. The system of claim 18, further comprising a control circuit coupled to the routing circuit and configured to: cause the routing circuit to operate in the first configuration when the first audio source is active and the second audio source is not active; and cause the routing circuit to operate in the second configuration when the first audio source is not active and the second audio source is active. 20. The system of claim 18, wherein the routing circuit is further configured to operate in the first configuration or the second configuration based on the state of a switch within the routing circuit. 21. A non-transitory computer-readable medium storing program instructions that, when executed by a processing unit, cause the processing unit to generate an audio signal by performing the steps of: receiving a first signal from a first audio source; transmitting the first signal to an output element associated with the first user for output; receiving a second signal from an input element associated with a second user; combining the first signal with the second signal to produce a combined signal; and transmitting the combined signal to the output element associated with the first user for output. 22. 
The non-transitory computer-readable medium of claim 19, wherein the step of combining the first signal with the second signal comprises: adjusting an amplitude associated with the first signal and corresponding to a first frequency range based on an amplitude associated with the second audio signal and corresponding to a second frequency range to generate a ducked first signal; and summing the ducked first signal with the second signal. 23. A system for accessing audio signals from two different audio sources, the system comprising: a first routing circuit coupled to a first audio source; and a first output element coupled to the first routing circuit, wherein, in a first state, the first routing circuit is configured to route an audio signal from the first audio source to a first output element for output, and wherein, in a second state, the first routing circuit is configured to route an audio signal from a second audio source to the first output element for output. 24. The system of claim 23, wherein the first routing circuit includes a first multiplexor, and, in the first state, the first multiplexor is configured in the first state to pass the audio signal from the first audio source to the first output element. 25. The system of claim 24, wherein the first routing circuit further includes a second multiplexor, and, in the second state, the second multiplexor is configured to pass the audio signal from the second audio source to the first multiplexor, and, in the second state, the first multiplexor is configured to pass the audio signal from the second audio source to the first output element and is configured to not pass the audio signal from the first audio source to the first output element. 26. 
The system of claim 25, wherein the first routing circuit further includes a switch that controls the configurations of the first multiplexor and the second multiplexor as well as how the audio signal from the first audio source and the audio signal from the second audio source are routed through the first routing circuit.
2,600
10,335
10,335
15,546,042
2,642
Systems and methods relating to adjusting uplink coverage in a cellular communications network are disclosed. In some embodiments, a method of operation of a network node to adjust uplink coverage for one or more cells in a cellular communications network comprises determining that there is a need to adjust uplink beam transformations for one or more cells of a plurality of cells in a cellular communications network. For each cell of the one or more cells, the uplink beam transformation for the cell is a transformation of received uplink signals for the cell from an antenna domain to a beam domain. The method further comprises, upon determining that there is a need to adjust the uplink beam transformations for the one or more cells, determining new uplink beam transformations for the one or more cells and applying the new uplink beam transformations for the one or more cells.
1. A method of operation of a network node to adjust uplink coverage for one or more cells in a cellular communications network, comprising: determining that there is a need to adjust uplink beam transformations for one or more cells of a plurality of cells in a cellular communications network, wherein, for each cell of the one or more cells, the uplink beam transformation for the cell is a transformation of received uplink signals for the cell from an antenna domain to a beam domain; and upon determining that there is a need to adjust the uplink beam transformations for the one or more cells: determining new uplink beam transformations for the one or more cells; and applying the new uplink beam transformations for the one or more cells. 2. The method of claim 1 wherein determining that there is a need to adjust the uplink beam transformations for the one or more cells comprises: evaluating a mismatch between an uplink coverage of the one or more cells and a downlink coverage of the one or more cells; and determining that there is a need to adjust the uplink beam transformations for the one or more cells if the mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells is more than a predefined threshold. 3. The method of claim 2 wherein evaluating the mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells comprises evaluating the mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells in response to a change in downlink cell shaping for at least one of the one or more cells. 4. 
The method of claim 1 wherein determining the new uplink beam transformations for the one or more cells comprises determining the new uplink beam transformations for the one or more cells such that the new uplink beam transformations for the one or more cells reduce or minimize the mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells. 5. The method of claim 4 wherein determining the new uplink beam transformations for the one or more cells comprises determining the new uplink beam transformations for the one or more cells from a plurality of predetermined uplink beam transformations such that the new uplink beam transformations are uplink beam transformations from the plurality of predetermined uplink beam transformations that provide a best match between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells. 6. The method of claim 5 further comprising, after applying the new uplink beam transformations for the one or more cells: determining that a remaining mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells is greater than a predefined threshold; and upon determining that the remaining mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells is greater than a predefined threshold: computing second new uplink beam transformations for the one or more cells that reduce or minimize the remaining mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells; and applying the second new uplink beam transformations for the one or more cells. 7. 
The method of claim 4 wherein determining the new uplink beam transformations for the one or more cells comprises computing the new uplink beam transformations for the one or more cells from a plurality of predetermined uplink beam transformations such that the new uplink beam transformations reduce or minimize the mismatch between the uplink coverage of the one or more cells and the downlink coverage of the one or more cells. 8. The method of claim 1 wherein determining that there is a need to adjust the uplink beam transformations for the one or more cells comprises: evaluating a mismatch between the uplink coverage of the one or more cells and a geographical distribution of uplink traffic of the one or more cells; and determining that there is a need to adjust the uplink beam transformations for the one or more cells if the mismatch between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells is more than a predefined threshold. 9. The method of claim 1 wherein determining the new uplink beam transformations for the one or more cells comprises determining the new uplink beam transformations for the one or more cells such that the new uplink beam transformations for the one or more cells reduce or minimize the mismatch between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells. 10. The method of claim 9 wherein determining the new uplink beam transformations for the one or more cells comprises determining the new uplink beam transformations for the one or more cells from a plurality of predetermined uplink beam transformations such that the new uplink beam transformations are uplink beam transformations from the plurality of predetermined uplink beam transformations that provide a best match between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells. 11. 
The method of claim 10 further comprising, after applying the new uplink beam transformations for the one or more cells: determining that a remaining mismatch between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells is greater than a predefined threshold; and upon determining that the remaining mismatch between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells is greater than a predefined threshold: computing second new uplink beam transformations for the one or more cells that reduce or minimize the remaining mismatch between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells; and applying the second new uplink beam transformations for the one or more cells. 12. The method of claim 9 wherein determining the new uplink beam transformations for the one or more cells comprises computing the new uplink beam transformations for the one or more cells from a plurality of predetermined uplink beam transformations such that the new uplink beam transformations reduce or minimize the mismatch between the uplink coverage of the one or more cells and the geographical distribution of uplink traffic of the one or more cells. 13. The method of claim 8 further comprising: evaluating whether there is a need to perform one or more handovers, one or more Coordinated Multi-Point, CoMP, set changes, and/or one or more carrier aggregation configuration changes for the one or more cells; and performing one or more handovers, one or more CoMP set changes, and/or one or more carrier aggregation configuration changes for the one or more cells upon determining that there is a need to perform one or more handovers, one or more CoMP set changes, and/or one or more carrier aggregation configuration changes for the one or more cells. 14. 
The method of claim 1 wherein the network node is a core network node of the cellular communications network. 15. The method of claim 14 wherein applying the new uplink beam transformations for the one or more cells comprises configuring the one or more cells to use the new uplink beam transformations when processing uplink signals on the one or more cells. 16. The method of claim 1 wherein the network node is a radio access node of the cellular communications network, and the one or more cells are one or more cells served by the radio access node. 17. The method of claim 16 wherein applying the new uplink beam transformations for the one or more cells comprises applying the new uplink beam transformations locally at the radio access node when processing uplink signals on the one or more cells. 18-20. (canceled) 21. A network node for adjusting uplink coverage for one or more cells in a cellular communications network, comprising: one or more processors; and memory comprising instructions executable by the one or more processors, whereby the network node is operable to: determine that there is a need to adjust uplink beam transformations for one or more cells of a plurality of cells in a cellular communications network, wherein, for each cell of the one or more cells, the uplink beam transformation for the cell is a transformation of received uplink signals for the cell from an antenna domain to a beam domain; and upon determining that there is a need to adjust the uplink beam transformations for the one or more cells: determine new uplink beam transformations for the one or more cells; and apply the new uplink beam transformations for the one or more cells. 22. (canceled)
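The logic of claims 1, 2, and 5 above (trigger an adjustment only when the uplink/downlink coverage mismatch exceeds a predefined threshold, then select the best-matching transformation from a plurality of predetermined uplink beam transformations) can be illustrated with a minimal sketch. This is not from the patent; the mismatch metric, data layout, and all names are hypothetical.

```python
# Hypothetical sketch of the claimed adjustment loop. A "coverage" here is a
# toy per-beam list of values; the real transformation operates on received
# uplink signals in the antenna domain.

def coverage_mismatch(uplink_cov, downlink_cov):
    """Toy mismatch metric: mean absolute per-beam coverage difference."""
    return sum(abs(u - d) for u, d in zip(uplink_cov, downlink_cov)) / len(uplink_cov)

def adjust_uplink_coverage(cell, candidates, threshold):
    """cell: {'transform': name, 'downlink_cov': [...]};
    candidates: {name: uplink coverage produced by that transformation}."""
    current_cov = candidates[cell["transform"]]
    # Claim 2: adjust only if the mismatch exceeds the predefined threshold.
    if coverage_mismatch(current_cov, cell["downlink_cov"]) <= threshold:
        return cell["transform"]
    # Claim 5: pick the predetermined transformation with the best match.
    best = min(
        candidates,
        key=lambda name: coverage_mismatch(candidates[name], cell["downlink_cov"]),
    )
    cell["transform"] = best  # Claim 1: apply the new transformation.
    return best
```

A second pass after applying the new transformation would repeat the same threshold check against the remaining mismatch, as in claim 6.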
TechCenter: 2,600

Unnamed: 0: 10,336
level_0: 10,336
ApplicationNumber: 13,594,049
ArtUnit: 2,646
A method for configuring a mobile communication device to perform transactions using a second communication channel that is different from a first communication channel through which the mobile communication device sends voice data. The method includes attaching a secure element to the mobile communication device. The secure element includes a memory storing an application, a processor configured to execute the application stored in the memory; and a wireless transceiver configured to send transaction data associated with the executed application through the second communication channel to a terminal that is remote from the mobile communication device.
1. A method for conducting a financial transaction between a mobile communications device and a point-of-sale terminal, the method comprising: maintaining a payment application and an identification code in a memory of a mobile communications device, the mobile communications device including a processor and a plurality of wireless interfaces each supporting a different communication protocol; wirelessly transmitting the identification code stored in the memory of the mobile communications device to the point-of-sale terminal using a first wireless communication channel, wherein execution of the payment application facilitates the transfer of the identification code to the point-of-sale terminal connected to a remote server, wherein the identification code received at the point-of-sale terminal is used to identify the user corresponding to the identification code, process the financial transaction, and provide a financial transaction response to the mobile communications device; receiving the financial transaction response at the mobile communications device from the remote server using a second wireless communication channel different from the first wireless communication channel; displaying financial transaction data from the financial transaction response on a display of the mobile communications device. 2. The method of claim 1, wherein the first wireless communication channel is associated with visual display. 3. The method of claim 1, wherein the second wireless communication channel is associated with a cellular radio communication channel. 4. The method of claim 1, wherein the financial transaction response data comprises a transaction amount. 5. The method of claim 1, wherein the financial transaction response data comprises a merchant name. 6. The method of claim 1, wherein the financial transaction response data comprises a payment account balance. 7. 
The method of claim 1, wherein digital artifacts associated with the financial transaction are received using the second communication channel. 8. The method of claim 7, wherein digital artifacts include coupons. 9. The method of claim 7, wherein digital artifacts include tickets. 10. The method of claim 7, wherein digital artifacts include receipts. 11. A mobile communications device for conducting a financial transaction with a point-of-sale terminal, the mobile communications device comprising: a memory configured to maintain a payment application and an identification code, the memory coupled to a processor and a plurality of wireless interfaces each supporting a different communication protocol in the mobile communications device; a first wireless interface configured to wirelessly transmit the identification code stored in the memory of the mobile communications device to the point-of-sale terminal using a first wireless communication channel, wherein execution of the payment application facilitates the transfer of the identification code to the point-of-sale terminal connected to a remote server, wherein the identification code received at the point-of-sale terminal is used to identify the user corresponding to the identification code, process the financial transaction, and provide a financial transaction response to the mobile communications device; a second wireless interface configured to receive the financial transaction response at the mobile communications device from the remote server using a second wireless communication channel different from the first wireless communication channel; wherein financial transaction data from the financial transaction response is displayed on the mobile communications device. 12. The mobile communications device of claim 11, wherein the first wireless communication channel is associated with visual display. 13. 
The mobile communications device of claim 11, wherein the second wireless communication channel is associated with a cellular radio communication channel. 14. The mobile communications device of claim 11, wherein the financial transaction response data comprises a transaction amount. 15. The mobile communications device of claim 11, wherein the financial transaction response data comprises a merchant name. 16. The mobile communications device of claim 11, wherein the financial transaction response data comprises a payment account balance. 17. The mobile communications device of claim 11, wherein digital artifacts associated with the financial transaction are received using the second communication channel. 18. The mobile communications device of claim 17, wherein digital artifacts include coupons. 19. The mobile communications device of claim 17, wherein digital artifacts include tickets. 20. A computer readable storage medium comprising: computer code for maintaining a payment application and an identification code in a memory of a mobile communications device, the mobile communications device including a processor and a plurality of wireless interfaces each supporting a different communication protocol; computer code for wirelessly transmitting the identification code stored in the memory of the mobile communications device to the point-of-sale terminal using a first wireless communication channel, wherein execution of the payment application facilitates the transfer of the identification code to the point-of-sale terminal connected to a remote server, wherein the identification code received at the point-of-sale terminal is used to identify the user corresponding to the identification code, process the financial transaction, and provide a financial transaction response to the mobile communications device; computer code for receiving the financial transaction response at the mobile communications device from the remote server using a second wireless communication 
channel different from the first wireless communication channel; computer code for displaying financial transaction data from the financial transaction response on a display of the mobile communications device.
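Claim 1's two-channel flow (transmit the identification code to the point-of-sale terminal over a first wireless channel; receive the transaction response from the remote server over a second, different channel) can be sketched as follows. This is an illustration only, not the patent's implementation: the classes, the return path, and the account lookup are all hypothetical simplifications.

```python
# Hypothetical sketch of the claimed two-channel payment flow.

class RemoteServer:
    def __init__(self, accounts):
        self.accounts = accounts  # identification code -> user name

    def process(self, identification_code, amount):
        user = self.accounts.get(identification_code)
        if user is None:
            return {"status": "declined"}
        # In the claim, this response reaches the device over the second
        # wireless channel (e.g. cellular), not back through the terminal.
        return {"status": "approved", "user": user, "amount": amount}

class PointOfSaleTerminal:
    def __init__(self, server):
        self.server = server

    def receive_id(self, identification_code, amount):
        # Terminal forwards the identification code to the remote server.
        return self.server.process(identification_code, amount)

class MobileDevice:
    def __init__(self, identification_code):
        self.identification_code = identification_code
        self.display = None

    def pay(self, terminal, amount):
        # First channel: transmit the stored identification code.
        response = terminal.receive_id(self.identification_code, amount)
        # Second channel (simplified as a return value here): receive the
        # response and display the financial transaction data.
        self.display = response
        return response
```

Digital artifacts such as coupons, tickets, or receipts (claims 7-10) would arrive over the same second channel as additional fields of the response.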
TechCenter: 2,600

Unnamed: 0: 10,337
level_0: 10,337
ApplicationNumber: 15,503,295
ArtUnit: 2,628
User interface and method for contactlessly operating a hardware operating element in a 3-D gesture mode. The invention proposes a user interface and a method for contactlessly operating a hardware operating element (12), referred to below as the "button", of a user interface in a 3-D gesture mode, by means of which the user interface can be operated using gestures carried out freely in space, referred to below as 3-D gestures. The method comprises the steps of: detecting (100) a user's hand (4); assigning (200) the hand (4) to an area of the user interface (I) assigned to the button (12); and, in response thereto, outputting (300) a suggestion (14, 16) to the user.
1-15. (canceled) 16. A user interface for a vehicle, comprising a processing apparatus; a hardware operating element, operatively coupled to the processing apparatus, wherein the hardware operating element activates and/or controls at least one function of the vehicle; a sensor, operatively coupled to the hardware operating element, the sensor being configured to detect hand gestures that are performed in three-dimensional (3D) space in a detection area in proximity to the sensor; and an evaluation unit, operatively coupled to the sensor, wherein the evaluation unit is configured to recognize at least one of a plurality of hand gestures detected by the sensor, wherein the processing apparatus is configured to generate one or more suggestions in response to the evaluation unit recognizing the at least one of a plurality of detected hand gestures, and wherein the suggestion comprises one or more visual and/or audible indicia relating the hardware element to the at least one function. 17. The user interface according to claim 16, wherein the evaluation unit is configured to recognize the at least one of the plurality of detected hand gestures assigned to the hardware operating element, and wherein the processing apparatus is configured to execute the at least one or more functions in response to recognizing the at least one of the plurality of detected hand gestures. 18. The user interface according to claim 16, wherein the processing apparatus comprises a display, and wherein the detection area comprises an edge area of the display. 19. The user interface according to claim 16, wherein the hardware operating element comprises a button located adjacent to the processing apparatus. 20. The user interface according to claim 16, wherein the processing apparatus is configured to generate the one or more suggestions via at least one of: a display screen, an electroacoustic converter, and/or a lighting apparatus of the hardware operating element. 
21. The user interface according to claim 20, wherein the processing apparatus is configured to generate the one or more suggestions by fading an optical representation of the hardware element on the display screen. 22. The user interface according to claim 21, wherein the processing apparatus is configured to generate the one or more suggestions by fading the optical representation of the hardware element on the display screen in an edge area of the display screen that is closest in proximity to the hardware element. 23. The user interface according to claim 22, wherein the processing apparatus is configured to generate the one or more suggestions by terminating the fading of the optical representation after a predetermined period of time. 24. A method for operating a user interface for a vehicle, comprising providing a hardware operating element for activating and/or controlling at least one function of the vehicle, wherein the hardware operating element is operatively coupled to a processing apparatus; detecting hand gestures that are performed in three-dimensional (3D) space in a detection area in proximity to a sensor operatively coupled to a processing apparatus; recognizing, via an evaluation unit, at least one of a plurality of hand gestures detected by the sensor; and generating, via a processing apparatus, one or more suggestions in response to the evaluation unit recognizing the at least one of a plurality of detected hand gestures, wherein the suggestion comprises one or more visual and/or audible indicia relating the hardware element to at least one function. 25. The method according to claim 24, wherein the evaluation unit is configured to recognize the at least one of the plurality of detected hand gestures assigned to the hardware operating element, and wherein the processing apparatus is configured to execute the at least one function in response to recognizing the at least one of the plurality of detected hand gestures. 26. 
The method according to claim 24, wherein the processing apparatus comprises a display, and wherein the detection area comprises an edge area of the display. 27. The method according to claim 24, wherein the hardware operating element comprises a button located adjacent to the processing apparatus. 28. The method according to claim 24, wherein the processing apparatus is configured to generate the one or more suggestions via at least one of: a display screen, an electroacoustic converter, and/or a lighting apparatus of the hardware operating element. 29. The method according to claim 28, wherein the processing apparatus is configured to generate the one or more suggestions by fading an optical representation of the hardware element on the display screen. 30. The method according to claim 29, wherein the processing apparatus is configured to generate the one or more suggestions by fading the optical representation of the hardware element on the display screen in an edge area of the display screen that is closest in proximity to the hardware element. 31. The method according to claim 30, wherein the processing apparatus is configured to generate the one or more suggestions by terminating the fading of the optical representation after a predetermined period of time. 32. 
A user interface for a vehicle, comprising a display unit; a hardware operating element, operatively coupled to the processing apparatus, wherein the hardware operating element activates and/or controls at least one function of the vehicle, and wherein the hardware operating element is located in proximity to the display unit; a sensor, operatively coupled to the hardware operating element, the sensor being configured to detect hand gestures that are performed in three-dimensional (3D) space in a detection area in proximity to the sensor; and an evaluation unit, operatively coupled to the sensor, wherein the evaluation unit is configured to recognize at least one of a plurality of hand gestures detected by the sensor, wherein the processing apparatus is configured to generate one or more suggestions in response to the evaluation unit recognizing the at least one of a plurality of detected hand gestures, wherein the suggestion comprises one or more visual indicia relating the hardware element, and wherein the visual indicia are displayed on an edge area of the display unit adjacent to the hardware operating element. 33. The user interface according to claim 32, wherein the evaluation unit is configured to recognize the at least one of the plurality of detected hand gestures assigned to the hardware operating element, and wherein the processing apparatus is configured to execute the at least one function in response to recognizing the at least one of the plurality of detected hand gestures. 34. The user interface according to claim 32, wherein the hardware operating element comprises a button located adjacent to the processing apparatus. 35. The user interface according to claim 16, wherein the processing apparatus is configured to generate the one or more suggestions via a lighting apparatus of the hardware operating element.
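The claimed control flow can be sketched in a few lines. This is an illustrative assumption-laden sketch only: the names (`DetectionArea`, `GestureUserInterface`) and the box-shaped detection area are invented for the example, not taken from the patent. It mirrors the claimed sequence of detecting a hand (100), assigning it to the area of the interface associated with the button (200), and outputting a suggestion (300).

```python
from dataclasses import dataclass

@dataclass
class DetectionArea:
    # Hypothetical axis-aligned box in 3-D space near the sensor.
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, pos):
        x, y, z = pos
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

class GestureUserInterface:
    def __init__(self, button_area):
        self.button_area = button_area
        self.suggestions = []

    def on_hand_detected(self, hand_pos):
        # Step 200: assign the detected hand to the button's area.
        if self.button_area.contains(hand_pos):
            # Step 300: output a suggestion relating the button to its
            # function, e.g. fade in an optical representation at the
            # display edge closest to the button.
            self.suggestions.append("fade in button representation at display edge")
            return True
        return False

ui = GestureUserInterface(DetectionArea(0, 1, 0, 1, 0, 0.3))
ui.on_hand_detected((0.5, 0.5, 0.1))   # inside the area: suggestion emitted
ui.on_hand_detected((2.0, 0.5, 0.1))   # outside the area: ignored
```

In this sketch the suggestion is just a string; the claims contemplate visual and/or audible indicia, a display fade, or a lighting apparatus on the button itself.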
2,600
10,338
10,338
15,482,699
2,651
A system and method of providing a virtual avatar to accompany audio signals being broadcast from an electronic device that has a display screen. A virtual avatar model is created. The virtual avatar model is altered in real time in response to audio signals being broadcast from the electronic device. A 3D stereoscopic or auto-stereoscopic video file is created using the virtual avatar model while the virtual avatar model is responding to the audio signals. The 3D video file is played on the display screen of the electronic device. When viewed, the 3D video file shows an avatar that appears, at least in part, to a viewer to be three-dimensional. Furthermore, the avatar appears to extend out from the display screen. The result is a three-dimensional avatar that appears to extend out of a display screen, wherein movements of the avatar are synchronized to audio signals that are being broadcast.
1. A method of providing a virtual avatar to accompany audio signals being broadcast from an electronic device that has a display screen, said method comprising the steps of: creating a virtual avatar model; altering said virtual avatar model in response to said audio signals; creating a stereoscopic video file by imaging said virtual avatar model from two virtual stereoscopic viewpoints while said virtual avatar model is responding to said audio signals; and playing said stereoscopic video file on said display screen of said electronic device, wherein said stereoscopic video file shows an avatar image that, at least in part, appears to a viewer viewing said screen with a stereoscopic image viewer to be three-dimensional and to extend out from said display screen. 2. The method according to claim 1, wherein altering said virtual avatar model in response to said audio signals includes providing said virtual avatar model with a mouth and moving said mouth in response to said audio signals. 3. The method according to claim 1, wherein altering said virtual avatar model in response to said audio signals includes running a word recognition program and moving said virtual avatar model in a preselected manner as certain words are recognized in said audio signals. 4. The method according to claim 3, further including the step of adding supplemental virtual elements to said stereoscopic video file that are shown with said avatar image when certain words are recognized by said word recognition program. 5. The method according to claim 4, wherein creating a virtual avatar model includes selecting a generic avatar model from a database of avatar models and wrapping images of a face onto said generic avatar model. 6. The method according to claim 4, wherein creating a virtual avatar model includes wrapping images of a body onto said generic avatar model. 7. 
The method according to claim 1, wherein said electronic device is a smart phone and said audio signals are from a phone call received through said smart phone. 8. (canceled) 9. The method according to claim 1, wherein said display screen is an auto-stereoscopic display and said stereoscopic video file is formatted to play on said auto-stereoscopic display. 10. A method of providing a virtual avatar to accompany audio signals of a call being received from a caller on a smart phone with a display screen, said method comprising the steps of: retrieving a virtual avatar model of an avatar that is assigned to said caller when said call is received from said caller; altering said virtual avatar model in response to said audio signals contained in said call; creating a stereoscopic video file by imaging said virtual avatar model from two virtual stereoscopic viewpoints in real time while said virtual avatar model is responding to said audio signals; and playing said stereoscopic video file on said display screen of said smart phone. 11. The method according to claim 10, wherein altering said virtual avatar model in response to said audio signals contained within said call includes providing said virtual avatar model with a mouth and moving said mouth in response to said audio signals contained within said call. 12. The method according to claim 10, wherein altering said virtual avatar model in response to said audio signals contained within said call includes running a word recognition program and moving said virtual avatar model in a preselected manner as certain words are recognized in said audio signals contained within said call. 13. The method according to claim 10, wherein retrieving a virtual avatar model includes retrieving said virtual avatar model from a database of avatar models that is accessible by said smart phone. 14. 
The method according to claim 10, wherein said 3D video file is selected from a group consisting of stereoscopic video files that appear three-dimensional when viewed through 3D glasses and auto-stereoscopic files that appear three-dimensional to a naked eye. 15. A method of providing a virtual avatar to accompany audio signals being broadcast from an electronic device that has a display screen, said method comprising the steps of: providing a virtual avatar model; altering said virtual avatar model in response to said audio signals; generating a stereoscopic video file by imaging said virtual avatar model from two virtual stereoscopic viewpoints while said virtual avatar model is responding to said audio signals; and playing said stereoscopic video file on said display screen of said electronic device, wherein said stereoscopic video file shows an avatar image that, at least in part, appears to a viewer of said screen to be three-dimensional when viewed through 3D glasses. 16. The method according to claim 15, wherein altering said virtual avatar model in response to said audio signals includes providing said virtual avatar model with a mouth and moving said mouth in response to said audio signals. 17. The method according to claim 15, wherein altering said virtual avatar model in response to said audio signals includes running a word recognition program and moving said virtual avatar model in a preselected manner as certain words are recognized in said audio signals. 18. The method according to claim 15, wherein providing a virtual avatar model includes selecting a generic avatar model from a database of avatar models. 19. The method according to claim 18, wherein providing a virtual avatar model includes customizing said generic avatar model with accessories selected from an accessory database. 20. The method according to claim 15, wherein said electronic device is a smart phone and said audio signals are from a phone call received through said smart phone.
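The claimed avatar-animation method can be sketched as follows. This is a minimal illustration under invented assumptions: the mouth opening simply tracks audio amplitude (claims 2, 11, 16), the "word recognition" is a placeholder keyword lookup (claims 3, 12, 17), and the keyword table and all names are hypothetical, not from the patent.

```python
# Hypothetical mapping of recognized words to preselected movements.
KEYWORD_ANIMATIONS = {"hello": "wave", "birthday": "confetti"}

class AvatarModel:
    def __init__(self):
        self.mouth_open = 0.0   # 0.0 = closed, 1.0 = fully open
        self.animation = None

    def respond_to_audio(self, amplitude, transcript_words):
        # Mouth movement follows the audio level, clamped to [0, 1].
        self.mouth_open = max(0.0, min(1.0, amplitude))
        # Word recognition triggers a preselected animation.
        for word in transcript_words:
            if word in KEYWORD_ANIMATIONS:
                self.animation = KEYWORD_ANIMATIONS[word]

def render_stereoscopic_frame(model):
    # Image the model from two virtual stereoscopic viewpoints; each
    # "image" here is just a tuple standing in for a rendered view.
    left = ("left-eye", model.mouth_open, model.animation)
    right = ("right-eye", model.mouth_open, model.animation)
    return left, right

avatar = AvatarModel()
avatar.respond_to_audio(0.7, ["hello", "world"])
frame = render_stereoscopic_frame(avatar)
```

A real implementation would render two full camera views of a 3-D mesh per frame and encode them into a stereoscopic or auto-stereoscopic video stream; the tuple pair above only marks where those two viewpoints enter the pipeline.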
2,600
10,339
10,339
13,338,076
2,621
A lighting apparatus includes a string of light emitting diode (LED) sets coupled in series where each set includes at least one LED. A current diversion circuit is coupled to the string and is configured to operate responsive to a bias state transition of one of the LED sets to direct current away from another one of the LED sets. A current limiting circuit is coupled in series with the string and is configured to conduct current responsive to a forward biasing of all of the LED sets. The current limiting circuit includes only passive electrical component(s).
1. A lighting apparatus, comprising: a string of light emitting diode (LED) sets coupled in series, each LED set comprising at least one LED; a current diversion circuit coupled to the string and configured to operate responsive to a bias state transition of one of the LED sets to direct current away from another one of the LED sets; and a current limiting circuit coupled in series with the string and being configured to conduct current responsive to a forward biasing of all of the LED sets; wherein the current limiting circuit is comprised of a passive electrical component without including any active electrical component. 2. The lighting apparatus of claim 1, wherein the current diversion circuit is configured to conduct current via a first one of the LED sets and is configured to be turned off responsive to current through a second one of the LED sets. 3. The lighting apparatus of claim 2, wherein the current diversion circuit is configured to conduct current responsive to a forward biasing of the first one of the LED sets. 4. The lighting apparatus of claim 2, wherein the first one of the LED sets comprises more LEDs than other ones of the LED sets. 5. The lighting apparatus of claim 1, wherein the current diversion circuit is configured to turn off responsive to a voltage at a node of the string. 6. The lighting apparatus of claim 5, further comprising a resistor coupled in series with the string and wherein the current diversion circuit is configured to turn off responsive to a voltage at a terminal of the resistor. 7. The lighting apparatus of claim 6, wherein the current diversion circuit comprises a bipolar transistor providing a controllable current path between a node of the string and a terminal of a power supply, and wherein current through the resistor varies an emitter bias of the bipolar transistor. 8. 
The lighting apparatus of claim 1, wherein the current diversion circuit comprises: a transistor providing a controllable current path between a node of the string and a terminal of a power supply; and a turn-off circuit coupled to a node of the string and to a control terminal of the transistor and configured to control the current path responsive to a control input. 9. The lighting apparatus of claim 8, wherein current through one of the LED sets provides the control input. 10. The lighting apparatus of claim 8, wherein the transistor comprises a bipolar transistor and wherein the turn-off circuit is configured to vary a base current of the bipolar transistor responsive to the control input. 11. The lighting apparatus of claim 1, wherein the bias states of the LED sets transition responsive to a power supply having a varying voltage such that the diversion circuit is activated in response to increases and decreases in the varying voltage. 12. The lighting apparatus of claim 1, wherein the current diversion circuit comprises a plurality of current diversion circuits, respective ones of which are coupled to respective nodes of the string and configured to operate responsive to bias state transitions of respective ones of the LED sets; wherein a number of the plurality of current diversion circuits is less than a number of the LED sets. 13. 
A lighting apparatus comprising: a rectifier circuit configured to be coupled to an alternating current (ac) power source and to generate a rectified ac voltage; a string of serially-connected LED sets, each set comprising at least one LED; a current diversion circuit coupled to the string and configured to be selectively enabled and disabled responsive to bias state transitions of the LED sets as a magnitude of the rectified ac voltage varies; and a current limiting circuit coupled in series with the string and being configured to conduct current responsive to a forward biasing of all of the LED sets; wherein the current limiting circuit is comprised of a passive electrical component without including any active electrical component. 14. The lighting apparatus of claim 13, wherein the current diversion circuit is configured to conduct current via a first one of the LED sets and is configured to be turned off responsive to current through a second one of the LED sets. 15. The lighting apparatus of claim 14, wherein the first one of the LED sets comprises more LEDs than other ones of the LED sets. 16. The lighting apparatus of claim 14, wherein the current diversion circuit is configured to conduct current responsive to a forward biasing of the first one of the LED sets. 17. The lighting apparatus of claim 13, wherein the current diversion circuit is configured to turn off responsive to a voltage at a node of the string. 18. The lighting apparatus of claim 17, further comprising a resistor coupled in series with the string and wherein the current diversion circuit is configured to turn off responsive to a voltage at a terminal of the resistor. 19. 
The lighting apparatus of claim 13, further comprising a resistor coupled in series with the string, wherein the current diversion circuit comprises a bipolar transistor providing a controllable current path between a node of the string and a terminal of the rectifier circuit and wherein current through the resistor varies an emitter bias of the bipolar transistor. 20. The lighting apparatus of claim 13, wherein the current diversion circuit comprises: a transistor providing a controllable current path between a node of the string and a terminal of the rectifier circuit; and a turn-off circuit coupled to a node of the string and to a control terminal of the transistor and configured to control the current path responsive to a control input. 21. The lighting apparatus of claim 20, wherein a current through one of the LED sets provides the control input. 22. The lighting apparatus of claim 20, wherein the transistor comprises a bipolar transistor and wherein the turn-off circuit is configured to vary a base current of the bipolar transistor responsive to the control input. 23. The lighting apparatus of claim 13, wherein the current diversion circuit comprises a plurality of current diversion circuits, respective ones of which are coupled to respective nodes of the string and configured to operate responsive to bias state transitions of respective ones of the LED sets; wherein a number of the plurality of current diversion circuits is less than a number of the LED sets. 24. 
An apparatus comprising: a current diversion circuit coupled to a string of serially-connected light emitting diode (LED) sets and configured to operate responsive to bias state transitions of one of the LED sets to direct current away from another one of the LED sets; and a current limiting circuit coupled in series with the string and being configured to conduct current responsive to a forward biasing of all of the LED sets; wherein the current limiting circuit is comprised of a passive electrical component without including any active electrical component. 25. The apparatus of claim 24, wherein the current diversion circuit is configured to conduct current via a first one of the LED sets and is configured to be turned off responsive to current through a second one of the LED sets. 26. The apparatus of claim 25, wherein the first one of the LED sets comprises more LEDs than other ones of the LED sets. 27. The apparatus of claim 25, wherein the current diversion circuit is configured to conduct current responsive to a forward biasing of the first one of the LED sets. 28. The apparatus of claim 24, wherein the current diversion circuit is configured to turn off responsive to a voltage at a node of the string. 29. The apparatus of claim 28, wherein the current diversion circuit is configured to turn off responsive to a voltage at a terminal of a resistor coupled in series with the string. 30. The apparatus of claim 24, wherein the current diversion circuit comprises a bipolar transistor providing a controllable current path between a node of the string and a terminal of a power supply and wherein current through a resistor coupled in series with the string varies an emitter bias of the bipolar transistor. 31. 
The apparatus of claim 24, wherein the current diversion circuit comprises: a transistor configured to provide a controllable current path between a node of the string and a terminal of a power supply; and a turn-off circuit coupled to a node of the string and to a control terminal of the transistor and configured to control the current path responsive to a control input. 32. The apparatus of claim 31, wherein current through one of the LED sets provides the control input. 33. The apparatus of claim 24, further comprising a rectifier circuit configured to be coupled to a power source and having an output configured to be coupled to the string of LED sets. 34. The apparatus of claim 24, wherein the current diversion circuit comprises a plurality of current diversion circuits, respective ones of which are coupled to respective nodes of the string and configured to operate responsive to bias state transitions of respective ones of the LED sets; wherein a number of the plurality of current diversion circuits is less than a number of the LED sets.
A lighting apparatus includes a string of light emitting diode (LED) sets coupled in series where each set includes at least one LED. A current diversion circuit is coupled to the string and is configured to operate responsive to a bias state transition of one of the LED sets to direct current away from another one of the LED sets. A current limiting circuit is coupled in series with the string and is configured to conduct current responsive to a forward biasing of all of the LED sets. The current limiting circuit includes only passive electrical component(s). 1. A lighting apparatus, comprising: a string of light emitting diode (LED) sets coupled in series, each LED set comprising at least one LED; a current diversion circuit coupled to the string and configured to operate responsive to a bias state transition of one of the LED sets to direct current away from another one of the LED sets; and a current limiting circuit coupled in series with the string and being configured to conduct current responsive to a forward biasing of all of the LED sets; wherein the current limiting circuit is comprised of a passive electrical component without including any active electrical component. 2. The lighting apparatus of claim 1, wherein the current diversion circuit is configured to conduct current via a first one of the LED sets and is configured to be turned off responsive to current through a second one of the LED sets. 3. The lighting apparatus of claim 2, wherein the current diversion circuit is configured to conduct current responsive to a forward biasing of the first one of the LED sets. 4. The lighting apparatus of claim 2, wherein the first one of the LED sets comprises more LEDs than other ones of the LED sets. 5. The lighting apparatus of claim 1, wherein the current diversion circuit is configured to turn off responsive to a voltage at a node of the string. 6. 
The lighting apparatus of claim 5, further comprising a resistor coupled in series with the string and wherein the current diversion circuit is configured to turn off responsive to a voltage at a terminal of the resistor. 7. The lighting apparatus of claim 6, wherein the current diversion circuit comprises a bipolar transistor providing a controllable current path between a node of the string and a terminal of a power supply, and wherein current through the resistor varies an emitter bias of the bipolar transistor. 8. The lighting apparatus of claim 1, wherein the current diversion circuit comprises: a transistor providing a controllable current path between a node of the string and a terminal of a power supply; and a turn-off circuit coupled to a node of the string and to a control terminal of the transistor and configured to control the current path responsive to a control input. 9. The lighting apparatus of claim 8, wherein current through one of the LED sets provides the control input. 10. The lighting apparatus of claim 8, wherein the transistor comprises a bipolar transistor and wherein the turn-off circuit is configured to vary a base current of the bipolar transistor responsive to the control input. 11. The lighting apparatus of claim 1, wherein the bias states of the LED sets transition responsive to a power supply having a varying voltage such that the diversion circuit is activated in response to increases and decreases in the varying voltage. 12. The lighting apparatus of claim 1, wherein the current diversion circuit comprises a plurality of current diversion circuits, respective ones of which are coupled to respective nodes of the string and configured to operate responsive to bias state transitions of respective ones of the LED sets; wherein a number of the plurality of current diversion circuits is less than a number of the LED sets. 13. 
A lighting apparatus comprising: a rectifier circuit configured to be coupled to an alternating current (ac) power source and to generate a rectified ac voltage; a string of serially-connected LED sets, each set comprising at least one LED; a current diversion circuit coupled to the string and configured to be selectively enabled and disabled responsive to bias state transitions of the LED sets as a magnitude of the rectified ac voltage varies; and a current limiting circuit coupled in series with the string and being configured to conduct current responsive to a forward biasing of all of the LED sets; wherein the current limiting circuit is comprised of a passive electrical component without including any active electrical component. 14. The lighting apparatus of claim 13, wherein the current diversion circuit is configured to conduct current via a first one of the LED sets and is configured to be turned off responsive to current through a second one of the LED sets. 15. The lighting apparatus of claim 14, wherein the first one of the LED sets comprises more LEDs than other ones of the LED sets. 16. The lighting apparatus of claim 14, wherein the current diversion circuit is configured to conduct current responsive to a forward biasing of the first one of the LED sets. 17. The lighting apparatus of claim 13, wherein the current diversion circuit is configured to turn off responsive to a voltage at a node of the string. 18. The lighting apparatus of claim 17, further comprising a resistor coupled in series with the string and wherein the current diversion circuit is configured to turn off responsive to a voltage at a terminal of the resistor. 19. 
The lighting apparatus of claim 13, further comprising a resistor coupled in series with the string, wherein the current diversion circuit comprises a bipolar transistor providing a controllable current path between a node of the string and a terminal of the rectifier circuit and wherein current through the resistor varies an emitter bias of the bipolar transistor. 20. The lighting apparatus of claim 13, wherein the current diversion circuit comprises: a transistor providing a controllable current path between a node of the string and a terminal of the rectifier circuit; and a turn-off circuit coupled to a node of the string and to a control terminal of the transistor and configured to control the current path responsive to a control input. 21. The lighting apparatus of claim 20, wherein a current through one of the LED sets provides the control input. 22. The lighting apparatus of claim 20, wherein the transistor comprises a bipolar transistor and wherein the turn-off circuit is configured to vary a base current of the bipolar transistor responsive to the control input. 23. The lighting apparatus of claim 13, wherein the current diversion circuit comprises a plurality of current diversion circuits, respective ones of which are coupled to respective nodes of the string and configured to operate responsive to bias state transitions of respective ones of the LED sets; wherein a number of the plurality of current diversion circuits is less than a number of the LED sets. 24. 
An apparatus comprising: a current diversion circuit coupled to a string of serially-connected light emitting diode (LED) sets and configured to operate responsive to bias state transitions of one of the LED sets to direct current away from another one of the LED sets; and a current limiting circuit coupled in series with the string and being configured to conduct current responsive to a forward biasing of all of the LED sets; wherein the current limiting circuit is comprised of a passive electrical component without including any active electrical component. 25. The apparatus of claim 24, wherein the current diversion circuit is configured to conduct current via a first one of the LED sets and is configured to be turned off responsive to current through a second one of the LED sets. 26. The apparatus of claim 25, wherein the first one of the LED sets comprises more LEDs than other ones of the LED sets. 27. The apparatus of claim 25, wherein the current diversion circuit is configured to conduct current responsive to a forward biasing of the first one of the LED sets. 28. The apparatus of claim 24, wherein the current diversion circuit is configured to turn off responsive to a voltage at a node of the string. 29. The apparatus of claim 28, wherein the current diversion circuit is configured to turn off responsive to a voltage at a terminal of a resistor coupled in series with the string. 30. The apparatus of claim 24, wherein the current diversion circuit comprises a bipolar transistor providing a controllable current path between a node of the string and a terminal of a power supply and wherein current through a resistor coupled in series with the string varies an emitter bias of the bipolar transistor. 31. 
The apparatus of claim 24, wherein the current diversion circuit comprises: a transistor configured to provide a controllable current path between a node of the string and a terminal of a power supply; and a turn-off circuit coupled to a node of the string and to a control terminal of the transistor and configured to control the current path responsive to a control input. 32. The apparatus of claim 31, wherein current through one of the LED sets provides the control input. 33. The apparatus of claim 24, further comprising a rectifier circuit configured to be coupled to a power source and having an output configured to be coupled to the string of LED sets. 34. The apparatus of claim 24, wherein the current diversion circuit comprises a plurality of current diversion circuits, respective ones of which are coupled to respective nodes of the string and configured to operate responsive to bias state transitions of respective ones of the LED sets; wherein a number of the plurality of current diversion circuits is less than a number of the LED sets.
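The behavior these LED-driver claims recite — a diversion path that conducts while only part of the string is forward-biased, and a purely passive limiter that conducts only once every set is forward-biased — can be sketched numerically. A minimal Python sketch: the forward voltages, resistor value, and function names are assumptions for illustration, not values from the claims.

```python
# Illustrative model of the claimed string behavior. As the rectified AC
# voltage rises, successive LED sets become forward-biased; a diversion
# circuit is active while only some sets conduct, and a passive resistor
# limits current once all sets are forward-biased. Component values are
# assumed, not taken from the patent.

VF_PER_SET = [6.0, 3.0, 3.0]   # forward voltage of each LED set (volts, assumed)
R_LIMIT = 100.0                # passive current-limiting resistor (ohms, assumed)

def string_state(v_rect):
    """Return (sets_conducting, diversion_on, current_amps) at a rectified voltage."""
    conducting = 0
    total_vf = 0.0
    for vf in VF_PER_SET:
        if v_rect >= total_vf + vf:
            conducting += 1
            total_vf += vf
        else:
            break
    # Diversion path carries current while some, but not all, sets conduct.
    diversion_on = 0 < conducting < len(VF_PER_SET)
    # The passive limiter conducts only when every set is forward-biased.
    current = (v_rect - total_vf) / R_LIMIT if conducting == len(VF_PER_SET) else 0.0
    return conducting, diversion_on, current

for v in (3.0, 7.0, 10.0, 14.0):
    print(v, string_state(v))
```

Sweeping the voltage shows the bias-state transitions the claims key on: the diversion path turns on as the first set conducts and turns off once the full string, and therefore the passive limiter, carries the current.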
2,600
10,340
10,340
15,083,138
2,624
A mobile communication device is equipped with a dynamic local directory into which contact information from a local telephone directory may be downloaded on a temporary basis. The local telephone directory resides on a local communication network and may be accessed by the mobile communication device. The downloaded data is purged automatically after preset limits are reached. The dynamic local telephone directory on the mobile communication device is continuously changing depending on the location.
1. A method comprising: using a mobile communication device, establishing a communication link with a local network server located at a particular locale, said communication link initiated by said local network server when said mobile communication device enters a proximity of said particular locale, said local network server having a directory of local contact information pertaining to said particular locale; authenticating the mobile communication device with the local network server and setting access limits; receiving a portion of said directory of local contact information from said local network server into a dynamic local directory on the mobile communication device, said portion of said directory of local contact information pertaining to said particular locale; and removing at least one local contact information from the dynamic local directory, when the access limits are exceeded. 2. The method of claim 1 wherein the removal is automatic when the access limits are exceeded. 3. The method of claim 1 wherein the access limits are set based on an authorization and a proximity of the mobile communication device and the at least one local contact information is removed when the mobile communication device is moved beyond the proximity limit. 4. The method of claim 1, wherein said receiving and removing are continuously performed while said mobile communication device is within proximity at said particular locale such that said dynamic local directory contains a different portion of said directory of local contact information depending on a location of said mobile communication device within said particular locale. 5. The method of claim 1 further comprising: receiving data and codes that enable the mobile communication device to operate as an internal phone of a business facility. 6. The method according to claim 1 wherein the communication link is a secondary radio link. 7. 
The method according to claim 6 wherein the secondary radio link is a wireless local area network, Bluetooth, ultra-wideband, Wi-Fi or WiMAX. 8. A mobile communication device comprising: a processor for executing the functions of the mobile communication device; a transceiver for establishing a communication link with a local network server located at a particular locale, said local communication link being initiated by said local network server when said mobile communication device enters a proximity of said particular locale, said local network server having a directory of local contact information pertaining to said particular locale; a memory for storing a dynamic local directory of contact information, wherein the processor causes the transceiver to: establish a communication link with the local communication network when initiated by said local network server, authenticate said mobile communication device with said local network server and set access limits; receive a portion of said directory of local contact information from said local network server into a dynamic local directory on the mobile communication device, said portion of said directory of local contact information pertaining to said particular locale; and remove at least one local contact information from the dynamic local directory, when the access limits are exceeded. 9. The mobile communications device of claim 8, wherein the removal is automatic when the access limits are exceeded. 10. The mobile communications device of claim 8, wherein the access limits are set based on an authorization and a proximity of the mobile communication device and the at least one local contact information is removed when the mobile communication device is moved beyond the proximity limit. 11. 
The mobile communications device of claim 8, wherein said receiving and removing are continuously performed while said mobile communication device is within proximity at said particular locale such that said dynamic local directory contains a different portion of said directory of local contact information depending on a location of said mobile communication device within said particular locale. 12. The mobile communications device of claim 8, wherein the processor further causes the transceiver to: receive data and codes that enable the mobile communication device to operate as an internal phone of a business facility. 13. The mobile communications device of claim 8, wherein the communication link is a secondary radio link. 14. The mobile communications device of claim 13, wherein the secondary radio link is a wireless local area network, Bluetooth, ultra-wideband, Wi-Fi or WiMAX. 15. A non-transitory computer-readable storage device storing instructions that are executable at a mobile communications device to perform operations comprising: establishing, by said mobile communications device, a communication link with a local network server located at a particular locale, said communication link initiated by said local network server when said mobile communication device enters a proximity of said particular locale, said local network server having a directory of local contact information pertaining to said particular locale; authenticating said mobile communication device with the local network server and setting access limits; receiving a portion of said directory of local contact information from said local network server into a dynamic local directory on the mobile communication device, said portion of said directory of local contact information pertaining to said particular locale; and removing at least one local contact information from the dynamic local directory, when the access limits are exceeded. 16. 
The non-transitory computer-readable storage device of claim 15, wherein the removal is automatic when the access limits are exceeded. 17. The non-transitory computer-readable storage device of claim 15, wherein the access limits are set based on an authorization and a proximity of the mobile communication device and the at least one local contact information is removed when the mobile communication device is moved beyond the proximity limit. 18. The non-transitory computer-readable storage device of claim 15, wherein said receiving and removing are continuously performed while said mobile communication device is within proximity at said particular locale such that said dynamic local directory contains a different portion of said directory of local contact information depending on a location of said mobile communication device within said particular locale. 19. The non-transitory computer-readable storage device of claim 15, wherein the instructions are executable at the mobile communications device to perform operations comprising: receiving data and codes that enable the mobile communication device to operate as an internal phone of a business facility. 20. The non-transitory computer-readable storage device according to claim 15, wherein the communication link is a wireless local area network, Bluetooth, ultra-wideband, Wi-Fi or WiMAX.
A mobile communication device is equipped with a dynamic local directory into which contact information from a local telephone directory may be downloaded on a temporary basis. The local telephone directory resides on a local communication network and may be accessed by the mobile communication device. The downloaded data is purged automatically after preset limits are reached. The dynamic local telephone directory on the mobile communication device is continuously changing depending on the location. 1. A method comprising: using a mobile communication device, establishing a communication link with a local network server located at a particular locale, said communication link initiated by said local network server when said mobile communication device enters a proximity of said particular locale, said local network server having a directory of local contact information pertaining to said particular locale; authenticating the mobile communication device with the local network server and setting access limits; receiving a portion of said directory of local contact information from said local network server into a dynamic local directory on the mobile communication device, said portion of said directory of local contact information pertaining to said particular locale; and removing at least one local contact information from the dynamic local directory, when the access limits are exceeded. 2. The method of claim 1 wherein the removal is automatic when the access limits are exceeded. 3. The method of claim 1 wherein the access limits are set based on an authorization and a proximity of the mobile communication device and the at least one local contact information is removed when the mobile communication device is moved beyond the proximity limit. 4. 
The method of claim 1, wherein said receiving and removing are continuously performed while said mobile communication device is within proximity at said particular locale such that said dynamic local directory contains a different portion of said directory of local contact information depending on a location of said mobile communication device within said particular locale. 5. The method of claim 1 further comprising: receiving data and codes that enable the mobile communication device to operate as an internal phone of a business facility. 6. The method according to claim 1 wherein the communication link is a secondary radio link. 7. The method according to claim 6 wherein the secondary radio link is a wireless local area network, Bluetooth, ultra-wideband, Wi-Fi or WiMAX. 8. A mobile communication device comprising: a processor for executing the functions of the mobile communication device; a transceiver for establishing a communication link with a local network server located at a particular locale, said local communication link being initiated by said local network server when said mobile communication device enters a proximity of said particular locale, said local network server having a directory of local contact information pertaining to said particular locale; a memory for storing a dynamic local directory of contact information, wherein the processor causes the transceiver to: establish a communication link with the local communication network when initiated by said local network server, authenticate said mobile communication device with said local network server and set access limits; receive a portion of said directory of local contact information from said local network server into a dynamic local directory on the mobile communication device, said portion of said directory of local contact information pertaining to said particular locale; and remove at least one local contact information from the dynamic local directory, when the access 
limits are exceeded. 9. The mobile communications device of claim 8, wherein the removal is automatic when the access limits are exceeded. 10. The mobile communications device of claim 8, wherein the access limits are set based on an authorization and a proximity of the mobile communication device and the at least one local contact information is removed when the mobile communication device is moved beyond the proximity limit. 11. The mobile communications device of claim 8, wherein said receiving and removing are continuously performed while said mobile communication device is within proximity at said particular locale such that said dynamic local directory contains a different portion of said directory of local contact information depending on a location of said mobile communication device within said particular locale. 12. The mobile communications device of claim 8, wherein the processor further causes the transceiver to: receive data and codes that enable the mobile communication device to operate as an internal phone of a business facility. 13. The mobile communications device of claim 8, wherein the communication link is a secondary radio link. 14. The mobile communications device of claim 13, wherein the secondary radio link is a wireless local area network, Bluetooth, ultra-wideband, Wi-Fi or WiMAX. 15. 
A non-transitory computer-readable storage device storing instructions that are executable at a mobile communications device to perform operations comprising: establishing, by said mobile communications device, a communication link with a local network server located at a particular locale, said communication link initiated by said local network server when said mobile communication device enters a proximity of said particular locale, said local network server having a directory of local contact information pertaining to said particular locale; authenticating said mobile communication device with the local network server and setting access limits; receiving a portion of said directory of local contact information from said local network server into a dynamic local directory on the mobile communication device, said portion of said directory of local contact information pertaining to said particular locale; and removing at least one local contact information from the dynamic local directory, when the access limits are exceeded. 16. The non-transitory computer-readable storage device of claim 15, wherein the removal is automatic when the access limits are exceeded. 17. The non-transitory computer-readable storage device of claim 15, wherein the access limits are set based on an authorization and a proximity of the mobile communication device and the at least one local contact information is removed when the mobile communication device is moved beyond the proximity limit. 18. The non-transitory computer-readable storage device of claim 15, wherein said receiving and removing are continuously performed while said mobile communication device is within proximity at said particular locale such that said dynamic local directory contains a different portion of said directory of local contact information depending on a location of said mobile communication device within said particular locale. 19. 
The non-transitory computer-readable storage device of claim 15, wherein the instructions are executable at the mobile communications device to perform operations comprising: receiving data and codes that enable the mobile communication device to operate as an internal phone of a business facility. 20. The non-transitory computer-readable storage device according to claim 15, wherein the communication link is a wireless local area network, Bluetooth, ultra-wideband, Wi-Fi or WiMAX.
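The purge behavior recited in these claims — downloaded local contacts removed automatically once an access limit such as a proximity radius is exceeded — can be sketched as follows. The class name, method names, and the distance-based limit are illustrative assumptions, not terms from the claims.

```python
# Illustrative sketch of a dynamic local directory on a mobile device:
# it accepts a portion of a local directory downloaded from a local
# network server and purges the entries automatically when an access
# limit (here, a proximity radius set at authentication) is exceeded.

class DynamicLocalDirectory:
    def __init__(self, proximity_limit_m):
        self.proximity_limit_m = proximity_limit_m  # access limit set at authentication
        self.entries = {}                           # contact name -> extension

    def receive(self, contacts):
        """Download a portion of the local directory from the network server."""
        self.entries.update(contacts)

    def update_position(self, distance_from_locale_m):
        """Automatically remove downloaded contacts when the limit is exceeded."""
        if distance_from_locale_m > self.proximity_limit_m:
            self.entries.clear()

directory = DynamicLocalDirectory(proximity_limit_m=200)
directory.receive({"Front Desk": "x1001", "Concierge": "x1002"})
directory.update_position(50)    # still inside the locale: entries kept
directory.update_position(500)   # beyond the proximity limit: entries purged
print(directory.entries)
```

Calling `update_position` with a distance inside the limit leaves the downloaded portion intact; moving beyond the limit clears it, matching the automatic-removal limitation of claims 2, 3, 16 and 17.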
2,600
10,341
10,341
15,462,510
2,642
Systems and methods for enabling and providing uplink based mobility procedures are disclosed. Embodiments provide uplink based mobility procedures in which one or more physical channels typically used to facilitate uplink based mobility are not utilized. For example, an uplink based mobility process of embodiments utilizes a UL-based mobility specific ID and/or synchronization signals to provide information for decoding signals, avoiding the use of a physical cell identifier channel (PCICH). Embodiments of an uplink based mobility process utilize a physical channel, such as a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH), to provide uplink mobility signal acknowledgements and paging indications, avoiding the use of a physical keep alive channel (PKACH).
1. A method for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the method comprising: obtaining, by the UE, uplink-based (UL-based) mobility specific identification (ID) information in association with an uplink mobility procedure facilitating mobility of the UE in the wireless network; and utilizing, by the UE, the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 2. The method of claim 1, further comprising: utilizing, by the UE, the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 3. The method of claim 1, further comprising: obtaining, by the UE, the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure. 4. The method of claim 3, wherein the UL-based mobility specific ID information is received by the UE in a mobility configuration message of the uplink mobility procedure when the UE transits from downlink based mobility to uplink based mobility. 5. The method of claim 1, further comprising: receiving, by the UE, an uplink mobility reference signal acknowledgment using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) transmitted by the UE. 6. 
The method of claim 5, further comprising: utilizing at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 7. The method of claim 5, further comprising: receiving, by the UE, a paging indication carried in the alternative physical channel; and decoding, by the UE upon detecting the paging indication carried in the alternative physical channel, a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network using the UL-based mobility specific ID information. 8. The method of claim 5, wherein the acknowledgment is disposed in a dedicated search space of the PSFICH or PDCCH. 9. An apparatus for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the apparatus comprising: at least one processor; and a memory coupled to the at least one processor, wherein the at least one processor is configured: to obtain uplink-based (UL-based) mobility specific identification (ID) information in association with an uplink mobility procedure facilitating mobility of the UE in the wireless network; and to utilize the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 10. The apparatus of claim 9, wherein the at least one processor is further configured: to utilize the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 11. 
The apparatus of claim 9, wherein the at least one processor is further configured: to obtain the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure. 12. The apparatus of claim 11, wherein the UL-based mobility specific ID information is received by the UE in a mobility configuration message of the uplink mobility procedure when the UE transits from downlink based mobility to uplink based mobility. 13. The apparatus of claim 9, wherein the at least one processor is further configured: to receive an uplink mobility reference signal acknowledgment using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) transmitted by the UE. 14. The apparatus of claim 13, wherein the at least one processor is further configured: to utilize at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 15. The apparatus of claim 13, wherein the at least one processor is further configured: to receive a paging indication carried in the alternative physical channel; and to decode a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network using the UL-based mobility specific ID information upon detecting the paging indication carried in the alternative physical channel. 16. The apparatus of claim 13, wherein the acknowledgment is disposed in a dedicated search space of the PSFICH or PDCCH. 17. 
A method for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the method comprising: transmitting, by the UE, a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) in an uplink mobility procedure implemented in the wireless network with respect to the UE; and receiving, by the UE, an uplink mobility reference signal acknowledgment transmitted by an access node of the wireless network in operation of the uplink mobility procedure using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to the PUMICH signal or PUMRS transmitted by the UE. 18. The method of claim 17, further comprising: utilizing at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 19. The method of claim 18, wherein the acknowledgment is provided in a dedicated search space of the PSFICH or PDCCH. 20. The method of claim 17, further comprising: receiving, by the UE, a paging indication using the alternative physical channel. 21. The method of claim 17, further comprising: obtaining, by the UE, uplink-based (UL-based) mobility specific identification (ID) information in the uplink mobility procedure implemented in the wireless network with respect to the UE; and utilizing, by the UE, the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 22. 
The method of claim 21, further comprising: utilizing, by the UE, the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by the access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 23. The method of claim 21, further comprising: obtaining, by the UE, the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure. 24. An apparatus for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the apparatus comprising: at least one processor; and a memory coupled to the at least one processor, wherein the at least one processor is configured: to transmit a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) in an uplink mobility procedure implemented in the wireless network with respect to the UE; and to receive an uplink mobility reference signal acknowledgment transmitted by an access node of the wireless network in operation of the uplink mobility procedure using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to the transmitted PUMICH signal or PUMRS. 25. The apparatus of claim 24, wherein the at least one processor is further configured: to utilize at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 26. The apparatus of claim 25, wherein the acknowledgment is provided in a dedicated search space of the PSFICH or PDCCH. 27. 
The apparatus of claim 24, wherein the at least one processor is further configured: to receive a paging indication using the alternative physical channel. 28. The apparatus of claim 24, wherein the at least one processor is further configured: to obtain uplink-based (UL-based) mobility specific identification (ID) information in the uplink mobility procedure implemented in the wireless network with respect to the UE; and to utilize the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 29. The apparatus of claim 28, wherein the at least one processor is further configured: to utilize the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by the access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 30. The apparatus of claim 28, wherein the at least one processor is further configured: to obtain the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure.
Systems and methods for enabling and providing uplink based mobility procedures are disclosed. Embodiments provide uplink based mobility procedures in which one or more physical channels typically used to facilitate uplink based mobility are not utilized. For example, an uplink based mobility process of embodiments utilizes a UL-based mobility specific ID and/or synchronization signals to provide information for decoding signals, avoiding the use of a physical cell identifier channel (PCICH). Embodiments of an uplink based mobility process utilize a physical channel, such as a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH), to provide uplink mobility signal acknowledgments and paging indications, avoiding the use of a physical keep alive channel (PKACH). 1. A method for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the method comprising: obtaining, by the UE, uplink-based (UL-based) mobility specific identification (ID) information in association with an uplink mobility procedure facilitating mobility of the UE in the wireless network; and utilizing, by the UE, the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 2. The method of claim 1, further comprising: utilizing, by the UE, the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 3. 
The method of claim 1, further comprising: obtaining, by the UE, the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure. 4. The method of claim 3, wherein the UL-based mobility specific ID information is received by the UE in a mobility configuration message of the uplink mobility procedure when the UE transits from downlink based mobility to uplink based mobility. 5. The method of claim 1, further comprising: receiving, by the UE, an uplink mobility reference signal acknowledgment using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) transmitted by the UE. 6. The method of claim 5, further comprising: utilizing at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 7. The method of claim 5, further comprising: receiving, by the UE, a paging indication carried in the alternative physical channel; and decoding, by the UE upon detecting the paging indication carried in the alternative physical channel, a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network using the UL-based mobility specific ID information. 8. The method of claim 5, wherein the acknowledgment is disposed in a dedicated search space of the PSFICH or PDCCH. 9. 
An apparatus for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the apparatus comprising: at least one processor; and a memory coupled to the at least one processor, wherein the at least one processor is configured: to obtain uplink-based (UL-based) mobility specific identification (ID) information in association with an uplink mobility procedure facilitating mobility of the UE in the wireless network; and to utilize the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 10. The apparatus of claim 9, wherein the at least one processor is further configured: to utilize the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 11. The apparatus of claim 9, wherein the at least one processor is further configured: to obtain the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure. 12. The apparatus of claim 11, wherein the UL-based mobility specific ID information is received by the UE in a mobility configuration message of the uplink mobility procedure when the UE transits from downlink based mobility to uplink based mobility. 13. 
The apparatus of claim 9, wherein the at least one processor is further configured: to receive an uplink mobility reference signal acknowledgment using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) transmitted by the UE. 14. The apparatus of claim 13, wherein the at least one processor is further configured: to utilize at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 15. The apparatus of claim 13, wherein the at least one processor is further configured: to receive a paging indication carried in the alternative physical channel; and to decode a physical downlink shared channel (PDSCH) transmitted by an access node of the wireless network using the UL-based mobility specific ID information upon detecting the paging indication carried in the alternative physical channel. 16. The apparatus of claim 13, wherein the acknowledgment is disposed in a dedicated search space of the PSFICH or PDCCH. 17. 
A method for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the method comprising: transmitting, by the UE, a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) in an uplink mobility procedure implemented in the wireless network with respect to the UE; and receiving, by the UE, an uplink mobility reference signal acknowledgment transmitted by an access node of the wireless network in operation of the uplink mobility procedure using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to the PUMICH signal or PUMRS transmitted by the UE. 18. The method of claim 17, further comprising: utilizing at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 19. The method of claim 18, wherein the acknowledgment is provided in a dedicated search space of the PSFICH or PDCCH. 20. The method of claim 17, further comprising: receiving, by the UE, a paging indication using the alternative physical channel. 21. The method of claim 17, further comprising: obtaining, by the UE, uplink-based (UL-based) mobility specific identification (ID) information in the uplink mobility procedure implemented in the wireless network with respect to the UE; and utilizing, by the UE, the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 22. 
The method of claim 21, further comprising: utilizing, by the UE, the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by the access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 23. The method of claim 21, further comprising: obtaining, by the UE, the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure. 24. An apparatus for providing uplink based mobility operation of user equipment (UE) operable in a wireless network, the apparatus comprising: at least one processor; and a memory coupled to the at least one processor, wherein the at least one processor is configured: to transmit a physical uplink measurement indication channel (PUMICH) signal or physical uplink measurement reference signal (PUMRS) in an uplink mobility procedure implemented in the wireless network with respect to the UE; and to receive an uplink mobility reference signal acknowledgment transmitted by an access node of the wireless network in operation of the uplink mobility procedure using an alternative physical channel to a physical keep alive channel (PKACH), wherein the uplink mobility reference signal acknowledgment comprises an acknowledgment to the transmitted PUMICH signal or PUMRS. 25. The apparatus of claim 24, wherein the at least one processor is further configured: to utilize at least one of a physical slot format indication channel (PSFICH) or a physical downlink control channel (PDCCH) as the alternative physical channel to the PKACH. 26. The apparatus of claim 25, wherein the acknowledgment is provided in a dedicated search space of the PSFICH or PDCCH. 27. 
The apparatus of claim 24, wherein the at least one processor is further configured: to receive a paging indication using the alternative physical channel. 28. The apparatus of claim 24, wherein the at least one processor is further configured: to obtain uplink-based (UL-based) mobility specific identification (ID) information in the uplink mobility procedure implemented in the wireless network with respect to the UE; and to utilize the UL-based mobility specific ID information for decoding signals transmitted to the UE in operation of the uplink mobility procedure. 29. The apparatus of claim 28, wherein the at least one processor is further configured: to utilize the UL-based mobility specific ID information to provide information for decoding at least one of a physical downlink control channel (PDCCH) or a physical downlink shared channel (PDSCH) transmitted by the access node of the wireless network in operation of the uplink mobility procedure, wherein signals of the at least one of the PDCCH or PDSCH decoded utilizing the UL-based mobility specific ID comprise random access responses (RARs) or paging signals. 30. The apparatus of claim 28, wherein the at least one processor is further configured: to obtain the UL-based mobility specific ID information when the UE is configured for uplink based mobility operation of the uplink mobility procedure.
2,600
10,342
10,342
15,949,263
2,626
An electronic device comprising: a user interface having a display for displaying a standby screen when the device is in an idle state and a user input device, wherein the user interface provides a menu system, for re-configuring the standby screen, that is navigated using the user input device.
1. An apparatus, comprising: a user interface comprising a touch screen and a display for displaying a first standby screen, the first standby screen comprising a plurality of graphical items when the apparatus is in an idle state, wherein the user interface provides a menu, for re-configuring the first standby screen, the menu being navigable using the touch screen; at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: allow a user to select an option in the menu to cause the display to display a representation of a second standby screen, different from the first standby screen, which includes a plurality of zones, wherein the zones in the representation of the second standby screen are associated with graphical items of the first standby screen; and allow the user to move, in the representation of the second standby screen, a zone associated with a graphical item of the first standby screen, which causes the graphical item of the first standby screen to move. 2. The apparatus of claim 1, wherein the menu enables the user to resize the graphical item of the first standby screen. 3. The apparatus of claim 1, wherein the menu enables the user to select and move one zone at a time in the representation of the second standby screen. 4. The apparatus of claim 3, wherein the menu provides an option for modifying an attribute of the graphical item, after selection of the zone associated with the graphical item in the representation of the second standby screen. 5. The apparatus of claim 4, wherein the attribute for modification is selected from a list of possible attributes in the menu. 6. The apparatus of claim 4, wherein modifying an attribute modifies the value associated with the attribute, the value being selected from a predetermined list of possible attribute values in the menu. 7. 
The apparatus of claim 1, wherein the menu comprises an option for deleting a graphical item from the first standby screen. 8. The apparatus of claim 1, wherein the menu comprises an option for adding a graphical item to the first standby screen. 9. The apparatus of claim 1, wherein the apparatus is a mobile cellular telephone. 10. A method, comprising: providing a menu, for re-configuring a first standby screen of a user interface, that is navigable using a touch screen and which enables a user to: select an option in the menu to cause a display to display a representation of a second standby screen, different from the first standby screen, which includes a plurality of zones, wherein the zones in the representation of the second standby screen are associated with graphical items of the first standby screen; and move, in the representation of the second standby screen, a zone associated with a graphical item of the first standby screen, which causes the graphical item of the first standby screen to move. 11. The method of claim 10, wherein the menu enables the user to resize the graphical item of the first standby screen. 12. The method of claim 10, wherein the menu enables the user to select and move one zone at a time in the representation of the second standby screen. 13. The method of claim 12, wherein the menu provides an option for modifying an attribute of a graphical item, after selection of the zone associated with the graphical item in the representation of the second standby screen. 14. The method of claim 10, wherein the menu comprises an option for deleting a graphical item from the standby screen. 15. The method of claim 10, wherein the menu comprises an option for adding a graphical item to the standby screen. 16. 
A computer program for re-configuring a first standby screen of an apparatus, the computer program comprising: computer programming instructions that provide a menu, for re-configuring the first standby screen, that is navigable using a touch screen and which enables a user to: select an option in the menu to cause a display to display a representation of a second standby screen, different from the first standby screen, which includes a plurality of zones, wherein the zones in the representation of the second standby screen are associated with graphical items of the first standby screen; and move, in the representation of the second standby screen, a zone associated with a graphical item of the first standby screen, which causes the graphical item of the first standby screen to move. 17. The computer program of claim 16, wherein the menu enables the user to resize the graphical item of the first standby screen. 18. The computer program of claim 16, wherein the menu enables the user to select and move one zone at a time in the representation of the second standby screen. 19. The computer program of claim 18, wherein the menu provides an option for modifying an attribute of a graphical item, after selection of the zone associated with the graphical item in the representation of the second standby screen. 20. The computer program of claim 16, wherein the menu comprises at least one of a first option for deleting a graphical item from the first standby screen and a second option for adding a graphical item to the first standby screen.
An electronic device comprising: a user interface having a display for displaying a standby screen when the device is in an idle state and a user input device, wherein the user interface provides a menu system, for re-configuring the standby screen, that is navigated using the user input device.1. An apparatus, comprising: a user interface comprising a touch screen and a display for displaying a first standby screen, the first standby screen comprising a plurality of graphical items when the apparatus is in an idle state, wherein the user interface provides a menu, for re-configuring the first standby screen, the menu being navigable using the touch screen; at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: allow a user to select an option in the menu to cause the display to display a representation of a second standby screen, different from the first standby screen, which includes a plurality of zones, wherein the zones in the representation of the second standby screen are associated with graphical items of the first standby screen; and allow the user to move, in the representation of the second standby screen, a zone associated with a graphical item of the first standby screen, which causes the graphical item of the first standby screen to move. 2. The apparatus of claim 1, wherein the menu enables the user to resize the graphical item of the first standby screen. 3. The apparatus of claim 1, wherein the menu enables the user to select and move one zone at a time in the representation of the second standby screen. 4. The apparatus of claim 3, wherein the menu provides an option for modifying an attribute of the graphical item, after selection of the zone associated with the graphical item in the representation of the second standby screen. 5. 
The apparatus of claim 4, wherein the attribute for modification is selected from a list of possible attributes in the menu. 6. The apparatus of claim 4, wherein modifying an attribute modifies the value associated with the attribute, the value being selected from a predetermined list of possible attribute values in the menu. 7. The apparatus of claim 1, wherein the menu comprises an option for deleting a graphical item from the first standby screen. 8. The apparatus of claim 1, wherein the menu comprises an option for adding a graphical item to the first standby screen. 9. The apparatus of claim 1, wherein the apparatus is a mobile cellular telephone. 10. A method, comprising: providing a menu, for re-configuring a first standby screen of a user interface, that is navigable using a touch screen and which enables a user to: select an option in the menu to cause a display to display a representation of a second standby screen, different from the first standby screen, which includes a plurality of zones, wherein the zones in the representation of the second standby screen are associated with graphical items of the first standby screen; and move, in the representation of the second standby screen, a zone associated with a graphical item of the first standby screen, which causes the graphical item of the first standby screen to move. 11. The method of claim 10, wherein the menu enables the user to resize the graphical item of the first standby screen. 12. The method of claim 10, wherein the menu enables the user to select and move one zone at a time in the representation of the second standby screen. 13. The method of claim 12, wherein the menu provides an option for modifying an attribute of a graphical item, after selection of the zone associated with the graphical item in the representation of the second standby screen. 14. The method of claim 10, wherein the menu comprises an option for deleting a graphical item from the standby screen. 15. 
The method of claim 10, wherein the menu comprises an option for adding a graphical item to the standby screen. 16. A computer program for re-configuring a first standby screen of an apparatus, the computer program comprising: computer programming instructions that provide a menu, for re-configuring the first standby screen, that is navigable using a touch screen and which enables a user to: select an option in the menu to cause a display to display a representation of a second standby screen, different from the first standby screen, which includes a plurality of zones, wherein the zones in the representation of the second standby screen are associated with graphical items of the first standby screen; and move, in the representation of the second standby screen, a zone associated with a graphical item of the first standby screen, which causes the graphical item of the first standby screen to move. 17. The computer program of claim 16, wherein the menu enables the user to resize the graphical item of the first standby screen. 18. The computer program of claim 16, wherein the menu enables the user to select and move one zone at a time in the representation of the second standby screen. 19. The computer program of claim 18, wherein the menu provides an option for modifying an attribute of a graphical item, after selection of the zone associated with the graphical item in the representation of the second standby screen. 20. The computer program of claim 16, wherein the menu comprises at least one of a first option for deleting a graphical item from the first standby screen and a second option for adding a graphical item to the first standby screen.
2,600
10,343
10,343
15,333,316
2,611
A method for preventing burn-in conditions on a display of an electronic device is disclosed. The electronic device acquires a position of, for example, a task bar being displayed on an OELD screen, extracts a color of a pixel located adjacent to the task bar, and generates an overlay window of a color based on the extracted color. The color of the overlay window is translucent and continuously changes from the extracted color to black with an increase of the distance from the pixel located adjacent to the task bar. The task bar is displayed on the OELD screen with the overlay window overlaying the task bar.
1. An electronic device comprising: a display for displaying images; a position acquisition unit for acquiring a position of a fixed image to be displayed on said display; a color extraction unit for extracting a color of a pixel located adjacent to said fixed image; a mask generation unit for generating a mask having a color based on said color of said pixel located adjacent to said fixed image; and an image display control unit for displaying said fixed image on said display with said mask overlaid on said fixed image. 2. The electronic device of claim 1, wherein said color of said mask is translucent. 3. The electronic device of claim 1, wherein said color of said mask continuously changes from said color of said pixel located adjacent to said fixed image to black with an increase of distance from said pixel located adjacent to said fixed image. 4. The electronic device of claim 1, wherein said color extraction unit uses an average value of colors of a plurality of adjacent pixels located adjacent to said fixed image. 5. The electronic device of claim 1, wherein said mask is overlaid on said fixed image when a cursor is not positioned on said fixed image, and said mask is not overlaid on said fixed image when said cursor is positioned on said fixed image. 6. The electronic device of claim 1, wherein said color of said mask is updated every time when said color of said pixel located adjacent to said fixed image switches. 7. The electronic device of claim 1, wherein said image display control unit repeatedly moves said fixed image by a predetermined movement amount at each time within at least one of a longitudinal direction or a lateral direction at a predetermined time interval. 8. 
A method comprising: acquiring a position of a fixed image displayed on said display; extracting a color of a pixel located adjacent to said fixed image; generating a mask having a color based on said color of said pixel located adjacent to said fixed image; and displaying said fixed image on said display with said mask overlaid on said fixed image. 9. The method of claim 8, wherein said color of said mask is translucent. 10. The method of claim 8, wherein said color of said mask continuously changes from said color of said pixel located adjacent to said fixed image to black with an increase of distance from said pixel located adjacent to said fixed image. 11. The method of claim 8, wherein said extracting further includes using an average value of colors of a plurality of adjacent pixels located adjacent to said fixed image. 12. The method of claim 8, wherein said mask is overlaid on said fixed image when a cursor is not positioned on said fixed image, and said mask is not overlaid on said fixed image when said cursor is positioned on said fixed image. 13. The method of claim 8, wherein said color of said mask is updated every time when said color of said pixel located adjacent to said fixed image switches. 14. The method of claim 8, wherein said displaying further includes repeatedly moving said fixed image by a predetermined movement amount at each time within at least one of a longitudinal direction or a lateral direction at a predetermined time interval. 15. An electronic device comprising: a display for displaying images; a position acquisition unit for acquiring a position of a fixed image displayed on said display; and an image display control unit for repeatedly moving said fixed image by a predetermined movement amount in at least one of a longitudinal direction or a lateral direction at a predetermined time interval.
A method for preventing burn-in conditions on a display of an electronic device is disclosed. The electronic device acquires a position of, for example, a task bar being displayed on an OELD screen, extracts a color of a pixel located adjacent to the task bar, and generates an overlay window of a color based on the extracted color. The color of the overlay window is translucent and continuously changes from the extracted color to black with an increase of the distance from the pixel located adjacent to the task bar. The task bar is displayed on the OELD screen with the overlay window overlaying the task bar.1. An electronic device comprising: a display for displaying images; a position acquisition unit for acquiring a position of a fixed image to be displayed on said display; a color extraction unit for extracting a color of a pixel located adjacent to said fixed image; a mask generation unit for generating a mask having a color based on said color of said pixel located adjacent to said fixed image; and an image display control unit for displaying said fixed image on said display with said mask overlaid on said fixed image. 2. The electronic device of claim 1, wherein said color of said mask is translucent. 3. The electronic device of claim 1, wherein said color of said mask continuously changes from said color of said pixel located adjacent to said fixed image to black with an increase of distance from said pixel located adjacent to said fixed image. 4. The electronic device of claim 1, wherein said color extraction unit uses an average value of colors of a plurality of adjacent pixels located adjacent to said fixed image. 5. The electronic device of claim 1, wherein said mask is overlaid on said fixed image when a cursor is not positioned on said fixed image, and said mask is not overlaid on said fixed image when said cursor is positioned on said fixed image. 6. 
The electronic device of claim 1, wherein said color of said mask is updated every time said color of said pixel located adjacent to said fixed image switches. 7. The electronic device of claim 1, wherein said image display control unit repeatedly moves said fixed image by a predetermined movement amount each time in at least one of a longitudinal direction or a lateral direction at a predetermined time interval. 8. A method comprising: acquiring a position of a fixed image displayed on said display; extracting a color of a pixel located adjacent to said fixed image; generating a mask having a color based on said color of said pixel located adjacent to said fixed image; and displaying said fixed image on said display with said mask overlaid on said fixed image. 9. The method of claim 8, wherein said color of said mask is translucent. 10. The method of claim 8, wherein said color of said mask continuously changes from said color of said pixel located adjacent to said fixed image to black with an increase of distance from said pixel located adjacent to said fixed image. 11. The method of claim 8, wherein said extracting further includes using an average value of colors of a plurality of adjacent pixels located adjacent to said fixed image. 12. The method of claim 8, wherein said mask is overlaid on said fixed image when a cursor is not positioned on said fixed image, and said mask is not overlaid on said fixed image when said cursor is positioned on said fixed image. 13. The method of claim 8, wherein said color of said mask is updated every time said color of said pixel located adjacent to said fixed image switches. 14. The method of claim 8, wherein said displaying further includes repeatedly moving said fixed image by a predetermined movement amount each time in at least one of a longitudinal direction or a lateral direction at a predetermined time interval. 15. 
An electronic device comprising: a display for displaying images; a position acquisition unit for acquiring a position of a fixed image displayed on said display; and an image display control unit for repeatedly moving said fixed image by a predetermined movement amount in at least one of a longitudinal direction or a lateral direction at a predetermined time interval.
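The gradient mask of claims 3 and 10 above (a translucent overlay that fades from the color sampled next to the fixed image down to black as distance increases) can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the function name and the `(r, g, b)` tuple representation are assumptions.

```python
def make_mask(adjacent_color, height):
    """Build one column of mask colors that fades linearly from the
    color extracted next to the fixed image (distance 0) to black
    (maximum distance), per claim 3. adjacent_color is (r, g, b)."""
    mask = []
    for d in range(height):
        # t goes from 0.0 at the adjacent pixel to 1.0 at the far edge
        t = d / max(height - 1, 1)
        mask.append(tuple(round(c * (1.0 - t)) for c in adjacent_color))
    return mask

rows = make_mask((200, 100, 50), 5)
# rows[0] is the sampled color, rows[-1] is black
```

Claim 4's variant would simply replace `adjacent_color` with the average of several adjacent pixels before calling the same fade.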
2,600
10,344
10,344
13,771,625
2,642
Disclosed herein are systems and methods for associating a device with a network, such as a wireless network. One method comprises storing network identifying information for a network in a database, receiving contact information from a computing device, determining whether the contact information is associated with the network identifying information in the database, and coupling the computing device with the network in response to determining that the contact information is associated with the network identifying information.
1. A method for associating a device with a network, comprising: storing network identifying information for a network in a database; receiving contact information from a computing device; determining whether the contact information is associated with the network identifying information in the database; and coupling the computing device with the network in response to determining that the contact information is associated with the network identifying information. 2. The method of claim 1, wherein the network identifying information comprises a persistent network identifier. 3. The method of claim 2, wherein the persistent network identifier comprises a media access control (MAC) address. 4. The method of claim 1, wherein the contact information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 5. The method of claim 1, wherein the network comprises a wireless network. 6. The method of claim 1, wherein the computing device comprises a mobile device. 7. The method of claim 1, wherein coupling the computing device with the network further comprises transmitting network access credentials to the computing device. 8. The method of claim 1, further comprising the steps of: associating user identifying information with the network identifying information; transmitting a request for permission to couple the device with the network using the user identifying information; and determining whether permission to couple the device with the network has been received prior to coupling the device with the network. 9. The method of claim 8, wherein the identifying information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 10. 
The method of claim 8, wherein the step of determining whether the contact information is associated with the network identifying information in the database comprises determining whether the contact information matches the user identifying information associated with the network identifying information. 11. A system for associating a device with a network, comprising: a network having access to network identifying information; and a first computing device in communication with the network and a second computing device, the first computing device comprising user identifying information associated with the network identifying information, wherein the first computing device is configured to receive contact information from the second computing device, and wherein the first computing device is configured to transmit credentials for accessing the network to the second computing device in response to determining that the user identifying information associated with the network identifying information matches the contact information. 12. The system of claim 11, wherein the network identifying information comprises a persistent address. 13. The system of claim 12, wherein the network identifying information comprises a media access control (MAC) address. 14. The system of claim 11, wherein the user identifying information comprises a name, a telephone number, an email address, an Internet address, or a physical address, or a combination thereof. 15. The system of claim 11, wherein the contact information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 16. The system of claim 11, wherein the network comprises a wireless network. 17. 
A method for pairing a mobile device with a wireless network, comprising: receiving a media access control (MAC) address of a first network at a first computing device; transmitting the MAC address and contact information to a second computing device using a second network; determining whether the contact information matches user identifying information associated with the MAC address; and pairing the first computing device with the first network in response to determining that the contact information matches the user identifying information associated with the MAC address. 18. The method of claim 17, wherein the second network comprises a cellular network. 19. The method of claim 17, wherein the user identifying information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 20. The method of claim 17, wherein the contact information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof.
Disclosed herein are systems and methods for associating a device with a network, such as a wireless network. One method comprises storing network identifying information for a network in a database, receiving contact information from a computing device, determining whether the contact information is associated with the network identifying information in the database, and coupling the computing device with the network in response to determining that the contact information is associated with the network identifying information.1. A method for associating a device with a network, comprising: storing network identifying information for a network in a database; receiving contact information from a computing device; determining whether the contact information is associated with the network identifying information in the database; and coupling the computing device with the network in response to determining that the contact information is associated with the network identifying information. 2. The method of claim 1, wherein the network identifying information comprises a persistent network identifier. 3. The method of claim 2, wherein the persistent network identifier comprises a media access control (MAC) address. 4. The method of claim 1, wherein the contact information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 5. The method of claim 1, wherein the network comprises a wireless network. 6. The method of claim 1, wherein the computing device comprises a mobile device. 7. The method of claim 1, wherein coupling the computing device with the network further comprises transmitting network access credentials to the computing device. 8. 
The method of claim 1, further comprising the steps of: associating user identifying information with the network identifying information; transmitting a request for permission to couple the device with the network using the user identifying information; and determining whether permission to couple the device with the network has been received prior to coupling the device with the network. 9. The method of claim 8, wherein the identifying information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 10. The method of claim 8, wherein the step of determining whether the contact information is associated with the network identifying information in the database comprises determining whether the contact information matches the user identifying information associated with the network identifying information. 11. A system for associating a device with a network, comprising: a network having access to network identifying information; and a first computing device in communication with the network and a second computing device, the first computing device comprising user identifying information associated with the network identifying information, wherein the first computing device is configured to receive contact information from the second computing device, and wherein the first computing device is configured to transmit credentials for accessing the network to the second computing device in response to determining that the user identifying information associated with the network identifying information matches the contact information. 12. The system of claim 11, wherein the network identifying information comprises a persistent address. 13. The system of claim 12, wherein the network identifying information comprises a media access control (MAC) address. 14. 
The system of claim 11, wherein the user identifying information comprises a name, a telephone number, an email address, an Internet address, or a physical address, or a combination thereof. 15. The system of claim 11, wherein the contact information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 16. The system of claim 11, wherein the network comprises a wireless network. 17. A method for pairing a mobile device with a wireless network, comprising: receiving a media access control (MAC) address of a first network at a first computing device; transmitting the MAC address and contact information to a second computing device using a second network; determining whether the contact information matches user identifying information associated with the MAC address; and pairing the first computing device with the first network in response to determining that the contact information matches the user identifying information associated with the MAC address. 18. The method of claim 17, wherein the second network comprises a cellular network. 19. The method of claim 17, wherein the user identifying information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof. 20. The method of claim 17, wherein the contact information comprises a name, a telephone number, an email address, an internet address, or a physical address, or a combination thereof.
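The association flow of claims 1 and 11 above (a registry maps a network's persistent identifier, such as a MAC address, to the owner's user identifying information, and credentials are transmitted only when the submitted contact information matches that record) can be sketched as follows. This is an illustrative reconstruction; the registry layout, names, and sample values are all assumptions, not from the patent.

```python
# Hypothetical registry: persistent network identifier -> owner record.
registry = {
    "aa:bb:cc:dd:ee:ff": {
        "owner_contact": "alice@example.com",
        "credentials": "wpa2-passphrase",
    }
}

def request_access(mac, contact):
    """Return network access credentials only when the contact
    information matches the user identifying information associated
    with the network identifying information (claims 1, 7, 10, 11)."""
    record = registry.get(mac)
    if record is None or record["owner_contact"] != contact:
        return None  # no match: do not couple the device with the network
    return record["credentials"]  # match: transmit access credentials
```

In the claim-17 variant, the lookup request would travel over a second network (e.g. cellular) before the device is paired with the first network.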
2,600
10,345
10,345
15,791,507
2,647
A computer-implemented information verification method, system, and non-transitory computer readable medium, include measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device, measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device, comparing the first signal strength with the second signal strength, and verifying an information, based on a result of said comparing.
1.-20. (canceled) 21. A computer-implemented information verification method, the method comprising: measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device; measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device; comparing the first signal strength with the second signal strength to determine a difference in signal strength between the first signal strength and the second signal strength; and verifying an information to confirm a location of the user device in relation to the second device, based on the difference in the signal strength in a result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device. 22. The method of claim 21, wherein the information comprises a location of the user device, and wherein the result of said comparing is based on a predetermined threshold value. 23. The method of claim 21, wherein the first device is selected from a group consisting of a mobile phone, a laptop, and a personal digital assistant (PDA). 24. The method of claim 21, wherein the second device is selected from a group consisting of an access point, a Wi-Fi hotspot, a network router, and a Bluetooth-enabled device. 25. The method of claim 21, further comprising measuring a plurality of third signal strengths from a plurality of third devices to the user device, wherein the comparing compares the first signal strength with at least some of the third signal strengths to verify a location of the user device. 26. The method of claim 25, wherein the third devices are within a predetermined distance from the location of the user device. 27. 
The method of claim 25, wherein the location of the user device is verified based on a difference between the first signal strength and at least one of the third signal strengths being less than a predetermined threshold value. 28. The method of claim 25, wherein the location of the user device is verified based on a difference between the first signal strength and an average of all of the third signal strengths being less than a predetermined threshold value. 29. The method of claim 21, wherein the method is practiced in a cloud-computing environment. 30. A computer program product for verifying information, the computer program product comprising a computer-readable storage medium having program instructions embodied therewith, readable/executable by a computer, to cause the computer to perform a method comprising: measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device; measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device; comparing the first signal strength with the second signal strength to determine a difference in signal strength between the first signal strength and the second signal strength; and verifying an information to confirm a location of the user device in relation to the second device, based on the difference in the signal strength in a result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device. 31. The computer program product of claim 30, wherein the information comprises a location of the user device, wherein said verifying is based on a difference between the first signal strength and the second signal strength being less than a predetermined threshold value. 32. 
The computer program product of claim 30, wherein the first device is selected from a group consisting of a mobile phone, a laptop, and a personal digital assistant (PDA). 33. The computer program product of claim 30, wherein the second device is selected from a group consisting of a WiFi hotspot, an access point, a network router, and a Bluetooth-enabled device. 34. The computer program product of claim 30, further comprising measuring a plurality of third signal strengths from a plurality of third devices to the user device, wherein the comparing compares the first signal strength with at least some of the third signal strengths to verify a location of the user device. 35. The computer program product of claim 34, wherein the third devices are within a predetermined distance from the location of the user device. 36. A location verification system, said system comprising: a processor; and a memory; the memory storing instructions to cause the processor to: measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device; measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device; comparing the first signal strength with the second signal strength to determine a difference in signal strength between the first signal strength and the second signal strength; and verifying an information to confirm a location of the user device in relation to the second device, based on the difference in the signal strength in a result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device. 37. The system of claim 36, wherein the system is practiced in a cloud-computing environment. 38. 
The system of claim 36, wherein the information comprises a location of the user device, and wherein said verifying is based on a difference between the first signal strength and the second signal strength being less than a predetermined threshold value. 39. The system of claim 36, wherein the second device is selected from a group consisting of a WiFi hotspot, an access point, a network router, and a Bluetooth-enabled device. 40. The computer-implemented information verification method of claim 21, further comprising: measuring a third signal strength between a third device in a predetermined proximity of the second device and the user device, where the third signal strength is measured from the perspective of the user device; wherein the comparing compares the first signal strength with the second signal strength and comparing the first signal strength with the third signal strength to determine the difference in signal strength between the comparison of the first signal strength with the second signal strength and the first signal strength with the third signal strength, and wherein the verifying verifies the information to confirm the location of the user device in relation to the second device and the third device that is in the predetermined proximity of the second device, based on the difference in the signal strength in the result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device.
A computer-implemented information verification method, system, and non-transitory computer readable medium, include measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device, measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device, comparing the first signal strength with the second signal strength, and verifying an information, based on a result of said comparing.1.-20. (canceled) 21. A computer-implemented information verification method, the method comprising: measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device; measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device; comparing the first signal strength with the second signal strength to determine a difference in signal strength between the first signal strength and the second signal strength; and verifying an information to confirm a location of the user device in relation to the second device, based on the difference in the signal strength in a result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device. 22. The method of claim 21, wherein the information comprises a location of the user device, and wherein the result of said comparing is based on a predetermined threshold value. 23. The method of claim 21, wherein the first device is selected from a group consisting of a mobile phone, a laptop, and a personal digital assistant (PDA). 24. 
The method of claim 21, wherein the second device is selected from a group consisting of an access point, a Wi-Fi hotspot, a network router, and a Bluetooth-enabled device. 25. The method of claim 21, further comprising measuring a plurality of third signal strengths from a plurality of third devices to the user device, wherein the comparing compares the first signal strength with at least some of the third signal strengths to verify a location of the user device. 26. The method of claim 25, wherein the third devices are within a predetermined distance from the location of the user device. 27. The method of claim 25, wherein the location of the user device is verified based on a difference between the first signal strength and at least one of the third signal strengths being less than a predetermined threshold value. 28. The method of claim 25, wherein the location of the user device is verified based on a difference between the first signal strength and an average of all of the third signal strengths being less than a predetermined threshold value. 29. The method of claim 21, wherein the method is practiced in a cloud-computing environment. 30. 
A computer program product for verifying information, the computer program product comprising a computer-readable storage medium having program instructions embodied therewith, readable/executable by a computer, to cause the computer to perform a method comprising: measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device; measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device; comparing the first signal strength with the second signal strength to determine a difference in signal strength between the first signal strength and the second signal strength; and verifying an information to confirm a location of the user device in relation to the second device, based on the difference in the signal strength in a result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device. 31. The computer program product of claim 30, wherein the information comprises a location of the user device, wherein said verifying is based on a difference between the first signal strength and the second signal strength being less than a predetermined threshold value. 32. The computer program product of claim 30, wherein the first device is selected from a group consisting of a mobile phone, a laptop, and a personal digital assistant (PDA). 33. The computer program product of claim 30, wherein the second device is selected from a group consisting of a WiFi hotspot, an access point, a network router, and a Bluetooth-enabled device. 34. 
The computer program product of claim 30, further comprising measuring a plurality of third signal strengths from a plurality of third devices to the user device, wherein the comparing compares the first signal strength with at least some of the third signal strengths to verify a location of the user device. 35. The computer program product of claim 34, wherein the third devices are within a predetermined distance from the location of the user device. 36. A location verification system, said system comprising: a processor; and a memory; the memory storing instructions to cause the processor to: measuring a first signal strength from a user device to a second device, wherein the first signal strength is measured from a perspective of the user device; measuring a second signal strength from the second device to the user device, wherein the second signal strength is measured from a perspective of the second device; comparing the first signal strength with the second signal strength to determine a difference in signal strength between the first signal strength and the second signal strength; and verifying an information to confirm a location of the user device in relation to the second device, based on the difference in the signal strength in a result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device. 37. The system of claim 36, wherein the system is practiced in a cloud-computing environment. 38. The system of claim 36, wherein the information comprises a location of the user device, and wherein said verifying is based on a difference between the first signal strength and the second signal strength being less than a predetermined threshold value. 39. The system of claim 36, wherein the second device is selected from a group consisting of a WiFi hotspot, an access point, a network router, and a Bluetooth-enabled device. 40. 
The computer-implemented information verification method of claim 21, further comprising: measuring a third signal strength between a third device in a predetermined proximity of the second device and the user device, where the third signal strength is measured from the perspective of the user device; wherein the comparing compares the first signal strength with the second signal strength and comparing the first signal strength with the third signal strength to determine the difference in signal strength between the comparison of the first signal strength with the second signal strength and the first signal strength with the third signal strength, and wherein the verifying verifies the information to confirm the location of the user device in relation to the second device and the third device that is in the predetermined proximity of the second device, based on the difference in the signal strength in the result of said comparing which indicates a spoofing of the location of the user device according to the difference in relation to an actual location of the user device.
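The core check of claims 21, 31, and 38 above (the location is treated as verified only when the strength the device measures toward the second device and the strength the second device measures toward the user device differ by less than a predetermined threshold; a larger gap indicates spoofing) can be sketched as follows. This is an illustrative reconstruction; the function name, the dBm-style values, and the default threshold are assumptions.

```python
def verify_location(first_strength, second_strength, threshold=10.0):
    """Claim-21-style check: the two perspectives of the same link
    (device -> second device, second device -> device) should roughly
    agree. A difference below the threshold verifies the reported
    location; a larger difference indicates spoofing."""
    return abs(first_strength - second_strength) < threshold

# Readings that roughly agree verify the location;
# a large asymmetry flags a spoofed location report.
verify_location(-52.0, -55.0)   # verified
verify_location(-52.0, -80.0)   # not verified (possible spoofing)
```

The claim-25 extension would repeat the same comparison against readings from several nearby third devices, optionally averaging them as in claim 28.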
2,600
10,346
10,346
15,675,330
2,647
A method and apparatus include receiving a resource allocation in a control information message. The resource allocation includes one or more resource blocks, wherein each of the one or more resource blocks comprises a plurality of subcarriers. An indication is received in the control information message identifying whether one or more guard subcarriers are present on a respective one or more of the edges of at least one resource block of the resource allocation.
1. A method in a device comprising: receiving a resource allocation in a control information message, the resource allocation comprising one or more resource blocks, wherein each of the one or more resource blocks comprises a plurality of subcarriers; receiving an indication in the control information message identifying whether one or more guard subcarriers are present on a respective one or more of the edges of at least one resource block of the resource allocation. 2. A method in accordance with claim 1, wherein the at least one resource block comprises an edge resource block of the resource allocation. 3. A method in accordance with claim 1, wherein the indication in the control information message indicating whether one or more guard subcarriers are present, includes an identification that guard subcarriers are present at one of a top edge of the at least one resource block, a bottom edge of the at least one resource block, or both a top edge of a first resource block of the at least one resource block and a bottom edge of a second resource block of the at least one resource block. 4. A method in accordance with claim 3, wherein the first resource block is a top-most edge resource block of the resource allocation, and the second resource block is a bottom-most edge resource block of the resource allocation. 5. A method in accordance with claim 1, further comprising receiving in control information an indication of the number of guard subcarriers. 6. A method in accordance with claim 5, wherein the number of guard subcarriers is selected to be an identified ratio of the number of guard subcarriers to the number of allocated subcarriers in the resource allocation. 7. A method in accordance with claim 5, wherein the control information is included in the control information message. 8. A method in accordance with claim 5, wherein the control information is included in a received broadcast channel. 9. 
A method in accordance with claim 5, wherein the control information is included in a higher layer control information message, wherein the higher layer is above a physical layer. 10. A method in accordance with claim 1, further comprising receiving the resource allocation for receiving data on a carrier of a serving cell, wherein a first waveform with a first orthogonal frequency division multiplexing (OFDM) subcarrier spacing and a second waveform with a second OFDM subcarrier spacing are frequency division multiplexed on a same carrier. 11. A method in accordance with claim 10, wherein the first waveform with the first OFDM subcarrier spacing is associated with a first numerology, and the second waveform with the second OFDM subcarrier spacing is associated with a second numerology. 12. A method in accordance with claim 11, wherein each of the first numerology and the second numerology can include one or more of a separately defined subcarrier spacing, a separately defined length of cyclic prefix, and a separately defined pilot structure. 13. A method in accordance with claim 11, wherein a symbol size for a first numerology is an integer multiple of a symbol size for a second numerology. 14. A method in accordance with claim 11, further comprising receiving assistance signaling so that the numerology being used by a neighboring cell on the carrier can be determined. 15. A method in accordance with claim 11, wherein the use of multiple numerologies on the carrier for the serving cell and a neighboring cell is coordinated, such that each cell indicates a preferred resource block range for transmission using a numerology of the multiple numerologies. 16. A method in accordance with claim 11, wherein the preferred resource block range for transmission between different numerologies in the serving or neighboring cell is different. 17. 
A method in accordance with claim 11, wherein a default set of values for a default numerology for one or more of the first numerology and the second numerology is dependent upon the carrier frequency band. 18. A method in accordance with claim 17, wherein the same carrier can be subdivided into multiple subcarrier groupings, where each subcarrier grouping can have different default values. 19. A user equipment in a communication network, the user equipment comprising: a transceiver that sends and receives signals between the user equipment and a communication network entity including a resource allocation in a control information message, the resource allocation comprising one or more resource blocks, wherein each of the one or more resource blocks comprises a plurality of subcarriers; and receives an indication in the control information message identifying the presence and position of any guard subcarriers; and a controller that can decode the indication and determine whether one or more guard subcarriers are present on a respective one or more of the edges of at least one resource block of the resource allocation. 20. A user equipment in accordance with claim 19, wherein the controller further identifies from the indication in the control information message, in addition to identifying whether one or more guard subcarriers are present, that guard subcarriers are present at one of a top edge of the at least one resource block, a bottom edge of the at least one resource block, or both a top edge of a first resource block of the at least one resource block and a bottom edge of a second resource block of the at least one resource block.
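Claims 10-13 describe two numerologies multiplexed on one carrier, with the symbol size of one an integer multiple of the other's. Since an OFDM symbol duration (excluding cyclic prefix) is the reciprocal of the subcarrier spacing, scaled spacings yield that integer relationship. A short sketch; the 15 kHz and 30 kHz values are assumed examples, not figures from the claims:

```python
def symbol_duration_us(subcarrier_spacing_khz: float) -> float:
    """OFDM symbol duration in microseconds (cyclic prefix excluded):
    duration = 1 / subcarrier spacing."""
    return 1e3 / subcarrier_spacing_khz

# 15 kHz spacing -> ~66.67 us; 30 kHz spacing -> ~33.33 us.
# The 15 kHz symbol is exactly twice the 30 kHz symbol, so symbol
# boundaries of the two numerologies align periodically on the carrier.
```

This alignment is what lets the two waveforms be frequency division multiplexed on the same carrier without drifting symbol boundaries.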
2,600
10,347
10,347
15,500,552
2,693
A method of implicitly grouping annotations with a document includes, with a projection device, projecting an image of a document onto a touch sensitive pad. The method further includes receiving a number of user-input annotations to the document, and with a processor, implicitly associating the annotations with the document without receiving selection of an annotation grouping mode from a user.
1. A method of implicitly grouping annotations with a document, comprising: with a projection device, projecting an image of a document onto a touch sensitive pad; receiving a number of user-input annotations to the document; and with a processor, implicitly associating the annotations with the document without receiving selection of an annotation grouping mode from a user. 2. The method of claim 1, further comprising: capturing an image of the document with an image capturing device; and with the processor, initiating an isolation mode in which the image of the document is displayed to a user on a display device. 3. The method of claim 2, in which the display device is a touch-sensitive pad on which the image of the document is projected. 4. The method of claim 2, in which the display device is a touch-screen computing device. 5. The method of claim 1, in which implicitly associating the annotations with the document without receiving selection of an annotation mode from a user comprises: grouping the annotations by adding the annotations to a common graphical user interface (GUI) layer; and adding the GUI layer as one of a number of layers associated with the image of the document. 6. The method of claim 1, further comprising receiving user-specified association instructions, the user-specified association instructions defining how the annotation group is edited, in which the edited grouping is treated by the processor as a compound object. 7. The method of claim 6, in which editing the annotation group comprises adding annotations to the annotation group, removing annotations from the annotation group, repositioning grouped annotations relative to each other, or combinations thereof. 8. 
A computer program product for implicitly grouping annotations with a document, the computer program product comprising: a computer readable storage medium comprising computer usable program code embodied therewith, the computer usable program code, when executed by a processor, to detect user-input on a touch sensitive pad; and implicitly associate the user-input as a number of annotations to an image of a document projected on the touch sensitive pad without receiving selection of an annotation mode from a user. 9. The computer program product of claim 8, further comprising: computer usable program code to, when executed by a processor, determine if a captured image comprises a document; and if the captured image is a document: computer usable program code to, when executed by a processor, define a number of fields in the document; and computer usable program code to, when executed by a processor, recognize a number of characters within the document using an optical character recognition process on the document. 10. The computer program product of claim 8, further comprising: computer usable program code to, when executed by the processor, create a bounding box bounding the annotations; and computer usable program code to, when executed by the processor, determine if a subsequently added annotation is outside the bounding box; and if the subsequently added annotation is outside the bounding box: computer usable program code to, when executed by the processor, increase the size of the bounding box to include the subsequent annotation. 11. The computer program product of claim 8, further comprising computer usable program code to, when executed by the processor, determine whether annotations should be grouped or treated as independent annotations based on a number of policies. 12. 
The computer program product of claim 8, further comprising computer usable program code to, when executed by the processor, determine whether the user annotations are ink annotations, text annotations, or imported digital objects. 13. The computer program product of claim 8, in which the image of the document is an image captured by an image capture device coupled to the processor. 14. A system for annotating a document, comprising: an image capture device for capturing an image of a document; an image projection device for projecting the image of the document onto a touch-sensitive pad in the same location and orientation as the original document during the capturing of the image of the document; and a processor to receive a number of user-input annotations to the document and implicitly associate the annotations with the document without receiving a selection of an annotation mode from a user. 15. The system of claim 14, in which the image of the document is an image of a document prepared by a computer program.
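Claim 10 describes growing a bounding box around grouped annotations whenever a subsequent annotation falls outside it. A minimal sketch of that step, assuming annotations are represented as `(x0, y0, x1, y1)` rectangles (a representation of our choosing, not specified in the claims):

```python
def expand_bbox(bbox, annotation):
    """Grow an (x0, y0, x1, y1) group bounding box so it also covers a
    newly added annotation rectangle; start the box from the first one."""
    if bbox is None:
        return annotation
    x0, y0, x1, y1 = bbox
    ax0, ay0, ax1, ay1 = annotation
    return (min(x0, ax0), min(y0, ay0), max(x1, ax1), max(y1, ay1))
```

Calling this for each incoming annotation keeps the group's bounding box current without the user ever selecting a grouping mode, which is the implicit-association behavior the claims describe.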
2,600
10,348
10,348
15,073,233
2,621
Embodiments herein describe a display device for performing wavelength multiplex visualization (WMV). The display device includes a display screen that includes a plurality of pixels where each pixel contains at least two discrete emitters that generate electromagnetic radiation at a certain wavelength. By controlling the luminance of the respective emitter, the display device sets the color of the pixel. When performing WMV, the display device uses the pixels to generate a left eye display frame and a right eye display frame. Generally, the left eye frame is generated using a different set of wavelengths than the right eye frame. The user can wear special glasses that have interference filters in the lenses which permit only one of the wavelengths to pass through. As a result, each eye of the user sees only one of the display frames, thereby creating the 3D effects.
1. A display device, comprising: a display screen comprising a plurality of pixels, each of the pixels comprising at least a first discrete emitter and a second discrete emitter, wherein the first and second emitters are configured to generate visible light with different wavelengths; and a display controller configured to: drive the pixels to output a left eye frame, wherein the left eye frame is associated with a first set of wavelengths; and drive the pixels to output a right eye frame, wherein the right eye frame is associated with a second set of wavelengths different from the first set of wavelengths, and wherein the left eye frame and the right eye frame generate 3D effects when viewed by a user. 2. The display device of claim 1, wherein the first set and second set of wavelengths are predefined to perform wavelength multiplexing visualization. 3. The display device of claim 1, wherein the first set of wavelengths comprises a first shade of a first color and a first shade of a second color, and wherein the second set of wavelengths comprises a second shade of the first color and a second shade of the second color. 4. The display device of claim 3, wherein the first set of wavelengths does not include the second shade of the first color and the second shade of the second color and wherein the second set of wavelengths does not include the first shade of the first color and the first shade of the second color. 5. The display device of claim 1, wherein each of the first and second emitters comprises one of an organic light emitting diode (OLED) and a quantum dot. 6. 
The display device of claim 1, wherein each of the pixels further comprises a third emitter and a fourth emitter, wherein the first and third emitters output visible light of a first color and the second and fourth emitters output visible light of a second color, wherein the first emitter outputs a different shade of the first color than the third emitter and the second emitter outputs a different shade of the second color than the fourth emitter. 7. The display device of claim 6, wherein the first and second emitters output luminance values corresponding to the left eye frame and the third and fourth emitters output luminance values corresponding to the right eye frame, wherein the first, second, third, and fourth emitters output the luminance values for the left and right eye frames simultaneously. 8. The display device of claim 1, wherein the first emitter generates visible light of a first color and the second emitter generates visible light of a second color, wherein the display controller is configured to: during a first time period, drive the first emitter to generate a first shade of the first color and drive the second emitter to generate a first shade of the second color, wherein the first shades comprise wavelengths that are within the first set of wavelengths associated with the left eye frame, and during a second time period, drive the first emitter to generate a second shade of the first color and drive the second emitter to generate a second shade of the second color, wherein the second shades comprise wavelengths that are within the second set of wavelengths associated with the right eye frame. 9. 
The display device of claim 1, further comprising: a tunable optical filter overlaying the plurality of pixels, wherein the display controller is configured to: during a first time period, set a state of the tunable optical filter to filter visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the first set of wavelengths associated with the left eye frame, and during a second time period, change the state of the tunable optical filter to filter the visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the second set of wavelengths associated with the right eye frame. 10. The display device of claim 1, wherein each of the pixels further comprises a third emitter and a fourth emitter, wherein the first and third emitters output visible light of a first color and the second and fourth emitters output visible light of a second color, the display device further comprising: a static optical filter overlaying the plurality of pixels, wherein the static optical filter filters the visible light emitted by the first, second, third, and fourth emitters such that the visible light corresponding to the first emitter is a first shade of the first color, the visible light corresponding to the third emitter is a second shade of the first color, the visible light corresponding to the second emitter is a first shade of the second color, and the visible light corresponding to the fourth emitter is a second shade of the second color. 11. 
A display screen comprising: a plurality of pixels, each of the pixels comprising at least a first discrete emitter and a second discrete emitter, wherein the first and second emitters are configured to generate visible light with different wavelengths, wherein the pixels are configured to output a left eye frame associated with a first set of wavelengths, and wherein the pixels are configured to output a right eye frame associated with a second set of wavelengths different from the first set of wavelengths, and wherein the left eye frame and the right eye frame generate 3D effects when viewed by a user. 12. The display screen of claim 11, wherein the first set and second set of wavelengths are predefined to perform wavelength multiplexing visualization. 13. The display screen of claim 11, wherein the first set of wavelengths comprises a first shade of a first color and a first shade of a second color, and wherein the second set of wavelengths comprises a second shade of the first color and a second shade of the second color. 14. The display screen of claim 13, wherein the first set of wavelengths does not include the second shade of the first color and the second shade of the second color and wherein the second set of wavelengths does not include the first shade of the first color and the first shade of the second color. 15. The display screen of claim 11, wherein each of the pixels further comprises a third emitter and a fourth emitter, wherein the first and third emitters output visible light of a first color and the second and fourth emitters output visible light of a second color, wherein the first emitter outputs a different shade of the first color than the third emitter and the second emitter outputs a different shade of the second color than the fourth emitter. 16. 
The display screen of claim 15, wherein the first and second emitters output luminance values corresponding to the left eye frame and the third and fourth emitters output luminance values corresponding to the right eye frame, wherein the first, second, third, and fourth emitters output the luminance values for the left and right eye frames simultaneously. 17. The display screen of claim 11, wherein the first emitter generates visible light of a first color and the second emitter generates visible light of a second color, wherein, during a first time period, the first emitter is configured to generate a first shade of the first color and the second emitter is configured to generate a first shade of the second color, wherein the first shades comprise wavelengths that are within the first set of wavelengths associated with the left eye frame, and wherein, during a second time period, the first emitter is configured to generate a second shade of the first color and the second emitter is configured to generate a second shade of the second color, wherein the second shades comprise wavelengths that are within the second set of wavelengths associated with the right eye frame. 18. The display screen of claim 11, further comprising: a tunable optical filter overlaying the plurality of pixels, wherein, during a first time period, a state of the tunable optical filter is set to filter visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the first set of wavelengths associated with the left eye frame, and wherein, during a second time period, the state of the tunable optical filter is changed to filter the visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the second set of wavelengths associated with the right eye frame. 19. 
The display screen of claim 18, further comprising: a protective cover, wherein the tunable optical filter is disposed between the protective cover and the plurality of pixels. 20. A computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform an operation, the operation comprising: driving a plurality of pixels to output a left eye frame, wherein the left eye frame is associated with a first set of wavelengths, and wherein each of the pixels comprises at least a first discrete emitter and a second discrete emitter, wherein the first and second emitters are configured to generate visible light with different wavelengths; and driving the pixels to output a right eye frame, wherein the right eye frame is associated with a second set of wavelengths different from the first set of wavelengths, and wherein the left eye frame and the right eye frame generate 3D effects when viewed by a user.
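The abstract describes wavelength multiplex visualization: each eye frame uses a different set of wavelengths (distinct shades of the same primaries), and interference filters in the glasses pass only one set per eye. A sketch of the per-frame wavelength selection; the nanometer values are illustrative assumptions, not taken from the record:

```python
# Assumed example wavelength sets (nm): each eye frame uses different
# shades of the same primaries, so per-eye interference filters can
# separate the two frames.
LEFT_SET = {"red": 629, "green": 532, "blue": 446}
RIGHT_SET = {"red": 615, "green": 518, "blue": 432}

def emitter_wavelength(eye: str, color: str) -> int:
    """Pick which discrete emitter (by wavelength) renders a primary
    color for the given eye's frame."""
    sets = {"left": LEFT_SET, "right": RIGHT_SET}
    return sets[eye][color]
```

Because the two sets are disjoint, the display controller can drive both frames (simultaneously with four emitters per pixel, or time-sequentially with a tunable filter, per the claims) and each eye still sees only its own frame.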
Embodiments herein describe a display device for performing wavelength multiplex visualization (WMV). The display device includes a display screen that includes a plurality of pixels where each pixel contains at least two discrete emitters that generate electromagnetic radiation at a certain wavelength. By controlling the luminance of the respective emitter, the display device sets the color of the pixel. When performing WMV, the display device uses the pixels to generate a left eye display frame and a right eye display frame. Generally, the left eye frame is generated using a different set of wavelengths than the right eye frame. The user can wear special glasses that have interference filters in the lenses which permit only one of the wavelengths to pass through. As a result, each eye of the user sees only one of the display frames, thereby creating the 3D effects.1. A display device, comprising: a display screen comprising a plurality of pixels, each of the pixels comprising at least a first discrete emitter and a second discrete emitter, wherein the first and second emitters are configured to generate visible light with different wavelengths; and a display controller configured to: drive the pixels to output a left eye frame, wherein the left eye frame is associated with a first set of wavelengths; and drive the pixels to output a right eye frame, wherein the right eye frame is associated with a second set of wavelengths different from the first set of wavelengths, and wherein the left eye frame and the right eye frame generate 3D effects when viewed by a user. 2. The display device of claim 1, wherein the first set and second set of wavelengths are predefined to perform wavelength multiplexing visualization. 3. 
The display device of claim 1, wherein the first set of wavelengths comprises a first shade of a first color and a first shade of a second color, and wherein the second set of wavelengths comprises a second shade of the first color and a second shade of the second color. 4. The display device of claim 3, wherein the first set of wavelengths does not include the second shade of the first color and the second shade of the second color and wherein the second set of wavelengths does not include the first shade of the first color and the first shade of the second color. 5. The display device of claim 1, wherein each of the first and second emitters comprises one of an organic light emitting diode (OLED) and a quantum dot. 6. The display device of claim 1, wherein each of the pixels further comprises a third emitter and a fourth emitter, wherein the first and third emitters output visible light of a first color and the second and fourth emitters output visible light of a second color, wherein the first emitter outputs a different shade of the first color than the third emitter and the second emitter outputs a different shade of the second color than the fourth emitter. 7. The display device of claim 6, wherein the first and second emitters output luminance values corresponding to the left eye frame and the third and fourth emitters output luminance values corresponding to the right eye frame, wherein the first, second, third, and fourth emitters output the luminance values for the left and right eye frames simultaneously. 8. 
The display device of claim 1, wherein the first emitter generates visible light of a first color and the second emitter generates visible light of a second color, wherein the display controller is configured to: during a first time period, drive the first emitter to generate a first shade of the first color and drive the second emitter to generate a first shade of the second color, wherein the first shades comprise wavelengths that are within the first set of wavelengths associated with the left eye frame, and during a second time period, drive the first emitter to generate a second shade of the first color and drive the second emitter to generate a second shade of the second color, wherein the second shades comprise wavelengths that are within the second set of wavelengths associated with the right eye frame. 9. The display device of claim 1, further comprising: a tunable optical filter overlaying the plurality of pixels, wherein the display controller is configured to: during a first time period, set a state of the tunable optical filter to filter visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the first set of wavelengths associated with the left eye frame, and during a second time period, change the state of the tunable optical filter to filter the visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the second set of wavelengths associated with the right eye frame. 10. 
The display device of claim 1, wherein each of the pixels further comprises a third emitter and a fourth emitter, wherein the first and third emitters output visible light of a first color and the second and fourth emitters output visible light of a second color, the display device further comprising: a static optical filter overlaying the plurality of pixels, wherein the static optical filter filters the visible light emitted by the first, second, third, and fourth emitters such that the visible light corresponding to the first emitter is a first shade of the first color, the visible light corresponding to the third emitter is a second shade of the first color, the visible light corresponding to the second emitter is a first shade of the second color, and the visible light corresponding to the fourth emitter is a second shade of the second color. 11. A display screen comprising: a plurality of pixels, each of the pixels comprising at least a first discrete emitter and a second discrete emitter, wherein the first and second emitters are configured to generate visible light with different wavelengths, wherein the pixels are configured to output a left eye frame associated with a first set of wavelengths, and wherein the pixels are configured to output a right eye frame associated with a second set of wavelengths different from the first set of wavelengths, and wherein the left eye frame and the right eye frame generate 3D effects when viewed by a user. 12. The display screen of claim 11, wherein the first set and second set of wavelengths are predefined to perform wavelength multiplexing visualization. 13. The display screen of claim 11, wherein the first set of wavelengths comprises a first shade of a first color and a first shade of a second color, and wherein the second set of wavelengths comprises a second shade of the first color and a second shade of the second color. 14. 
The display screen of claim 13, wherein the first set of wavelengths does not include the second shade of the first color and the second shade of the second color and wherein the second set of wavelengths does not include the first shade of the first color and the first shade of the second color. 15. The display screen of claim 11, wherein each of the pixels further comprises a third emitter and a fourth emitter, wherein the first and third emitters output visible light of a first color and the second and fourth emitters output visible light of a second color, wherein the first emitter outputs a different shade of the first color than the third emitter and the second emitter outputs a different shade of the second color than the fourth emitter. 16. The display screen of claim 15, wherein the first and second emitters output luminance values corresponding to the left eye frame and the third and fourth emitters output luminance values corresponding to the right eye frame, wherein the first, second, third, and fourth emitters output the luminance values for the left and right eye frames simultaneously. 17. The display screen of claim 11, wherein the first emitter generates visible light of a first color and the second emitter generates visible light of a second color, wherein, during a first time period, the first emitter is configured to generate a first shade of the first color and the second emitter is configured to generate a first shade of the second color, wherein the first shades comprise wavelengths that are within the first set of wavelengths associated with the left eye frame, and wherein, during a second time period, the first emitter generates a second shade of the first color and the second emitter is configured to generate a second shade of the second color, wherein the second shades comprise wavelengths that are within the second set of wavelengths associated with the right eye frame. 18. 
The display screen of claim 11, further comprising: a tunable optical filter overlaying the plurality of pixels, wherein, during a first time period, a state of the tunable optical filter is set to filter visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the first set of wavelengths associated with the left eye frame, and wherein, during a second time period, the state of the tunable optical filter is changed to filter the visible light generated by the first and second emitters such that the filtered light comprises wavelengths that are within the second set of wavelengths associated with the right eye frame. 19. The display screen of claim 18, further comprising: a protective cover, wherein the tunable optical filter is disposed between the protective cover and the plurality of pixels. 20. A computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform an operation, the operation comprising: driving a plurality of pixels to output a left eye frame, wherein the left eye frame is associated with a first set of wavelengths, and wherein each of the pixels comprises at least a first discrete emitter and a second discrete emitter, wherein the first and second emitters are configured to generate visible light with different wavelengths; and driving the pixels to output a right eye frame, wherein the right eye frame is associated with a second set of wavelengths different from the first set of wavelengths, and wherein the left eye frame and the right eye frame generate 3D effects when viewed by a user.
2,600
10,349
10,349
14,202,627
2,689
A system and methods comprise a touchscreen at a premises. The touchscreen includes a processor coupled to a security system at the premises. User interfaces are presented via the touchscreen. The user interfaces include a security interface that provides control of functions of the security system and access to data collected by the security system, and a network interface that provides access to network devices. A plurality of network devices at the premises is coupled to the touchscreen. A plurality of application programming interfaces (APIs) is coupled to the processor and provides access to the plurality of network devices. The plurality of APIs includes at least one event API and at least one command API. A security server at a remote location is coupled to the touchscreen. The security server comprises a client interface through which remote client devices exchange data with the touchscreen and the security system.
1. A system comprising: an interface device located at a premises and in communication with a security system located at the premises, wherein the interface device is configured to output: a security interface configured to control the security system and access data received by the security system, and a network interface configured to access, based on an event application programming interface (API), event data associated with a premises device, wherein the network interface is configured to send, to the premises device a first uniform resource identifier associated with a function of a command API of the premises device; and a security server in communication with the interface device, wherein the security server is located external to the premises, wherein the security server comprises a client interface configured to communicate data between a user device and one or more of the interface device or the security system. 2. The system of claim 1, wherein the premises device comprises one or more of a camera, a sensor, or an automation device. 3. The system of claim 1, wherein the network interface is configured to receive, from the user device, a command associated with the function. 4. The system of claim 1, wherein the network interface is configured to access, based on the event API, event data associated with the premises device by sending a second uniform resource identifier associated with accessing the event data associated with the premises device. 5. The system of claim 1, wherein the first uniform resource identifier is formatted based on a namespace for accessing functions of the premises device. 6. The system of claim 1, wherein the function comprises an operation to perform based on the first uniform resource identifier. 7. 
The system of claim 1, wherein the function comprises one or more of a function to configure a parameter of the premises device, a reboot function, a firmware upgrade function, or a function to cause the premises device to tunnel to a remote device. 8. A system comprising: a security system located at a premises, wherein the security system comprises a control device and at least one sensor; a premises device located at the premises; and an interface device in communication with the security system and the premises device, wherein the interface device comprises: a security interface configured to control the security system and access data associated with the security system, and a network interface configured to access, via an event application programming interface (API), event data associated with the premises device, wherein the network interface is configured to send, to the premises device, a first uniform resource identifier associated with a function of a command API of the premises device. 9. The system of claim 8, wherein the premises device comprises one or more of a camera, a sensor, or an automation device. 10. The system of claim 8, wherein the network interface is configured to receive, from a user device, a command associated with the function. 11. The system of claim 8, wherein the network interface is configured to access, based on the event API, event data associated with the premises device by sending a second uniform resource identifier associated with accessing the event data associated with the premises device. 12. The system of claim 8, wherein the first uniform resource identifier is formatted based on a namespace for accessing functions of the premises device. 13. (canceled) 14. The system of claim 8, wherein the function comprises one or more of a function to configure a parameter of the premises device, a reboot function, a firmware upgrade function, or a function to cause the premises device to tunnel to a remote device. 15. 
A method comprising: receiving, via an interface device, data associated with a security system located at a premises; receiving, based on an event application programming interface (API), event data associated with a premises device located at the premises; outputting, to a user, one or more of the data associated with a security system and the event data associated with the premises device; receiving, by the interface device, data indicative of a command associated with the premises device; and sending, to the premises device and based on the command, a first uniform resource identifier associated with a function of a command API of the premises device. 16. The method of claim 15, wherein the premises device comprises one or more of a camera, a sensor, or an automation device. 17. The method of claim 15, further comprising sending, to the premises device, a second uniform resource identifier associated with accessing the event data associated with the premises device, and wherein receiving the event data associated with the premises device is in response to sending the second uniform resource identifier. 18. The method of claim 15, wherein the first uniform resource identifier is formatted based on a namespace for accessing functions of the premises device. 19. (canceled) 20. The method of claim 15, wherein the function comprises one or more of a function to configure a parameter of the premises device, a reboot function, a firmware upgrade function, or a function to cause the premises device to tunnel to a remote device. 21. The system of claim 1, wherein the function is associated with accessing data on the premises device. 22. The system of claim 1, wherein the function updates a configuration of the premises device.
A system and methods comprise a touchscreen at a premises. The touchscreen includes a processor coupled to a security system at the premises. User interfaces are presented via the touchscreen. The user interfaces include a security interface that provides control of functions of the security system and access to data collected by the security system, and a network interface that provides access to network devices. A plurality of network devices at the premises is coupled to the touchscreen. A plurality of application programming interfaces (APIs) is coupled to the processor and provides access to the plurality of network devices. The plurality of APIs includes at least one event API and at least one command API. A security server at a remote location is coupled to the touchscreen. The security server comprises a client interface through which remote client devices exchange data with the touchscreen and the security system.1. A system comprising: an interface device located at a premises and in communication with a security system located at the premises, wherein the interface device is configured to output: a security interface configured to control the security system and access data received by the security system, and a network interface configured to access, based on an event application programming interface (API), event data associated with a premises device, wherein the network interface is configured to send, to the premises device a first uniform resource identifier associated with a function of a command API of the premises device; and a security server in communication with the interface device, wherein the security server is located external to the premises, wherein the security server comprises a client interface configured to communicate data between a user device and one or more of the interface device or the security system. 2. The system of claim 1, wherein the premises device comprises one or more of a camera, a sensor, or an automation device. 3. 
The system of claim 1, wherein the network interface is configured to receive, from the user device, a command associated with the function. 4. The system of claim 1, wherein the network interface is configured to access, based on the event API, event data associated with the premises device by sending a second uniform resource identifier associated with accessing the event data associated with the premises device. 5. The system of claim 1, wherein the first uniform resource identifier is formatted based on a namespace for accessing functions of the premises device. 6. The system of claim 1, wherein the function comprises an operation to perform based on the first uniform resource identifier. 7. The system of claim 1, wherein the function comprises one or more of a function to configure a parameter of the premises device, a reboot function, a firmware upgrade function, or a function to cause the premises device to tunnel to a remote device. 8. A system comprising: a security system located at a premises, wherein the security system comprises a control device and at least one sensor; a premises device located at the premises; and an interface device in communication with the security system and the premises device, wherein the interface device comprises: a security interface configured to control the security system and access data associated with the security system, and a network interface configured to access, via an event application programming interface (API), event data associated with the premises device, wherein the network interface is configured to send, to the premises device, a first uniform resource identifier associated with a function of a command API of the premises device. 9. The system of claim 8, wherein the premises device comprises one or more of a camera, a sensor, or an automation device. 10. The system of claim 8, wherein the network interface is configured to receive, from a user device, a command associated with the function. 11. 
The system of claim 8, wherein the network interface is configured to access, based on the event API, event data associated with the premises device by sending a second uniform resource identifier associated with accessing the event data associated with the premises device. 12. The system of claim 8, wherein the first uniform resource identifier is formatted based on a namespace for accessing functions of the premises device. 13. (canceled) 14. The system of claim 8, wherein the function comprises one or more of a function to configure a parameter of the premises device, a reboot function, a firmware upgrade function, or a function to cause the premises device to tunnel to a remote device. 15. A method comprising: receiving, via an interface device, data associated with a security system located at a premises; receiving, based on an event application programming interface (API), event data associated with a premises device located at the premises; outputting, to a user, one or more of the data associated with a security system and the event data associated with the premises device; receiving, by the interface device, data indicative of a command associated with the premises device; and sending, to the premises device and based on the command, a first uniform resource identifier associated with a function of a command API of the premises device. 16. The method of claim 15, wherein the premises device comprises one or more of a camera, a sensor, or an automation device. 17. The method of claim 15, further comprising sending, to the premises device, a second uniform resource identifier associated with accessing the event data associated with the premises device, and wherein receiving the event data associated with the premises device is in response to sending the second uniform resource identifier. 18. The method of claim 15, wherein the first uniform resource identifier is formatted based on a namespace for accessing functions of the premises device. 19. 
(canceled) 20. The method of claim 15, wherein the function comprises one or more of a function to configure a parameter of the premises device, a reboot function, a firmware upgrade function, or a function to cause the premises device to tunnel to a remote device. 21. The system of claim 1, wherein the function is associated with accessing data on the premises device. 22. The system of claim 1, wherein the function updates a configuration of the premises device.
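The claims in this record repeatedly describe a uniform resource identifier "formatted based on a namespace for accessing functions of the premises device." A minimal sketch of such a namespace follows; the path segments and function names are assumptions for illustration, not part of the claimed system.

```python
# Hypothetical command-API URI builder: one URI per premises-device function
# (e.g. reboot, firmware upgrade), formed under an assumed path namespace.

from urllib.parse import quote

def command_uri(device_id, function, params=None):
    """Build a URI addressing one function of a premises device."""
    base = f"/devices/{quote(device_id)}/functions/{quote(function)}"
    if params:
        query = "&".join(
            f"{quote(k)}={quote(str(v))}" for k, v in sorted(params.items())
        )
        return f"{base}?{query}"
    return base

uri = command_uri("camera-01", "reboot")
```

A network interface could then send such a URI to the device to invoke the named function, as the claims describe.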
2,600
10,350
10,350
15,670,572
2,657
A method includes, during a teleconference between a first audio input/output device and a second audio input/output device, receiving, at an analysis and response device, a signal indicating a spoken command, the spoken command associated with a command mode. The method further includes, in response to receiving the signal, generating, at the device, a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode. The one or more devices includes the first audio input/output device, the second audio input/output device, or a combination thereof.
1. A method comprising: during a teleconference between a first audio input/output device and a second audio input/output device, receiving, at an analysis and response device, a signal indicating a spoken command, the spoken command associated with a command mode; and in response to receiving the signal, generating, at the analysis and response device, a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode, wherein the one or more devices include the first audio input/output device, the second audio input/output device, or a combination thereof. 2. The method of claim 1, wherein the signal indicating the spoken command is received from the first audio input/output device, and in response to the command mode corresponding to a local mode, the one or more devices include the first audio input/output device but not the second audio input/output device. 3. The method of claim 1, wherein the analysis and response device corresponds to a bridge device configured to facilitate the teleconference, the method further comprising transmitting the reply message from the analysis and response device to the first audio input/output device. 4. The method of claim 1, wherein the analysis and response device corresponds to the first audio input/output device. 5. The method of claim 1, wherein the signal indicating the spoken command is received from the first audio input/output device, and in response to the command mode corresponding to a broadcast mode, the one or more devices include the first audio input/output device and the second audio input/output device. 6. The method of claim 5, wherein in response to the command mode corresponding to the broadcast mode, the method further includes transmitting data indicating the spoken command to the second audio input/output device. 7. 
The method of claim 1, wherein the spoken command corresponds to a request for information, a request to generate a meeting summary, a request to control a function of a device, or a combination thereof. 8. The method of claim 1, wherein the analysis and response device corresponds to an intermediate device configured to facilitate the communication session between the first audio input/output device and the second audio input/output device. 9. The method of claim 8, wherein the signal indicating the spoken command is received from the first audio input/output device as part of a first audio stream that is distinct from a second audio stream associated with the teleconference. 10. The method of claim 8, wherein the signal indicating the spoken command is received from the first audio input/output device as part of an audio stream associated with the teleconference. 11. The method of claim 1, further comprising: receiving a wake up signal indicating a spoken wake up command; and determining the command mode based on the spoken wake up command. 12. The method of claim 1, further comprising identifying the command mode based on a context associated with the teleconference or the spoken command. 13. The method of claim 12, further comprising identifying the command mode as a local mode in response to the spoken command corresponding to a location. 14. The method of claim 1, further comprising performing echo cancellation at each of the one or more devices to remove or reduce the spoken command in audio signals received at the one or more devices. 15. 
An apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to perform operations including: during a communication session between a first audio input/output device and a second audio input/output device, receiving a signal indicating a spoken command, the spoken command associated with a command mode; and in response to receiving the signal, generating a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode, wherein the one or more devices include the first audio input/output device, the second audio input/output device, or a combination thereof. 16. The apparatus of claim 15, wherein the spoken command is associated with a user and wherein the operations further include: determining, based on a profile associated with the user, whether the user is authorized to initiate the spoken command; and in response to determining that the user is unauthorized to initiate the spoken command, the reply message indicates that the user is unauthorized. 17. The apparatus of claim 15, wherein the operations further include: identifying a keyword spoken during the communication session; and transmitting an indication of the keyword to the first device, wherein the spoken command corresponds to a request to access a media content item associated with the keyword and the reply message includes the media content item. 18. The apparatus of claim 17, wherein the operations further include identifying that the media content item is associated with the keyword based on a tag applied to the media content item. 19. 
A computer-readable storage device storing instructions that, when executed by a processor, cause the processor to perform operations comprising: during a communication session between a first audio input/output device and a second audio input/output device, receiving a signal indicating a spoken command, the spoken command associated with a command mode; and in response to receiving the signal, generating a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode, wherein the one or more devices include the first audio input/output device, the second audio input/output device, or a combination thereof. 20. The computer-readable storage device of claim 19, wherein the operations further include: generating a user interface associated with the communication session based on first data associated with the communication session, second data associated with a previous communication session, or a combination thereof; and transmitting the user interface to the first device and to the second device. 21. The computer-readable storage device of claim 20, wherein the user interface identifies a list of supported spoken commands.
A method includes, during a teleconference between a first audio input/output device and a second audio input/output device, receiving, at an analysis and response device, a signal indicating a spoken command, the spoken command associated with a command mode. The method further includes, in response to receiving the signal, generating, at the device, a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode. The one or more devices includes the first audio input/output device, the second audio input/output device, or a combination thereof.1. A method comprising: during a teleconference between a first audio input/output device and a second audio input/output device, receiving, at an analysis and response device, a signal indicating a spoken command, the spoken command associated with a command mode; and in response to receiving the signal, generating, at the analysis and response device, a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode, wherein the one or more devices include the first audio input/output device, the second audio input/output device, or a combination thereof. 2. The method of claim 1, wherein the signal indicating the spoken command is received from the first audio input/output device, and in response to the command mode corresponding to a local mode, the one or more devices include the first audio input/output device but not the second audio input/output device. 3. The method of claim 1, wherein the analysis and response device corresponds to a bridge device configured to facilitate the teleconference, the method further comprising transmitting the reply message from the analysis and response device to the first audio input/output device. 4. The method of claim 1, wherein the analysis and response device corresponds to the first audio input/output device. 5. 
The method of claim 1, wherein the signal indicating the spoken command is received from the first audio input/output device, and in response to the command mode corresponding to a broadcast mode, the one or more devices include the first audio input/output device and the second audio input/output device. 6. The method of claim 5, wherein in response to the command mode corresponding to the broadcast mode, the method further includes transmitting data indicating the spoken command to the second audio input/output device. 7. The method of claim 1, wherein the spoken command corresponds to a request for information, a request to generate a meeting summary, a request to control a function of a device, or a combination thereof. 8. The method of claim 1, wherein the analysis and response device corresponds to an intermediate device configured to facilitate the communication session between the first audio input/output device and the second audio input/output device. 9. The method of claim 8, wherein the signal indicating the spoken command is received from the first audio input/output device as part of a first audio stream that is distinct from a second audio stream associated with the teleconference. 10. The method of claim 8, wherein the signal indicating the spoken command is received from the first audio input/output device as part of an audio stream associated with the teleconference. 11. The method of claim 1, further comprising: receiving a wake up signal indicating a spoken wake up command; and determining the command mode based on the spoken wake up command. 12. The method of claim 1, further comprising identifying the command mode based on a context associated with the teleconference or the spoken command. 13. The method of claim 12, further comprising identifying the command mode as a local mode in response to the spoken command corresponding to a location. 14. 
The method of claim 1, further comprising performing echo cancellation at each of the one or more devices to remove or reduce the spoken command in audio signals received at the one or more devices. 15. An apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to perform operations including: during a communication session between a first audio input/output device and a second audio input/output device, receiving a signal indicating a spoken command, the spoken command associated with a command mode; and in response to receiving the signal, generating a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode, wherein the one or more devices include the first audio input/output device, the second audio input/output device, or a combination thereof. 16. The apparatus of claim 15, wherein the spoken command is associated with a user and wherein the operations further include: determining, based on a profile associated with the user, whether the user is authorized to initiate the spoken command; and in response to determining that the user is unauthorized to initiate the spoken command, the reply message indicates that the user is unauthorized. 17. The apparatus of claim 15, wherein the operations further include: identifying a keyword spoken during the communication session; and transmitting an indication of the keyword to the first device, wherein the spoken command corresponds to a request to access a media content item associated with the keyword and the reply message includes the media content item. 18. The apparatus of claim 17, wherein the operations further include identifying that the media content item is associated with the keyword based on a tag applied to the media content item. 19. 
A computer-readable storage device storing instructions that, when executed by a processor, cause the processor to perform operations comprising: during a communication session between a first audio input/output device and a second audio input/output device, receiving a signal indicating a spoken command, the spoken command associated with a command mode; and in response to receiving the signal, generating a reply message based on the spoken command, the reply message to be output to one or more devices selected based on the command mode, wherein the one or more devices include the first audio input/output device, the second audio input/output device, or a combination thereof. 20. The computer-readable storage device of claim 19, wherein the operations further include: generating a user interface associated with the communication session based on first data associated with the communication session, second data associated with a previous communication session, or a combination thereof; and transmitting the user interface to the first device and to the second device. 21. The computer-readable storage device of claim 20, wherein the user interface identifies a list of supported spoken commands.
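The claims above describe selecting which devices receive a reply message based on a command mode (e.g. a local mode versus a broadcast mode). A minimal sketch of that routing logic follows; the mode names, the routing table, and the reply format are illustrative assumptions, not taken from the patent text.

```python
# Hypothetical sketch of command-mode routing for a teleconference
# assistant: a reply to a spoken command is delivered either only to the
# device that issued it ("local") or to every device in the session
# ("broadcast"). Mode names and reply format are assumptions.

def select_recipients(command_mode, origin_device, all_devices):
    """Return the devices that should receive the reply message."""
    if command_mode == "local":
        # Local mode: only the device that issued the spoken command.
        return [origin_device]
    if command_mode == "broadcast":
        # Broadcast mode: every device in the communication session.
        return list(all_devices)
    raise ValueError(f"unknown command mode: {command_mode}")

def handle_spoken_command(command, command_mode, origin, devices):
    """Generate a reply and map it onto the selected recipients."""
    reply = f"reply to: {command}"
    return {dev: reply for dev in select_recipients(command_mode, origin, devices)}
```

In this sketch, claim 13's behavior (local mode when the command concerns a location) would amount to choosing `command_mode` from the command's context before calling `handle_spoken_command`.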
2,600
10,351
10,351
15,439,501
2,667
The present invention generally relates to a method for transitioning a device controller comprised with an electronic device from an at least partly inactive mode to an at least partly active mode, the electronic device further comprising a pre-processing module and a fingerprint sensor configured to acquire image data. The invention also relates to a corresponding electronic device and to a computer program product.
1. A method for transitioning a device controller comprised with an electronic device from an at least partly inactive mode to an at least partly active mode, the electronic device further comprising a pre-processing module and a fingerprint sensor configured to acquire image data, said method comprising the steps of: determining the presence of an object at a vicinity of the fingerprint sensor; acquiring, using the fingerprint sensor, image data representative of the object; pre-processing the acquired image data, using the pre-processing module, to determine features indicative of a fingerprint, wherein the device controller is in the at least partly inactive mode; matching, using the pre-processing module, the determined features with at least a set of stored fingerprint features of a finger of a user of the electronic device; generating an instruction to transition the device controller to the at least partly active mode if a result of the matching indicates that the acquired image data corresponds to the at least one finger of the user of the electronic device, providing the acquired image data to the device controller being transitioned to the at least partly active mode; and performing a fingerprint authentication procedure, using the device controller, based on the image data and at least a fingerprint template. 2. The method according to claim 1, wherein the step of matching comprises determining a matching score between the image data and the at least a set of stored fingerprint features, and determining that the image data corresponds to stored fingerprint features of the at least one finger of the user of the electronic device if the matching score exceeds a threshold. 3. The method according to claim 1, wherein a false accept rate of the matching, using the pre-processing module, is substantially higher than a false accept rate of the fingerprint authentication procedure. 4. 
The method according to claim 1, further comprising the steps of: providing, to the device controller, information relating to the matching performed at the pre-processing module, wherein the fingerprint authentication procedure is further based on the information relating to the matching at the pre-processing module. 5. The method according to claim 1, further comprising the steps of: unlocking the electronic device if the fingerprint authentication procedure results in a decision that the image data matches the at least one fingerprint template. 6. The method according to claim 1, wherein the at least one set of stored fingerprint features of one finger of the user comprises predetermined fingerprint ridge flow characteristics. 7. The method according to claim 6, wherein the fingerprint ridge flow characteristics comprises a set of global ridge flow patterns. 8. The method according to claim 7, wherein the set of global ridge flow patterns comprises at least one of information relating to an arch, a tented arch, a right loop, a left loop, and a whorl. 9. The method according to claim 6, wherein the fingerprint ridge flow characteristics comprises a set of local ridge flow descriptors. 10. The method according to claim 9, wherein the local ridge flow descriptors comprises at least one of local ridge orientation, or ridge curvature, or ridge density. 11. The method according to claim 6, further comprising the step of: updating the predetermined fingerprint ridge flow characteristics based on the acquired image data. 12. The method according to claim 1, wherein the pre-processing module is comprised with control circuitry provided with the fingerprint sensor. 13. The method according to claim 1, wherein the pre-processing module is a component of the device controller. 14. The method according to claim 1, wherein the at least partly inactive mode is a low power mode and the at least partly active mode is a normal operational mode for the device controller. 15. 
An electronic device, comprising: a device controller, the device controller configured to be arranged in an at least partly inactive mode or an at least partly active mode; a pre-processing module; and a fingerprint sensor configured to acquire image data, wherein the electronic device is arranged to: determine the presence of an object at a vicinity of the fingerprint sensor; acquire, using the fingerprint sensor, image data representative of the object; pre-process the acquired image data, using the pre-processing module, to determine features indicative of a fingerprint, wherein the device controller is in the at least partly inactive mode; match, using the pre-processing module, the determined features with at least a set of stored fingerprint features of a finger of a user of the electronic device; generate an instruction to transition the device controller from the at least partly inactive mode to the at least partly active mode if a result of the matching indicates that the acquired image data corresponds to the at least one finger of the user of the electronic device; provide the acquired image data to the device controller being transitioned to the at least partly active mode; and perform a fingerprint authentication procedure, using the device controller, based on the image data and at least a fingerprint template. 16. The electronic device according to claim 15, wherein the pre-processing module is comprised with control circuitry provided with the fingerprint sensor. 17. The electronic device according to claim 15, wherein the pre-processing module is a component of the device controller. 18. The electronic device according to claim 15, wherein the fingerprint sensor is a capacitive fingerprint sensor. 19. The electronic device according to claim 15, wherein the electronic device is a mobile phone. 20. 
A computer program product comprising a non-transitory computer readable medium having stored thereon computer program means for controlling an electronic device, the electronic device comprising a device controller configured to be arranged in an at least partly inactive mode or an at least partly active mode, a pre-processing module, and a fingerprint sensor configured to acquire image data, wherein the computer program product comprises: code for determining the presence of an object at a vicinity of the fingerprint sensor; code for acquiring, using the fingerprint sensor, image data representative of the object; code for pre-processing the acquired image data, using the pre-processing module, to determine features indicative of a fingerprint, wherein the device controller is in the at least partly inactive mode; code for matching, using the pre-processing module, the determined features with at least a set of stored fingerprint features of a finger of a user of the electronic device; code for generating an instruction to transition the device controller to an at least partly active mode if a result of the matching indicates that the acquired image data corresponds to the at least one finger of the user of the electronic device; code for providing the acquired image data to the device controller being transitioned to the at least partly active mode; and code for performing a fingerprint authentication procedure, using the device controller, based on the image data and at least a fingerprint template.
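The two-stage flow in these claims (a lightweight pre-processing match with a deliberately higher false-accept rate gating the wake-up of the device controller, which then runs the full authentication) can be sketched as below. The similarity measure, feature representation, and threshold values are illustrative assumptions, not the patent's actual matching algorithm.

```python
# Minimal sketch of two-stage fingerprint gating: a permissive pre-match
# runs while the controller is (at least partly) inactive; only on a
# positive pre-match is the controller woken and the stricter
# authentication performed. All thresholds and features are toy values.

def similarity(features_a, features_b):
    """Toy feature similarity: Jaccard overlap of feature sets."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / max(len(a | b), 1)

def pre_match(features, stored_features, threshold=0.3):
    # Permissive gate (higher false-accept rate, cheap to compute).
    return similarity(features, stored_features) >= threshold

def authenticate(features, template, threshold=0.8):
    # Stricter check run by the device controller after wake-up.
    return similarity(features, template) >= threshold

def process_touch(features, stored_features, template):
    """Full flow: object detected -> pre-match -> wake -> authenticate."""
    if not pre_match(features, stored_features):
        return "controller stays inactive"
    # Pre-match succeeded: controller is transitioned and receives the data.
    return "authenticated" if authenticate(features, template) else "rejected"
```

The design point the claims make is visible here: a weak match wakes the controller but still fails full authentication, so power is spent on authentication only when a finger-like input is plausible.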
2,600
10,352
10,352
14,311,243
2,683
A method of user identification in association with an automated furniture item is provided. In embodiments, a user identification method for an automated furniture item utilizes occupancy detection and proximity detection, such as via a BLE PXP. In some embodiments, a system associated with an automated furniture item is provided, which identifies a particular user's smart device (i.e., a device configured to connect to one or more other devices and/or networks, such as a tablet computing device or smartphone) within range of the automated furniture item controller, and generates a corresponding response based on occupancy detection of that particular user. In another embodiment, one or more environment features may be controlled and/or activated, in association with the automated furniture item, based on the coordinated response of both the proximity indication of user identity and the presence detection of a particular user with respect to the automated furniture item.
1. A user identification method for automated furniture, the method comprising: receiving, by a control component of an automated furniture item, an indication of first user proximity based on a proximity profile of a first user device; based at least in part on the received indication of first user proximity, generating at least one first user-specific command associated with the automated furniture item; and communicating the generated at least one first user-specific command to the first user device. 2. The method of claim 1, further comprising: based on the received indication of first user proximity, communicating a connection request to the first user device; and receiving an indication to connect the first user device to the automated furniture item. 3. The method of claim 1, further comprising: receiving, by the control component of the automated furniture item, a first indication of occupancy detection associated with the automated furniture item; based at least in part on the received indication of first user proximity and the received first indication of occupancy detection, generating the at least one first user-specific command associated with the automated furniture item. 4. The method of claim 3, further comprising: based on the received indication of first user proximity and the received first indication of occupancy detection, communicating a connection request to the first user device; and receiving an indication to connect the first user device to the automated furniture item. 5. The method of claim 3, wherein receiving a first indication of occupancy detection associated with the automated furniture item comprises determining that an occupancy threshold has been met. 6. The method of claim 1, further comprising determining that the first user device was previously connected to the control component of the automated furniture item. 7. 
The method of claim 1, further comprising: receiving, by the control component of the automated furniture item, an indication of a second user proximity based on a proximity profile of a second user device; based at least in part on the received indication of second user proximity, generating at least one second user-specific command associated with the automated furniture item; and communicating the generated at least one second user-specific command to the second user device. 8. The method of claim 7, further comprising: receiving, by the control component of the automated furniture item, a second indication of occupancy detection associated with the automated furniture item; based at least in part on the received indication of second user proximity and the received second indication of occupancy detection, generating the at least one second user-specific command associated with the automated furniture item. 9. A user identification method for automated furniture, the method comprising: monitoring, via a control component of an automated furniture item, a proximity associated with an automated furniture item; receiving an indication that a proximity threshold is satisfied by a first user device; monitoring occupancy detection associated with the automated furniture item; receiving a first indication that an occupancy threshold is satisfied for the automated furniture item; based on one or more of the satisfied proximity threshold and the satisfied occupancy threshold, communicating a notification to connect the first user device to the control component of the automated furniture item; receiving an indication to connect the first user device to the control component; and communicating one or more first user-specific controls to the first user device. 10. 
The method of claim 9, further comprising: receiving an indication that a proximity threshold is satisfied by a second user device; receiving a second indication that an occupancy threshold is satisfied for the automated furniture item; communicating a notification to connect the second user device to the control component of the automated furniture item; receiving an indication to connect the second user device to the control component; and communicating one or more second user-specific controls to the second user device. 11. The method of claim 9, wherein communicating a notification to connect the first user device to the control component of the automated furniture item comprises determining that a notification to connect the first user device to the control component has not been previously sent within a particular time period. 12. The method of claim 9, wherein receiving an indication to connect the first user device to the control component comprises automatically receiving an indication from the first user device in response to the notification communicated to the first user device to connect the first user device to the control component. 13. The method of claim 9, wherein receiving an indication to connect the first user device to the control component comprises receiving an indication from a user of the first user device in response to a prompt associated with the notification communicated to the first user device. 14. The method of claim 9, wherein communicating one or more first user-specific controls to the first user device comprises generating at least one first user-specific command associated with the automated furniture item. 15. The method of claim 14, wherein generating the at least one first user-specific command associated with the automated furniture item comprises generating at least one first user-specific hospitality setting control associated with a user environment of the automated furniture item. 16. 
The method of claim 9, further comprising: in response to 1) monitoring the proximity associated with the automated furniture item and 2) monitoring occupancy detection associated with the automated furniture item, tracking one or more user activities associated with a first user of the first user device. 17. A method for user identification for an automated furniture item utilizing occupancy detection and a BLE PXP, the method comprising: receiving a first indication of proximity of a particular user device in association with the automated furniture item based on a BLE PXP associated with the control component of the automated furniture item, wherein the particular user device is associated with a particular user; receiving a first indication of occupancy in association with the automated furniture item based on an occupancy detection component coupled to the control component of the automated furniture item; and in response to the received first indication of proximity of the particular user device and the received first indication of occupancy, generating one or more control features for the particular user device. 18. The method of claim 17, further comprising: determining whether the one or more control features have been provided to the particular user device; upon determining that the one or more control features have not been provided to the particular user device, communicating the one or more control features to the particular user device for presentation to the particular user; and upon determining that the one or more control features have been provided to the particular user device, monitoring for one or more of a second indication of proximity and a second indication of occupancy. 19. 
The method of claim 17, further comprising: determining that the particular user device was previously coupled to the control component; and in response to determining that the particular user device was previously coupled to the control component, automatically communicating at least one of the one or more control features to the particular user device. 20. The method of claim 17, wherein the one or more control features comprises: one or more environment lighting controls; one or more pre-set articulated positions associated with the automated furniture item; one or more pre-set articulated positions associated with an external automated furniture item coupled to the automated furniture item; one or more HVAC controls; one or more automated furniture item heating controls; one or more automated furniture item cooling controls; one or more heating controls for the external automated furniture item coupled to the automated furniture item; one or more cooling controls for the external automated furniture item coupled to the automated furniture item; one or more health report tracking features; one or more massage controls; one or more home security settings controls; one or more remote monitor controls; one or more remote door locks controls; one or more hospitality pre-set environment controls; and one or more OEM automated furniture item controls.
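The coordinated response in claim 17 — generating user-specific controls only when both a proximity indication (e.g. from a BLE Proximity Profile report) and an occupancy indication are present — can be sketched as follows. The RSSI and weight thresholds and the list of controls are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of proximity + occupancy gating for an automated
# furniture item: controls are generated only when BOTH conditions hold.
# Thresholds and the control list are illustrative assumptions.

def proximity_satisfied(rssi_dbm, threshold_dbm=-70):
    # A stronger (less negative) received signal than the threshold
    # counts as the user device being "near" the furniture controller.
    return rssi_dbm >= threshold_dbm

def occupancy_satisfied(sensor_weight_kg, threshold_kg=20.0):
    # Toy occupancy check: weight on the furniture exceeds a threshold.
    return sensor_weight_kg >= threshold_kg

def control_features_for(user_id, rssi_dbm, sensor_weight_kg):
    """Return user-specific controls, or None if either condition fails."""
    if not (proximity_satisfied(rssi_dbm) and occupancy_satisfied(sensor_weight_kg)):
        return None
    return {"user": user_id,
            "controls": ["lighting", "preset_position", "massage"]}
```

Requiring both signals is what distinguishes this scheme from plain proximity pairing: a user walking past (proximity without occupancy) or an unknown occupant (occupancy without a recognized device) triggers no user-specific response.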
A method of user identification in association with an automated furniture item is provided. In embodiments, a user identification method for an automated furniture item utilizes occupancy detection and proximity detection, such as via a BLE PXP. In some embodiments, a system associated with an automated furniture item is provided, which identifies a particular user's smart device (i.e., a device configured to connect to one or more other devices and/or networks, such as a tablet computing device or smartphone) within range of the automated furniture item controller, and generates a corresponding response based on occupancy detection of that particular user. In another embodiment, one or more environment features may be controlled and/or activated, in association with the automated furniture item, based on the coordinated response of both the proximity indication of user identity and the presence detection of a particular user with respect to the automated furniture item.1. A user identification method for automated furniture, the method comprising: receiving, by a control component of an automated furniture item, an indication of first user proximity based on a proximity profile of a first user device; based at least in part on the received indication of first user proximity, generating at least one first user-specific command associated with the automated furniture item; and communicating the generated at least one first user-specific command to the first user device. 2. The method of claim 1, further comprising: based on the received indication of first user proximity, communicating a connection request to the first user device; and receiving an indication to connect the first user device to the automated furniture item. 3. 
The method of claim 1, further comprising: receiving, by the control component of the automated furniture item, a first indication of occupancy detection associated with the automated furniture item; based at least in part on the received indication of first user proximity and the received first indication of occupancy detection, generating the at least one first user-specific command associated with the automated furniture item. 4. The method of claim 3, further comprising: based on the received indication of first user proximity and the received first indication of occupancy detection, communicating a connection request to the first user device; and receiving an indication to connect the first user device to the automated furniture item. 5. The method of claim 3, wherein receiving a first indication of occupancy detection associated with the automated furniture item comprises determining that an occupancy threshold has been met. 6. The method of claim 1, further comprising determining that the first user device was previously connected to the control component of the automated furniture item. 7. The method of claim 1, further comprising: receiving, by the control component of the automated furniture item, an indication of a second user proximity based on a proximity profile of a second user device; based at least in part on the received indication of second user proximity, generating at least one second user-specific command associated with the automated furniture item; and communicating the generated at least one second user-specific command to the second user device. 8. 
The method of claim 7, further comprising: receiving, by the control component of the automated furniture item, a second indication of occupancy detection associated with the automated furniture item; based at least in part on the received indication of second user proximity and the received second indication of occupancy detection, generating the at least one second user-specific command associated with the automated furniture item. 9. A user identification method for automated furniture, the method comprising: monitoring, via a control component of an automated furniture item, a proximity associated with an automated furniture item; receiving an indication that a proximity threshold is satisfied by a first user device; monitoring occupancy detection associated with the automated furniture item; receiving a first indication that an occupancy threshold is satisfied for the automated furniture item; based on one or more of the satisfied proximity threshold and the satisfied occupancy threshold, communicating a notification to connect the first user device to the control component of the automated furniture item; receiving an indication to connect the first user device to the control component; and communicating one or more first user-specific controls to the first user device. 10. The method of claim 9, further comprising: receiving an indication that a proximity threshold is satisfied by a second user device; receiving a second indication that an occupancy threshold is satisfied for the automated furniture item; communicating a notification to connect the second user device to the control component of the automated furniture item; receiving an indication to connect the second user device to the control component; and communicating one or more second user-specific controls to the second user device. 11. 
The method of claim 9, wherein communicating a notification to connect the first user device to the control component of the automated furniture item comprises determining that a notification to connect the first user device to the control component has not been previously sent within a particular time period. 12. The method of claim 9, wherein receiving an indication to connect the first user device to the control component comprises automatically receiving an indication from the first user device in response to the notification communicated to the first user device to connect the first user device to the control component. 13. The method of claim 9, wherein receiving an indication to connect the first user device to the control component comprises receiving an indication from a user of the first user device in response to a prompt associated with the notification communicated to the first user device. 14. The method of claim 9, wherein communicating one or more first user-specific controls to the first user device comprises generating at least one first user-specific command associated with the automated furniture item. 15. The method of claim 14, wherein generating the at least one first user-specific command associated with the automated furniture item comprises generating at least one first user-specific hospitality setting control associated with a user environment of the automated furniture item. 16. The method of claim 9, further comprising: generating, in response to 1) monitoring the proximity associated with the automated furniture item and 2) monitoring occupancy detection associated with the automated furniture item, tracking one or more user activities associated with a first user of the first user device. 17. 
A method for user identification for an automated furniture item utilizing occupancy detection and a BLE PXP, the method comprising: receiving a first indication of proximity of a particular user device in association with the automated furniture item based on a BLE PXP associated with the control component of the automated furniture item, wherein the particular user device is associated with a particular user; receiving a first indication of occupancy in association with the automated furniture item based on an occupancy detection component coupled to the control component of the automated furniture item; and in response to the received first indication of proximity of the particular user device and the received first indication of occupancy, generating one or more control features for the particular user device. 18. The method of claim 17, further comprising: determining whether the one or more control features have been provided to the particular user device; upon determining that the one or more control features have not been provided to the particular user device, communicating the one or more control features to the particular user device for presentation to the particular user; and upon determining that the one or more control features have been provided to the particular user device, monitoring for one or more of a second indication of proximity and a second indication of occupancy. 19. The method of claim 17, further comprising: determining that the particular user device was previously coupled to the control component; and in response to determining that the particular user device was previously coupled to the control component, automatically communicating at least one of the one or more control features to the particular user device. 20. 
The method of claim 17, wherein the one or more control features comprises: one or more environment lighting controls; one or more pre-set articulated positions associated with the automated furniture item; one or more pre-set articulated positions associated with an external automated furniture item coupled to the automated furniture item; one or more HVAC controls; one or more automated furniture item heating controls; one or more automated furniture item cooling controls; one or more heating controls for the external automated furniture item coupled to the automated furniture item; one or more cooling controls for the external automated furniture item coupled to the automated furniture item; one or more health report tracking features; one or more massage controls; one or more home security settings controls; one or more remote monitor controls; one or more remote door locks controls; one or more hospitality pre-set environment controls; and one or more OEM automated furniture item controls.
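The connect flow recited in claims 9 and 11-14 (proximity threshold plus occupancy threshold gates a one-time connect notification, after which user-specific controls are delivered) can be sketched as follows. This is a minimal illustrative model, not an implementation from the application; all names (`FurnitureController`, `report_proximity`, the example control strings) are hypothetical, and the claim-11 "particular time period" is collapsed to a once-ever check for brevity.

```python
# Hypothetical sketch of the claim-9 flow: a control component offers
# user-specific controls once both the proximity threshold and the
# occupancy threshold are satisfied.
class FurnitureController:
    def __init__(self, proximity_threshold_m=2.0):
        self.proximity_threshold_m = proximity_threshold_m
        self.occupied = False
        self.notified = set()   # devices already sent a connect notification
        self.connected = set()  # devices currently paired

    def report_occupancy(self, occupied):
        """Occupancy detection associated with the furniture item."""
        self.occupied = occupied

    def report_proximity(self, device_id, distance_m):
        """Record a proximity reading; notify the device if both thresholds hold."""
        if distance_m <= self.proximity_threshold_m and self.occupied:
            return self._maybe_notify(device_id)
        return None

    def _maybe_notify(self, device_id):
        # Claim 11: do not re-send a notification already sent
        # (the "particular time period" is simplified to "ever").
        if device_id in self.notified:
            return None
        self.notified.add(device_id)
        return f"connect-offer:{device_id}"

    def accept_connection(self, device_id):
        """Claims 12/13: the device (or its user) accepts; return
        user-specific controls (example strings only)."""
        self.connected.add(device_id)
        return [f"{device_id}:recline-preset", f"{device_id}:massage"]
```

As a usage sketch: once occupancy is reported and a device crosses the proximity threshold, the controller emits a single connect offer, and acceptance yields the per-user control set.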
2,600
10,353
10,353
15,980,149
2,687
A cargo tracking system for a vehicle includes an RFID reader configured to generate an output in response to a signal received from an RFID tag and an electronic control unit communicatively coupled to the RFID reader. The electronic control unit is configured to determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader and determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader.
1. A cargo tracking system for a vehicle comprising: an RFID reader configured to generate an output in response to a signal received from an RFID tag; and an electronic control unit communicatively coupled to the RFID reader and configured to: determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader; and determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 2. The cargo tracking system of claim 1, wherein the RFID tag is individually affixed to an item of cargo in the vehicle and configured to emit a presence-indicating RFID signal. 3. (canceled) 4. The cargo tracking system of claim 1, wherein the electronic control unit is further configured to generate an alert in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 5. The cargo tracking system of claim 1, wherein the electronic control unit is further configured to cause a waypoint of the last location to be recorded. 6. (canceled) 7. The cargo tracking system of claim 1, wherein: the RFID reader is configured to generate an active RFID signal at an active RFID signal strength, and the electronic control unit is further configured to cause the RFID reader to increase the active RFID signal strength based on the output of the RFID reader. 8. 
An electronic control unit for tracking cargo within a vehicle, wherein: the electronic control unit is communicatively coupled to an RFID reader that is configured to generate an output in response to a signal received from an RFID tag; and the electronic control unit is configured to: determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader; and determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 9. The electronic control unit of claim 8, wherein the RFID tag is individually affixed to an item of cargo in the vehicle and configured to emit a presence-indicating RFID signal. 10. (canceled) 11. The electronic control unit of claim 8, wherein the electronic control unit is further configured to generate an alert in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 12. The electronic control unit of claim 11, further configured to cause a waypoint of the last location to be recorded. 13. (canceled) 14. The electronic control unit of claim 8, wherein: the RFID reader is configured to generate an active RFID signal at an active RFID signal strength, and the electronic control unit is further configured to cause the RFID reader to increase the active RFID signal strength based on the output of the RFID reader. 15. 
A cargo tracking system for a vehicle comprising: an RFID reader configured to generate an output in response to a signal received from an RFID tag; and an electronic control unit communicatively coupled to the RFID reader and configured to: determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader; and generate an alert in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader, wherein the electronic control unit is further configured to determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 16. The cargo tracking system of claim 15, wherein the RFID tag is individually affixed to an item of cargo in the vehicle and configured to emit a presence-indicating RFID signal. 17. (canceled) 18. (canceled) 19. The cargo tracking system of claim 15, wherein the electronic control unit is further configured to cause a waypoint of the last location to be recorded and data associated with the waypoint is sent to a mobile device. 20. The cargo tracking system of claim 19, wherein the electronic control unit is further configured to cause a route to the waypoint of the last location to be generated. 21. The cargo tracking system of claim 1, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to generate a lost cargo signal in response to determining that the RFID tag is no longer within the range of the RFID reader. 22. 
The cargo tracking system of claim 1, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to record a waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader; and generate a route to the waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader. 23. The electronic control unit of claim 8, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to generate a lost cargo signal in response to determining that the RFID tag is no longer within the range of the RFID reader. 24. The electronic control unit of claim 8, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to record a waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader; and generate a route to the waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader. 25. 
The cargo tracking system of claim 15, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to generate a lost cargo signal in response to determining that the RFID tag is no longer within the range of the RFID reader. 26. The cargo tracking system of claim 15, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to record a waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader; and generate a route to the waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader.
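The core behavior across claims 1, 4, 5, and 21 (ECU detects that a tag has left reader range, treats the vehicle's last location as the cargo's last-tracked location, records a waypoint, and raises a lost-cargo alert) can be sketched as a polling loop. This is an assumed, simplified model; `CargoTracker`, `update`, and the alert string format are illustrative names, not from the application.

```python
# Hedged sketch of the claim-1 ECU behavior: when a tag stops answering
# the reader, record the vehicle's last known location as a waypoint
# (claim 5) and generate a lost-cargo alert (claims 4/21).
class CargoTracker:
    def __init__(self):
        self.tags_in_range = set()
        self.waypoints = {}  # tag_id -> (lat, lon) where the tag was lost
        self.alerts = []

    def update(self, visible_tags, vehicle_location):
        """Call once per reader poll with the tags currently answering
        and the vehicle's current GPS fix; returns newly lost tags."""
        lost = self.tags_in_range - set(visible_tags)
        for tag in lost:
            # The last vehicle location stands in for the cargo's
            # last-tracked location.
            self.waypoints[tag] = vehicle_location
            self.alerts.append(f"lost:{tag}@{vehicle_location}")
        self.tags_in_range = set(visible_tags)
        return lost
```

A route back to `self.waypoints[tag]` (claims 20/22) would then be a separate navigation step seeded with the stored coordinate.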
A cargo tracking system for a vehicle includes an RFID reader configured to generate an output in response to a signal received from an RFID tag and an electronic control unit communicatively coupled to the RFID reader. The electronic control unit is configured to determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader and determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader.1. A cargo tracking system for a vehicle comprising: an RFID reader configured to generate an output in response to a signal received from an RFID tag; and an electronic control unit communicatively coupled to the RFID reader and configured to: determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader; and determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 2. The cargo tracking system of claim 1, wherein the RFID tag is individually affixed to an item of cargo in the vehicle and configured to emit a presence-indicating RFID signal. 3. (canceled) 4. The cargo tracking system of claim 1, wherein the electronic control unit is further configured to generate an alert in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 5. The cargo tracking system of claim 1, wherein the electronic control unit is further configured to cause a waypoint of the last location to be recorded. 6. (canceled) 7. 
The cargo tracking system of claim 1, wherein: the RFID reader is configured to generate an active RFID signal at an active RFID signal strength, and the electronic control unit is further configured to cause the RFID reader to increase the active RFID signal strength based on the output of the RFID reader. 8. An electronic control unit for tracking cargo within a vehicle, wherein: the electronic control unit is communicatively coupled to an RFID reader that is configured to generate an output in response to a signal received from an RFID tag; and the electronic control unit is configured to: determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader; and determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 9. The electronic control unit of claim 8, wherein the RFID tag is individually affixed to an item of cargo in the vehicle and configured to emit a presence-indicating RFID signal. 10. (canceled) 11. The electronic control unit of claim 8, wherein the electronic control unit is further configured to generate an alert in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 12. The electronic control unit of claim 11, further configured to cause a waypoint of the last location to be recorded. 13. (canceled) 14. The electronic control unit of claim 8, wherein: the RFID reader is configured to generate an active RFID signal at an active RFID signal strength, and the electronic control unit is further configured to cause the RFID reader to increase the active RFID signal strength based on the output of the RFID reader. 15. 
A cargo tracking system for a vehicle comprising: an RFID reader configured to generate an output in response to a signal received from an RFID tag; and an electronic control unit communicatively coupled to the RFID reader and configured to: determine that the RFID tag is no longer within a range of the RFID reader based on the output of the RFID reader; and generate an alert in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader, wherein the electronic control unit is further configured to determine a last location of the vehicle in response to determining that the RFID tag is no longer within the range of the RFID reader based on the output of the RFID reader. 16. The cargo tracking system of claim 15, wherein the RFID tag is individually affixed to an item of cargo in the vehicle and configured to emit a presence-indicating RFID signal. 17. (canceled) 18. (canceled) 19. The cargo tracking system of claim 15, wherein the electronic control unit is further configured to cause a waypoint of the last location to be recorded and data associated with the waypoint is sent to a mobile device. 20. The cargo tracking system of claim 19, wherein the electronic control unit is further configured to cause a route to the waypoint of the last location to be generated. 21. The cargo tracking system of claim 1, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to generate a lost cargo signal in response to determining that the RFID tag is no longer within the range of the RFID reader. 22. 
The cargo tracking system of claim 1, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to record a waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader; and generate a route to the waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader. 23. The electronic control unit of claim 8, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to generate a lost cargo signal in response to determining that the RFID tag is no longer within the range of the RFID reader. 24. The electronic control unit of claim 8, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to record a waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader; and generate a route to the waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader. 25. 
The cargo tracking system of claim 15, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to generate a lost cargo signal in response to determining that the RFID tag is no longer within the range of the RFID reader. 26. The cargo tracking system of claim 15, wherein: the last location of the vehicle, determined in response to determining that the RFID tag is no longer within the range of the RFID reader, is indicative of a last-tracked location of an item of cargo to which the RFID tag is affixed; and the electronic control unit is further configured to record a waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader; and generate a route to the waypoint of the last-tracked location of the item of cargo to which the RFID tag is affixed in response to determining that the RFID tag is no longer within the range of the RFID reader.
2,600
10,354
10,354
15,234,251
2,691
A display device includes a display panel including a plurality of pixels and a drive circuit which displays an image, which corresponds to input data received from outside, on the display panel in a normal operation mode, and displays an image, which corresponds to an analog clock representing a current time, on the display panel based on end point coordinates of clock hands that are internally stored in the drive circuit in a standby mode.
1. A display device comprising: a display panel including a plurality of pixels; and a drive circuit which, displays an image, which corresponds to input data received from outside, on the display panel in a normal operation mode, and displays an image, which corresponds to an analog clock representing a current time, on the display panel based on end point coordinates of clock hands which are internally stored in the drive circuit in a standby mode. 2. The display device of claim 1, wherein, in the standby mode, the drive circuit generates an internal clock signal, determines a current hour hand coordinate and a current minute hand coordinate among the end point coordinates of clock hands based on the internal clock signal, and displays a current hour hand line, which connects a reference coordinate which is internally stored in the drive circuit and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate, on the display panel. 3. The display device of claim 1, wherein the drive circuit includes: a gate driver coupled to the display panel through a plurality of gate lines; a source driver coupled to the display panel through a plurality of data lines; and a controller which, controls operations of the gate driver and the source driver, generates image data corresponding to the input data and provides the image data to the source driver in the normal operation mode, and generates image data corresponding to the analog clock representing the current time based on the end point coordinates of clock hands and an internal clock signal and provides the image data to the source driver in the standby mode. 4. 
The display device of claim 3, wherein the controller includes: a register which stores the end point coordinates of clock hands and a reference coordinate corresponding to a center of the analog clock; an internal clock generator which generates the internal clock signal; and a control circuit which, generates the image data by dividing the input data in a unit of a frame and provides the image data to the source driver in the normal operation mode, and determines the current time based on the internal clock signal, determines a current hour hand coordinate and a current minute hand coordinate, which correspond to the current time, among the end point coordinates of clock hands, generates the image data including a current hour hand line, which connects the reference coordinate and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate, and provides the image data to the source driver in the standby mode. 5. The display device of claim 4, wherein the register includes: a first register which stores hour hand coordinates representing locations of end points of an hour hand at a predetermined time interval; a second register which stores minute hand coordinates representing locations of end points of a minute hand at every minute; and a third register which stores the reference coordinate. 6. The display device of claim 5, wherein, in the standby mode, the control circuit determines the current hour hand coordinate, which corresponds to the current time, among the hour hand coordinates stored in the first register, and determines the current minute hand coordinate, which corresponds to the current time, among the minute hand coordinates stored in the second register. 7. 
The display device of claim 5, wherein, in the standby mode, the control circuit determines the current hour hand coordinate by circularly selecting the hour hand coordinates stored in the first register at each predetermined time interval, and determines the current minute hand coordinate by circularly selecting the minute hand coordinates stored in the second register whenever a minute of the current time is changed. 8. The display device of claim 4, wherein, in the standby mode, the control circuit determines a next minute hand coordinate, which corresponds to a next minute of the current hour, among the end point coordinates of clock hands during an overlap period, which is between a first time at which a minute of the current time is changed and a second time which is prior to the first time by a first time period, generates the image data including the current hour hand line, the current minute hand line, and a next minute hand line, which connects the reference coordinate and the next minute hand coordinate, and provides the image data to the source driver. 9. The display device of claim 8, wherein the current hour hand line and the current minute hand line included in the image data have a first gray level, and the next minute hand line included in the image data has a second gray level lower than the first gray level. 10. The display device of claim 9, wherein, in the standby mode, the source driver displays the current hour hand line and the current minute hand line on the display panel with a first brightness and displays the next minute hand line on the display panel with a second brightness lower than the first brightness based on the image data received from the control circuit. 11. The display device of claim 8, wherein a duration of the overlap period is predetermined. 12. The display device of claim 8, wherein the control circuit adjusts a duration of the overlap period based on an overlap control signal. 13. 
A mobile device comprising: an application processor which, generates a mode signal having a first logic level and outputs input data in a normal operation mode, and generates the mode signal having a second logic level and stops outputting the input data in a standby mode; and a display device which, receives the mode signal, displays an image corresponding to the input data in the normal operation mode, and displays an image corresponding to an analog clock representing a current time based on end point coordinates of clock hands which are internally stored in the display device in a standby mode. 14. The mobile device of claim 13, wherein the display device includes: a display panel including a plurality of pixels; a gate driver coupled to the display panel through a plurality of gate lines; a source driver coupled to the display panel through a plurality of data lines; and a controller which, controls operations of the gate driver and the source driver, receives the mode signal, generates image data corresponding to the input data and provides the image data to the source driver in the normal operation mode, and generates image data corresponding to the analog clock representing the current time based on the end point coordinates of clock hands and an internal clock signal and provides the image data to the source driver in the standby mode. 15. 
The mobile device of claim 14, wherein the controller includes: a register which stores the end point coordinates of clock hands and a reference coordinate corresponding to a center of the analog clock; an internal clock generator which generates the internal clock signal; and a control circuit which, generates the image data by dividing the input data in a unit of a frame and provides the image data to the source driver in the normal operation mode, and determines the current time based on the internal clock signal, determines a current hour hand coordinate and a current minute hand coordinate, which correspond to the current time, among the end point coordinates of clock hands, generates the image data including a current hour hand line, which connects the reference coordinate and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate, and provides the image data to the source driver in the standby mode. 16. The mobile device of claim 15, wherein, in the standby mode, the control circuit determines a next minute hand coordinate, which corresponds to a next minute of the current hour, among the end point coordinates of clock hands during an overlap period, which is between a first time at which a minute of the current time is changed and a second time which is prior to the first time by a first time period, generates the image data, which include the current hour hand line and the current minute hand line with a first gray level, and a next minute hand line connecting the reference coordinate and the next minute hand coordinate with a second gray level lower than the first gray level, and provides the image data to the source driver. 17. The mobile device of claim 13, wherein the mobile device corresponds to a smart watch. 18. 
A method of operating a display device, the method comprising: determining an operation mode; displaying an image, which corresponds to input data received from outside, on a display panel when the operation mode is a normal operation mode; and displaying an image, which corresponds to an analog clock representing a current time, on the display panel based on end point coordinates of clock hands which are internally stored in the display device when the operation mode is a standby mode. 19. The method of claim 18, wherein displaying the image, which corresponds to the analog clock representing the current time, on the display panel based on the end point coordinates of clock hands when the operation mode is the standby mode includes: determining the current time based on an internal clock signal; determining a current hour hand coordinate and a current minute hand coordinate, which correspond to the current time, among the end point coordinates of clock hands; generating image data corresponding to the analog clock which includes a current hour hand line, which connects a reference coordinate which is internally stored in the display device and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate; and displaying the image data on the display panel. 20. 
The method of claim 19, wherein displaying the image, which corresponds to the analog clock representing the current time, on the display panel based on the end point coordinates of clock hands when the operation mode is the standby mode further includes: determining a next minute hand coordinate, which corresponds to a next minute of the current hour, among the end point coordinates of clock hands during an overlap period, which is between a first time at which a minute of the current time is changed and a second time which is prior to the first time by a first time period; and generating the image data, which include the current hour hand line and the current minute hand line with a first gray level, and a next minute hand line connecting the reference coordinate and the next minute hand coordinate with a second gray level lower than the first gray level.
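The register scheme of claims 4-7 (stored end-point coordinates for the hands, a stored reference coordinate at the clock center, and circular selection driven by the current time) can be sketched as a lookup table. This is an assumed geometry for illustration only: the 240x240 center, hand radii, and 60-entry register sizes (hour-hand coordinates at a 12-minute interval, minute-hand coordinates at every minute) are hypothetical choices, not values from the application.

```python
import math

CENTER = (120, 120)              # reference coordinate: analog-clock center (assumed)
RADIUS_HR, RADIUS_MIN = 60, 100  # assumed hand lengths in pixels

def _endpoint(step, steps_total, radius):
    """End-point coordinate for hand position `step` of `steps_total`,
    with step 0 pointing straight up (12 o'clock)."""
    angle = 2 * math.pi * step / steps_total - math.pi / 2
    return (round(CENTER[0] + radius * math.cos(angle)),
            round(CENTER[1] + radius * math.sin(angle)))

# "First register": hour-hand end points at a 12-minute interval (60 entries).
HOUR_COORDS = [_endpoint(i, 60, RADIUS_HR) for i in range(60)]
# "Second register": minute-hand end points at every minute (60 entries).
MIN_COORDS = [_endpoint(i, 60, RADIUS_MIN) for i in range(60)]

def hands_for(hour, minute):
    """Circularly select (claim 7) the current hour-hand and minute-hand
    end-point coordinates for the given time."""
    hour_step = (hour % 12) * 5 + minute // 12  # advance hour hand every 12 min
    return HOUR_COORDS[hour_step], MIN_COORDS[minute]
```

The control circuit of claim 4 would then draw the line from `CENTER` to each returned coordinate; the claim-8 overlap behavior would additionally look up `MIN_COORDS[(minute + 1) % 60]` at a lower gray level.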
A display device includes a display panel including a plurality of pixels and a drive circuit which displays an image, which corresponds to input data received from outside, on the display panel in a normal operation mode, and displays an image, which corresponds to an analog clock representing a current time, on the display panel based on end point coordinates of clock hands that are internally stored in the drive circuit in a standby mode.1. A display device comprising: a display panel including a plurality of pixels; and a drive circuit which, displays an image, which corresponds to input data received from outside, on the display panel in a normal operation mode, and displays an image, which corresponds to an analog clock representing a current time, on the display panel based on end point coordinates of clock hands which are internally stored in the drive circuit in a standby mode. 2. The display device of claim 1, wherein, in the standby mode, the drive circuit generates an internal clock signal, determines a current hour hand coordinate and a current minute hand coordinate among the end point coordinates of clock hands based on the internal clock signal, and displays a current hour hand line, which connects a reference coordinate which is internally stored in the drive circuit and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate, on the display panel. 3. 
The display device of claim 1, wherein the drive circuit includes: a gate driver coupled to the display panel through a plurality of gate lines; a source driver coupled to the display panel through a plurality of data lines; and a controller which, controls operations of the gate driver and the source driver, generates image data corresponding to the input data and provides the image data to the source driver in the normal operation mode, and generates image data corresponding to the analog clock representing the current time based on the end point coordinates of clock hands and an internal clock signal and provides the image data to the source driver in the standby mode. 4. The display device of claim 3, wherein the controller includes: a register which stores the end point coordinates of clock hands and a reference coordinate corresponding to a center of the analog clock; an internal clock generator which generates the internal clock signal; and a control circuit which, generates the image data by dividing the input data in a unit of a frame and provides the image data to the source driver in the normal operation mode, and determines the current time based on the internal clock signal, determines a current hour hand coordinate and a current minute hand coordinate, which correspond to the current time, among the end point coordinates of clock hands, generates the image data including a current hour hand line, which connects the reference coordinate and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate, and provides the image data to the source driver in the standby mode. 5. 
The display device of claim 4, wherein the register includes: a first register which stores hour hand coordinates representing locations of end points of an hour hand at a predetermined time interval; a second register which stores minute hand coordinates representing locations of end points of a minute hand at every minute; and a third register which stores the reference coordinate. 6. The display device of claim 5, wherein, in the standby mode, the control circuit determines the current hour hand coordinate, which corresponds to the current time, among the hour hand coordinates stored in the first register, and determines the current minute hand coordinate, which corresponds to the current time, among the minute hand coordinates stored in the second register. 7. The display device of claim 5, wherein, in the standby mode, the control circuit determines the current hour hand coordinate by circularly selecting the hour hand coordinates stored in the first register at each of the predetermined time interval, and determines the current minute hand coordinate by circularly selecting the minute hand coordinates stored in the second register whenever a minute of the current time is changed. 8. The display device of claim 4, wherein, in the standby mode, the control circuit determines a next minute hand coordinate, which corresponds to a next minute of the current hour, among the end point coordinates of clock hands during an overlap period, which is between a first time at which a minute of the current time is changed and a second time which is prior to the first time by a first time period, generates the image data including the current hour hand line, the current minute hand line, and a next minute hand line, which connects the reference coordinate and the next minute hand coordinate, and provides the image data to the source driver. 9. 
The display device of claim 8, wherein the current hour hand line and the current minute hand line included in the image data have a first gray level, and the next minute hand line included in the image data has a second gray level lower than the first gray level. 10. The display device of claim 9, wherein, in the standby mode, the source driver displays the current hour hand line and the current minute hand line on the display panel with a first brightness and displays the next minute hand line on the display panel with a second brightness lower than the first brightness based on the image data received from the control circuit. 11. The display device of claim 8, wherein a duration of the overlap period is predetermined. 12. The display device of claim 8, wherein the control circuit adjusts a duration of the overlap period based on an overlap control signal. 13. A mobile device comprising: an application processor which, generates a mode signal having a first logic level and outputs input data in a normal operation mode, and generates the mode signal having a second logic level and stops outputting the input data in a standby mode; and a display device which, receives the mode signal, displays an image corresponding to the input data in the normal operation mode, and displays an image corresponding to an analog clock representing a current time based on end point coordinates of clock hands which are internally stored in the display device in a standby mode. 14. 
The mobile device of claim 13, wherein the display device includes: a display panel including a plurality of pixels; a gate driver coupled to the display panel through a plurality of gate lines; a source driver coupled to the display panel through a plurality of data lines; and a controller which, controls operations of the gate driver and the source driver, receives the mode signal, generates image data corresponding to the input data and provides the image data to the source driver in the normal operation mode, and generates image data corresponding to the analog clock representing the current time based on the end point coordinates of clock hands and an internal clock signal and provides the image data to the source driver in the standby mode. 15. The mobile device of claim 14, wherein the controller includes: a register which stores the end point coordinates of clock hands and a reference coordinate corresponding to a center of the analog clock; an internal clock generator which generates the internal clock signal; and a control circuit which, generates the image data by dividing the input data in a unit of a frame and provides the image data to the source driver in the normal operation mode, and determines the current time based on the internal clock signal, determines a current hour hand coordinate and a current minute hand coordinate, which correspond to the current time, among the end point coordinates of clock hands, generates the image data including a current hour hand line, which connects the reference coordinate and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate, and provides the image data to the source driver in the standby mode. 16. 
The mobile device of claim 15, wherein, in the standby mode, the control circuit determines a next minute hand coordinate, which corresponds to a next minute of the current hour, among the end point coordinates of clock hands during an overlap period, which is between a first time at which a minute of the current time is changed and a second time which is prior to the first time by a first time period, generates the image data, which include the current hour hand line and the current minute hand line with a first gray level, and a next minute hand line connecting the reference coordinate and the next minute hand coordinate with a second gray level lower than the first gray level, and provides the image data to the source driver. 17. The mobile device of claim 13, wherein the mobile device corresponds to a smart watch. 18. A method of operating a display device, the method comprising: determining an operation mode; displaying an image, which corresponds to input data received from outside, on a display panel when the operation mode is a normal operation mode; and displaying an image, which corresponds to an analog clock representing a current time, on the display panel based on end point coordinates of clock hands which are internally stored in the display device when the operation mode is a standby mode. 19. 
The method of claim 18, wherein displaying the image, which corresponds to the analog clock representing the current time, on the display panel based on the end point coordinates of clock hands when the operation mode is the standby mode includes: determining the current time based on an internal clock signal; determining a current hour hand coordinate and a current minute hand coordinate, which correspond to the current time, among the end point coordinates of clock hands; generating image data corresponding to the analog clock which includes a current hour hand line, which connects a reference coordinate which is internally stored in the display device and the current hour hand coordinate, and a current minute hand line, which connects the reference coordinate and the current minute hand coordinate; and displaying the image data on the display panel. 20. The method of claim 19, wherein displaying the image, which corresponds to the analog clock representing the current time, on the display panel based on the end point coordinates of clock hands when the operation mode is the standby mode further includes: determining a next minute hand coordinate, which corresponds to a next minute of the current hour, among the end point coordinates of clock hands during an overlap period, which is between a first time at which a minute of the current time is changed and a second time which is prior to the first time by a first time period; and generating the image data, which include the current hour hand line and the current minute hand line with a first gray level, and a next minute hand line connecting the reference coordinate and the next minute hand coordinate with a second gray level lower than the first gray level.
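The claims above describe storing precomputed end point coordinates of clock hands in registers and circularly selecting them at runtime (claims 5-7) rather than computing trigonometry in the standby-mode drive circuit. The following is a minimal illustrative sketch of that lookup-table scheme, not the patented implementation; the center coordinate, radii, and table sizes (60 minute entries, 144 hour entries for a 5-minute interval) are assumptions.

```python
import math

# Assumed reference coordinate (analog clock center) and hand radii.
CENTER = (100, 100)
RADIUS_MINUTE, RADIUS_HOUR = 80, 50

def precompute_hand_coords(steps, radius, center=CENTER):
    """Build a lookup table of hand end point coordinates, one per step.

    Screen coordinates: y grows downward, so 12 o'clock sits above the
    center at a smaller y value (angle offset of -90 degrees).
    """
    coords = []
    for i in range(steps):
        angle = 2 * math.pi * i / steps - math.pi / 2
        x = center[0] + radius * math.cos(angle)
        y = center[1] + radius * math.sin(angle)
        coords.append((round(x), round(y)))
    return coords

# Registers in the claims become plain lookup tables here.
MINUTE_TABLE = precompute_hand_coords(60, RADIUS_MINUTE)   # one per minute
HOUR_TABLE = precompute_hand_coords(144, RADIUS_HOUR)      # 5-minute interval

def current_hand_coords(hour, minute):
    """Circularly select the coordinates for the current time (claims 6-7)."""
    minute_coord = MINUTE_TABLE[minute % 60]
    hour_coord = HOUR_TABLE[((hour % 12) * 12 + minute // 5) % 144]
    return hour_coord, minute_coord
```

At runtime the control circuit would only index these tables each minute and rasterize two lines (center to each selected coordinate), which is the point of the standby-mode design: no trigonometry and no application-processor traffic.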
2,600
10,355
10,355
15,270,426
2,624
A device can include a processor; memory operatively coupled to the processor; a planar display operatively coupled to the processor where the planar display includes an axis normal to the planar display; media circuitry operatively coupled to the processor; motion sensing circuitry operatively coupled to the processor; and processor-executable instructions to instruct the device to, responsive to sensed rotational motion of the device about the axis that corresponds to a rotational reference frame, render video media to the display in a stationary reference frame.
1. A device comprising: a processor; memory operatively coupled to the processor; a planar display operatively coupled to the processor wherein the planar display comprises an axis normal to the planar display; media circuitry operatively coupled to the processor; motion sensing circuitry operatively coupled to the processor; and processor-executable instructions to instruct the device to, responsive to sensed rotational motion of the device about the axis that corresponds to a rotational reference frame, render video media to the display in a stationary reference frame. 2. The device of claim 1 wherein the device comprises a network interface and wherein the video media comprises video media of a video media stream received by the device via the network interface. 3. The device of claim 1 comprising communication circuitry operatively coupled to the processor wherein the communication circuitry is operable for cellular calls that comprise video media and wherein the processor-executable instructions to render the video media render video media of a cellular call. 4. The device of claim 1 comprising a smart phone. 5. The device of claim 1 comprising processor-executable instructions that render information to the planar display in the rotational reference frame while video media is rendered to the display in the stationary reference frame. 6. The device of claim 1 wherein the axis normal to the display is substantially aligned with gravity. 7. The device of claim 6 wherein the sensed rotational motion of the device is substantially sensed rotational motion in a plane defined by the planar display. 8. The device of claim 1 wherein the video media is rendered to a portion of the planar display that is substantially centered on the axis normal to the planar display. 9. The device of claim 1 wherein sensed rotational motion comprises clockwise rotational motion with respect to a viewer facing the planar display. 10. 
The device of claim 1 wherein sensed rotational motion comprises counter-clockwise rotational motion with respect to a viewer facing the planar display. 11. A method comprising: for a device that comprises a processor, memory operatively coupled to the processor, a planar display operatively coupled to the processor wherein the planar display comprises an axis normal to the planar display, media circuitry operatively coupled to the processor, and motion sensing circuitry operatively coupled to the processor, sensing rotational motion of the device that corresponds to a rotational reference frame; and during the sensing of the rotational motion of the device, rendering video media to the planar display in a stationary reference frame. 12. The method of claim 11 wherein the sensing rotational motion comprises sensing clockwise rotational motion of the device with respect to a viewer facing the planar display. 13. The method of claim 11 wherein the sensing rotational motion comprises sensing counter-clockwise rotational motion of the device with respect to a viewer facing the planar display. 14. The method of claim 11 wherein the sensing rotational motion comprises sensing counter-clockwise rotational motion of the device and sensing clockwise rotational motion of the device with respect to a viewer facing the planar display. 15. The method of claim 11 comprising rendering information to the planar display in the rotational reference frame during rendering of the video media in the stationary reference frame. 16. The method of claim 11 wherein the rendering video media comprises rendering video media streamed during a cellular call to the device.
A device can include a processor; memory operatively coupled to the processor; a planar display operatively coupled to the processor where the planar display includes an axis normal to the planar display; media circuitry operatively coupled to the processor; motion sensing circuitry operatively coupled to the processor; and processor-executable instructions to instruct the device to, responsive to sensed rotational motion of the device about the axis that corresponds to a rotational reference frame, render video media to the display in a stationary reference frame.1. A device comprising: a processor; memory operatively coupled to the processor; a planar display operatively coupled to the processor wherein the planar display comprises an axis normal to the planar display; media circuitry operatively coupled to the processor; motion sensing circuitry operatively coupled to the processor; and processor-executable instructions to instruct the device to, responsive to sensed rotational motion of the device about the axis that corresponds to a rotational reference frame, render video media to the display in a stationary reference frame. 2. The device of claim 1 wherein the device comprises a network interface and wherein the video media comprises video media of a video media stream received by the device via the network interface. 3. The device of claim 1 comprising communication circuitry operatively coupled to the processor wherein the communication circuitry is operable for cellular calls that comprise video media and wherein the processor-executable instructions to render the video media render video media of a cellular call. 4. The device of claim 1 comprising a smart phone. 5. The device of claim 1 comprising processor-executable instructions that render information to the planar display in the rotational reference frame while video media is rendered to the display in the stationary reference frame. 6. 
The device of claim 1 wherein the axis normal to the display is substantially aligned with gravity. 7. The device of claim 6 wherein the sensed rotational motion of the device is substantially sensed rotational motion in a plane defined by the planar display. 8. The device of claim 1 wherein the video media is rendered to a portion of the planar display that is substantially centered on the axis normal to the planar display. 9. The device of claim 1 wherein sensed rotational motion comprises clockwise rotational motion with respect to a viewer facing the planar display. 10. The device of claim 1 wherein sensed rotational motion comprises counter-clockwise rotational motion with respect to a viewer facing the planar display. 11. A method comprising: for a device that comprises a processor, memory operatively coupled to the processor, a planar display operatively coupled to the processor wherein the planar display comprises an axis normal to the planar display, media circuitry operatively coupled to the processor, and motion sensing circuitry operatively coupled to the processor, sensing rotational motion of the device that corresponds to a rotational reference frame; and during the sensing of the rotational motion of the device, rendering video media to the planar display in a stationary reference frame. 12. The method of claim 11 wherein the sensing rotational motion comprises sensing clockwise rotational motion of the device with respect to a viewer facing the planar display. 13. The method of claim 11 wherein the sensing rotational motion comprises sensing counter-clockwise rotational motion of the device with respect to a viewer facing the planar display. 14. The method of claim 11 wherein the sensing rotational motion comprises sensing counter-clockwise rotational motion of the device and sensing clockwise rotational motion of the device with respect to a viewer facing the planar display. 15. 
The method of claim 11 comprising rendering information to the planar display in the rotational reference frame during rendering of the video media in the stationary reference frame. 16. The method of claim 11 wherein the rendering video media comprises rendering video media streamed during a cellular call to the device.
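The claims render video in a stationary reference frame while the device rotates about the display normal, which amounts to counter-rotating the video through the sensed rotation angle. A minimal sketch of that transform follows; `sensed_angle_deg` stands in for a gyroscope or accelerometer reading, and this is an assumption about the geometry, not the claimed circuitry.

```python
import math

def stationary_frame_transform(sensed_angle_deg):
    """Return a 2x2 rotation matrix that cancels the sensed device rotation.

    Negating the sensed angle counter-rotates the video so it stays fixed
    in the viewer's (stationary) reference frame.
    """
    theta = math.radians(-sensed_angle_deg)
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def apply(matrix, point):
    """Rotate a display-plane point (relative to the display-normal axis)."""
    x, y = point
    return (matrix[0][0] * x + matrix[0][1] * y,
            matrix[1][0] * x + matrix[1][1] * y)
```

Other UI elements (claim 5 / claim 15) would simply skip this transform and render in the rotational reference frame, i.e. fixed to the device.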
2,600
10,356
10,356
15,492,589
2,613
Systems and methods are provided for rendering a three-dimensional volume. Scan data representing an anatomical object of a patient is acquired. A boundary of the object is identified in the scan data. A first lightmap is positioned inside the boundary of the object. A second lightmap is positioned outside the boundary of the object. The three-dimensional volume of the object is rendered from the scan data with lighting based on the first lightmap and second lightmap.
1. A method for rendering a three-dimensional volume, the method comprising: acquiring scan data representing an object inside of a patient; identifying in the scan data, a boundary of the object; providing a first lightmap inside the boundary of the object; providing a second lightmap outside the boundary of the object; and rendering the three-dimensional volume of the object from the scan data with lighting based on the first lightmap and second lightmap. 2. The method of claim 1, wherein rendering the three-dimensional volume comprises rendering with volumetric path tracing. 3. The method of claim 1, further comprising: rendering a portion of the boundary of the object transparent. 4. The method of claim 3, wherein the transparent portion provides a view of an external object from a camera position inside the object. 5. The method of claim 1, wherein the boundary is identified using segmentation of the scan data. 6. The method of claim 1, wherein the boundary is a wall of the object. 7. The method of claim 1, wherein the first lightmap is spherical and centered at a camera position inside the object, wherein the first lightmap has a radius smaller than a minimum distance from the camera position to the boundary. 8. The method of claim 1, wherein the second lightmap is spherical and centered at a camera position inside the object, wherein the second lightmap has a radius larger than a maximum distance from the camera position to the boundary. 9. The method of claim 1, wherein the first lightmap comprises a reflectance lightmap. 10. The method of claim 1, wherein the second lightmap comprises an illuminance lightmap. 11. The method of claim 1, further comprising: providing a synthetic light source; wherein lighting for the three-dimensional image is further rendered including the synthetic light source. 12. The method of claim 11, wherein the three-dimensional image is further rendered using local shading techniques. 13. 
A method for generating a photorealistic image of an organ, the method comprising: acquiring scan data of the organ; and rendering the scan data to an image with illumination based on a first lightmap positioned inside the organ and a second lightmap positioned outside the organ. 14. The method of claim 13, wherein the first lightmap is spherical and centered at a camera position inside the organ, wherein the first lightmap has a radius smaller than a minimum distance from the camera position to a wall of the organ and wherein the second lightmap is spherical and centered at the camera position inside the organ, wherein the second lightmap has a radius larger than a maximum distance from the camera position to the wall. 15. The method of claim 13, wherein rendering to the image is volumetric path tracing rendering. 16. The method of claim 13, further comprising: transparently rendering to the image with at least a portion of a wall of the organ. 17. A system for rendering a three-dimensional volume, the system comprising: a memory configured to store data representing an object in three dimensions; a graphics processing unit configured to render illumination from a first lightmap positioned inside the object and a second lightmap positioned outside the object; and a processor configured to render an image of the object including the illumination. 18. The system of claim 17, wherein the first lightmap is spherical and centered at a camera position inside the object, wherein the first lightmap has a radius smaller than a minimum distance from the camera position to a wall of the object and wherein the second lightmap is spherical and centered at the camera position inside the object, wherein the second lightmap has a radius larger than a maximum distance from the camera position to the wall. 19. The system of claim 17, wherein the graphics processing unit is further configured to render illumination based on a point light source. 20. 
The system of claim 17, wherein the graphics processing unit is configured to render illumination using volumetric path tracing.
Systems and methods are provided for rendering a three-dimensional volume. Scan data representing an anatomical object of a patient is acquired. A boundary of the object is identified in the scan data. A first lightmap is positioned inside the boundary of the object. A second lightmap is positioned outside the boundary of the object. The three-dimensional volume of the object is rendered from the scan data with lighting based on the first lightmap and second lightmap.1. A method for rendering a three-dimensional volume, the method comprising: acquiring scan data representing an object inside of a patient; identifying in the scan data, a boundary of the object; providing a first lightmap inside the boundary of the object; providing a second lightmap outside the boundary of the object; and rendering the three-dimensional volume of the object from the scan data with lighting based on the first lightmap and second lightmap. 2. The method of claim 1, wherein rendering the three-dimensional volume comprises rendering with volumetric path tracing. 3. The method of claim 1, further comprising: rendering a portion of the boundary of the object transparent. 4. The method of claim 3, wherein the transparent portion provides a view of an external object from a camera position inside the object. 5. The method of claim 1, wherein the boundary is identified using segmentation of the scan data. 6. The method of claim 1, wherein the boundary is a wall of the object. 7. The method of claim 1, wherein the first lightmap is spherical and centered at a camera position inside the object, wherein the first lightmap has a radius smaller than a minimum distance from the camera position to the boundary. 8. The method of claim 1, wherein the second lightmap is spherical and centered at a camera position inside the object, wherein the second lightmap has a radius larger than a maximum distance from the camera position to the boundary. 9. 
The method of claim 1, wherein the first lightmap comprises a reflectance lightmap. 10. The method of claim 1, wherein the second lightmap comprises an illuminance lightmap. 11. The method of claim 1, further comprising: providing a synthetic light source; wherein lighting for the three-dimensional image is further rendered including the synthetic light source. 12. The method of claim 11, wherein the three-dimensional image is further rendered using local shading techniques. 13. A method for generating a photorealistic image of an organ, the method comprising: acquiring scan data of the organ; and rendering the scan data to an image with illumination based on a first lightmap positioned inside the organ and a second lightmap positioned outside the organ. 14. The method of claim 13, wherein the first lightmap is spherical and centered at a camera position inside the organ, wherein the first lightmap has a radius smaller than a minimum distance from the camera position to a wall of the organ and wherein the second lightmap is spherical and centered at the camera position inside the organ, wherein the second lightmap has a radius larger than a maximum distance from the camera position to the wall. 15. The method of claim 13, wherein rendering to the image is volumetric path tracing rendering. 16. The method of claim 13, further comprising: transparently rendering to the image with at least a portion of a wall of the organ. 17. A system for rendering a three-dimensional volume, the system comprising: a memory configured to store data representing an object in three dimensions; a graphics processing unit configured to render illumination from a first lightmap positioned inside the object and a second lightmap positioned outside the object; and a processor configured to render an image of the object including the illumination. 18. 
The system of claim 17, wherein the first lightmap is spherical and centered at a camera position inside the object, wherein the first lightmap has a radius smaller than a minimum distance from the camera position to a wall of the object and wherein the second lightmap is spherical and centered at the camera position inside the object, wherein the second lightmap has a radius larger than a maximum distance from the camera position to the wall. 19. The system of claim 17, wherein the graphics processing unit is further configured to render illumination based on a point light source. 20. The system of claim 17, wherein the graphics processing unit is configured to render illumination using volumetric path tracing.
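Claims 7-8, 14, and 18 constrain the two spherical lightmaps by distance to the boundary: the inner lightmap radius must be smaller than the minimum camera-to-wall distance, the outer radius larger than the maximum. A short sketch of that placement rule, under the assumption that the segmented boundary is available as a point set; the 5% margin factor is illustrative and not from the patent.

```python
import math

def lightmap_radii(camera, boundary_points, margin=0.05):
    """Choose inner/outer lightmap radii per the claimed constraints.

    camera          : (x, y, z) camera position inside the object
    boundary_points : iterable of (x, y, z) points on the segmented wall
    Returns (inner, outer) with inner < min distance and outer > max distance.
    """
    dists = [math.dist(camera, p) for p in boundary_points]
    d_min, d_max = min(dists), max(dists)
    inner = d_min * (1.0 - margin)   # sphere strictly inside the boundary
    outer = d_max * (1.0 + margin)   # sphere strictly enclosing the boundary
    return inner, outer
```

With these radii, the inner (reflectance) lightmap never intersects the wall and the outer (illuminance) lightmap always encloses it, so the volumetric path tracer can sample each on the correct side of the boundary.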
2,600
10,357
10,357
15,853,207
2,613
A texture processor based ray tracing accelerator method and system are described. The system includes a shader, texture processor (TP) and cache, which are interconnected. The TP includes a texture address unit (TA), a texture cache processor (TCP), a filter pipeline unit and a ray intersection engine. The shader sends a texture instruction which contains ray data and a pointer to a bounded volume hierarchy (BVH) node to the TA. The TCP uses an address provided by the TA to fetch BVH node data from the cache. The ray intersection engine performs ray-BVH node type intersection testing using the ray data and the BVH node data. The intersection testing results and indications for BVH traversal are returned to the shader via a texture data return path. The shader reviews the intersection results and the indications to decide how to traverse to the next BVH node.
1. A method for texture processor based ray tracing acceleration, the method comprising: receiving, at a texture processor from a shader, a texture instruction which includes at least a bounded volume hierarchy (BVH) node pointer and ray data; fetching, by the texture processor, BVH node data from a cache based on the BVH node pointer; receiving, by a ray intersection engine of the texture processor, the ray data and the BVH node data; performing ray-BVH node type intersection testing using the ray data and the BVH node data; and sending, by the ray intersection engine via a texture data return path to the shader, intersection results based on the ray-BVH node type intersection testing. 2. The method of claim 1, further comprising: decoding the texture instruction to determine a BVH node type and a data address; and filtering the texture instruction to obtain the ray data. 3. The method of claim 2, further comprising: discarding portions of the ray data based on the BVH node type. 4. The method of claim 2, further comprising: discarding ray direction data when the BVH node type is a box node. 5. The method of claim 2, further comprising: discarding ray inverse direction data when the BVH node type is a triangle node. 6. The method of claim 1, wherein the ray data and the BVH node data are received in waves, the method further comprising: generating at least one transaction based on the ray data and the BVH node data, wherein transactions are not generated for inactive lanes in the waves. 7. The method of claim 1, wherein the ray data and the BVH node data are received in waves, the method further comprising: generating transactions based on the ray data and the BVH node data, wherein each transaction is generated from an active lane in the waves. 8. 
The method of claim 7, wherein the ray data and the BVH node data are received in waves, the method further comprising: writing the intersection results to a buffer based on lane identification to account for inactive lanes in the waves. 9. The method of claim 1, further comprising: advancing traversal by the shader using the intersection results. 10. A texture processor based ray tracing acceleration system, the system comprising: a shader; a cache; a texture processor including at least a ray intersection engine, the texture processor connected to the shader and the cache, wherein the texture processor is configured to: receive, from the shader, a texture instruction which includes at least a bounded volume hierarchy (BVH) node pointer and ray data; and fetch BVH node data from the cache based on the BVH node pointer, wherein the ray intersection engine is configured to: receive the ray data and the BVH node data; perform ray-BVH node type intersection testing using the ray data and the BVH node data; and send intersection results based on the ray-BVH node type intersection testing via a texture data return path to the shader. 11. The system of claim 10, wherein the texture processor is configured to: decode the texture instruction to determine a BVH node type and a data address; and filter the texture instruction to obtain the ray data. 12. The system of claim 11, wherein the texture processor is configured to discard portions of the ray data based on the BVH node type. 13. The system of claim 11, wherein the texture processor is configured to discard ray direction data when the BVH node type is a box node. 14. The system of claim 11, wherein the texture processor is configured to discard ray inverse direction data when the BVH node type is a triangle node. 15. 
The system of claim 10, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is further configured to generate at least one transaction based on the ray data and the BVH node data, wherein transactions are not generated for inactive lanes in the waves. 16. The system of claim 10, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is further configured to generate transactions based on the ray data and the BVH node data, wherein each transaction is generated from active lanes in the waves. 17. The system of claim 16, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is configured to write the intersection results to a buffer based on lane identification to account for inactive lanes in the waves. 18. The system of claim 10, the texture processor further comprising a state machine, wherein the state machine is configured to generate using the intersection results an indicator on how the shader should advance a traversal stack. 19. A texture processor comprising: a texture address unit connected to a shader; a texture cache connected to the texture address unit; a ray intersection engine connected to the texture address unit, the texture cache and the shader; wherein: the texture address unit is configured to: receive from the shader a texture instruction which includes at least a bounded volume hierarchy (BVH) node pointer and ray data; filter the texture instruction to obtain the ray data; fetch BVH node data from the texture cache based on the BVH node pointer, the ray intersection engine is configured to: receive the ray data and the BVH node data; perform ray-BVH node type intersection testing using the ray data and the BVH node data; and send intersection results based on the ray-BVH node type intersection testing via a texture data return path to the shader. 20. 
The texture processor of claim 19, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is further configured to generate transactions based on the ray data and the BVH node data, wherein each transaction is generated from active lanes in the waves.
A texture processor based ray tracing accelerator method and system are described. The system includes a shader, texture processor (TP) and cache, which are interconnected. The TP includes a texture address unit (TA), a texture cache processor (TCP), a filter pipeline unit and a ray intersection engine. The shader sends a texture instruction which contains ray data and a pointer to a bounded volume hierarchy (BVH) node to the TA. The TCP uses an address provided by the TA to fetch BVH node data from the cache. The ray intersection engine performs ray-BVH node type intersection testing using the ray data and the BVH node data. The intersection testing results and indications for BVH traversal are returned to the shader via a texture data return path. The shader reviews the intersection results and the indications to decide how to traverse to the next BVH node.1. A method for texture processor based ray tracing acceleration, the method comprising: receiving, at a texture processor from a shader, a texture instruction which includes at least a bounded volume hierarchy (BVH) node pointer and ray data; fetching, by the texture processor, BVH node data from a cache based on the BVH node pointer; receiving, by a ray intersection engine of the texture processor, the ray data and the BVH node data; performing ray-BVH node type intersection testing using the ray data and the BVH node data; and sending, by the ray intersection engine via a texture data return path to the shader, intersection results based on the ray-BVH node type intersection testing. 2. The method of claim 1, further comprising: decoding the texture instruction to determine a BVH node type and a data address; and filtering the texture instruction to obtain the ray data. 3. The method of claim 2, further comprising: discarding portions of the ray data based on the BVH node type. 4. The method of claim 2, further comprising: discarding ray direction data when the BVH node type is a box node. 5. 
The method of claim 2, further comprising: discarding ray inverse direction data when the BVH node type is a triangle node. 6. The method of claim 1, wherein the ray data and the BVH node data are received in waves, the method further comprising: generating at least one transaction based on the ray data and the BVH node data, wherein transactions are not generated for inactive lanes in the waves. 7. The method of claim 1, wherein the ray data and the BVH node data are received in waves, the method further comprising: generating transactions based on the ray data and the BVH node data, wherein each transaction is generated from an active lane in the waves. 8. The method of claim 7, wherein the ray data and the BVH node data are received in waves, the method further comprising: writing the intersection results to a buffer based on lane identification to account for inactive lanes in the waves. 9. The method of claim 1, further comprising: advancing traversal by the shader using the intersection results. 10. A texture processor based ray tracing acceleration system, the system comprising: a shader; a cache; a texture processor including at least a ray intersection engine, the texture processor connected to the shader and the cache, wherein the texture processor is configured to: receive, from the shader, a texture instruction which includes at least a bounded volume hierarchy (BVH) node pointer and ray data; and fetch BVH node data from the cache based on the BVH node pointer, wherein the ray intersection engine is configured to: receive the ray data and the BVH node data; perform ray-BVH node type intersection testing using the ray data and the BVH node data; and send intersection results based on the ray-BVH node type intersection testing via a texture data return path to the shader. 11. 
The system of claim 10, wherein the texture processor is configured to: decode the texture instruction to determine a BVH node type and a data address; and filter the texture instruction to obtain the ray data. 12. The system of claim 11, wherein the texture processor is configured to discard portions of the ray data based on the BVH node type. 13. The system of claim 11, wherein the texture processor is configured to discard ray direction data when the BVH node type is a box node. 14. The system of claim 11, wherein the texture processor is configured to discard ray inverse direction data when the BVH node type is a triangle node. 15. The system of claim 10, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is further configured to generate at least one transaction based on the ray data and the BVH node data, wherein transactions are not generated for inactive lanes in the waves. 16. The system of claim 10, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is further configured to generate transactions based on the ray data and the BVH node data, wherein each transaction is generated from active lanes in the waves. 17. The system of claim 16, wherein the ray data and the BVH node data are received in waves, and the ray intersection engine is configured to write the intersection results to a buffer based on lane identification to account for inactive lanes in the waves. 18. The system of claim 10, the texture processor further comprising a state machine, wherein the state machine is configured to generate using the intersection results an indicator on how the shader should advance a traversal stack. 19. 
A texture processor comprising: a texture address unit connected to a shader; a texture cache connected to the texture address unit; a ray intersection engine connected to the texture address unit, the texture cache and the shader; wherein: the texture address unit is configured to: receive from the shader a texture instruction which includes at least a bounded volume hierarchy (BVH) node pointer and ray data; filter the texture instruction to obtain the ray data; fetch BVH node data from the texture cache based on the BVH node pointer, the ray intersection engine is configured to: receive the ray data and the BVH node data; perform ray-BVH node type intersection testing using the ray data and the BVH node data; and send intersection results based on the ray-BVH node type intersection testing via a texture data return path to the shader. 20. The texture processor of claim 19, wherein the ray data and the BVH node data are received for waves, and the ray intersection engine is further configured to generate transactions based on the ray data and the BVH node data, wherein each transaction is generated from active lanes in the waves.
2,600
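The ray-BVH box-node intersection testing described in the record above can be illustrated with a minimal sketch. This is not the patented hardware path, just the standard slab method for a ray/axis-aligned-box test; note it takes the ray's *inverse* direction rather than its direction, which is consistent with claim 13's statement that ray direction data is discarded for box nodes. All names here are illustrative, not from the source.

```python
def ray_box_intersect(origin, inv_dir, box_min, box_max, t_max=float("inf")):
    """Slab-method ray/AABB intersection test.

    origin, inv_dir: ray origin and componentwise 1/direction (3-tuples).
    box_min, box_max: corners of the axis-aligned box (BVH box node).
    Returns True if the ray segment [0, t_max] hits the box.
    """
    t0, t1 = 0.0, t_max
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        # Parametric distances to the two slab planes on this axis.
        ta = (lo - o) * inv
        tb = (hi - o) * inv
        if ta > tb:
            ta, tb = tb, ta
        # Shrink the running overlap interval; empty interval means a miss.
        t0 = max(t0, ta)
        t1 = min(t1, tb)
        if t0 > t1:
            return False
    return True
```

In the system claimed above, a test like this runs per lane in the intersection engine, and the hit/miss results flow back to the shader over the texture data return path to drive BVH traversal.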
10,358
10,358
16,193,611
2,646
Software and computer processor implemented system useful for preventing distracted driving. Here the invention's software runs on a smartphone or other computerized device configured to automatically connect to various devices, such as automobile associated Bluetooth peripherals. When operating, the invention determines what peripheral connections are active, and uses these to automatically select responses to various incoming messages and often automatically send these responses.
1. A handheld computerized device configured to help prevent distracted driving, said handheld computerized device comprising: a Bluetooth transceiver, processor, graphical user interface, memory, wireless cellular network transceiver, and reply software; wherein said handheld computerized device is any of a smartphone and tablet computer device; said handheld computerized device including at least one reply message stored in said memory; said handheld computerized device configured to identify that at least one vehicle associated Bluetooth device is Bluetooth connected to said handheld computerized device; said handheld computerized device further configured so that, while it is Bluetooth connected to said at least one vehicle associated Bluetooth device, said handheld computerized device is configured to respond to at least one incoming cellular network message with at least one reply message from said memory. 2. The device of claim 1, wherein said handheld computerized device is configured to send said reply message automatically in response to said at least one incoming cellular network message. 3. The device of claim 1, wherein said Bluetooth connection is a bidirectional connection. 4. The device of claim 3, wherein said Bluetooth connection is a paired connection. 5. The device of claim 1, wherein said handheld computerized device is configured to request and receive user input before sending said reply message in response to at least one incoming cellular network message. 6. The device of claim 1, wherein said vehicle associated Bluetooth device is a peripheral device. 7. The device of claim 1, wherein said vehicle associated Bluetooth device is embedded within a vehicle. 8. The device of claim 1, wherein said at least one incoming cellular network message comprises any of an SMS message and an MMS message. 9. 
The device of claim 1, wherein said handheld computerized device is configured to allow a user to enter said at least one reply message in said memory. 10. A handheld computerized device configured to help prevent distracted driving, said handheld computerized device comprising: a Bluetooth transceiver, processor, graphical user interface, memory, wireless cellular network transceiver, and reply software; wherein said handheld computerized device is any of a smartphone and tablet computer device; said handheld computerized device including at least one reply message stored in said memory; said handheld computerized device configured to identify that at least one vehicle associated Bluetooth device is Bluetooth connected to said handheld computerized device; said handheld computerized device further configured so that, while it is Bluetooth connected to said at least one vehicle associated Bluetooth device, said handheld computerized device is configured to automatically respond to at least one incoming cellular network message with at least one reply message from said memory. 11. The device of claim 10, wherein said Bluetooth connection is a bidirectional connection. 12. The device of claim 11, wherein said Bluetooth connection is a paired connection. 13. The device of claim 10, wherein said vehicle associated Bluetooth device is a peripheral device. 14. The device of claim 10, wherein said vehicle associated Bluetooth device is embedded within a vehicle. 15. The device of claim 10, wherein said at least one incoming cellular network message comprises any of an SMS message and an MMS message. 16. The device of claim 10, wherein said handheld computerized device is configured to allow a user to enter said at least one reply message in said memory. 17. 
A handheld computerized device configured to help prevent distracted driving, said handheld computerized device comprising: a Bluetooth transceiver, processor, graphical user interface, memory, wireless cellular network transceiver, and reply software; wherein said handheld computerized device is any of a smartphone and tablet computer device; said handheld computerized device including at least one reply message stored in said memory; said handheld computerized device configured to identify that at least one vehicle associated Bluetooth device is Bluetooth connected to said handheld computerized device; said handheld computerized device further configured so that, while it is Bluetooth connected to said at least one vehicle associated Bluetooth device, said handheld computerized device is configured to request and receive user input before responding to at least one incoming cellular network message with at least one reply message from said memory. 18. The device of claim 17, wherein said Bluetooth connection is a bidirectional connection. 19. The device of claim 18, wherein said Bluetooth connection is a paired connection. 20. The device of claim 17, wherein said vehicle associated Bluetooth device is a peripheral device. 21. The device of claim 17, wherein said vehicle associated Bluetooth device is embedded within a vehicle. 22. The device of claim 17, wherein said at least one incoming cellular network message comprises any of an SMS message and an MMS message. 23. The device of claim 17, wherein said handheld computerized device is configured to allow a user to enter said at least one reply message in said memory.
Software and computer processor implemented system useful for preventing distracted driving. Here the invention's software runs on a smartphone or other computerized device configured to automatically connect to various devices, such as automobile associated Bluetooth peripherals. When operating, the invention determines what peripheral connections are active, and uses these to automatically select responses to various incoming messages and often automatically send these responses.1. A handheld computerized device configured to help prevent distracted driving, said handheld computerized device comprising: a Bluetooth transceiver, processor, graphical user interface, memory, wireless cellular network transceiver, and reply software; wherein said handheld computerized device is any of a smartphone and tablet computer device; said handheld computerized device including at least one reply message stored in said memory; said handheld computerized device configured to identify that at least one vehicle associated Bluetooth device is Bluetooth connected to said handheld computerized device; said handheld computerized device further configured so that, while it is Bluetooth connected to said at least one vehicle associated Bluetooth device, said handheld computerized device is configured to respond to at least one incoming cellular network message with at least one reply message from said memory. 2. The device of claim 1, wherein said handheld computerized device is configured to send said reply message automatically in response to said at least one incoming cellular network message. 3. The device of claim 1, wherein said Bluetooth connection is a bidirectional connection. 4. The device of claim 3, wherein said Bluetooth connection is a paired connection. 5. The device of claim 1, wherein said handheld computerized device is configured to request and receive user input before sending said reply message in response to at least one incoming cellular network message. 6. 
The device of claim 1, wherein said vehicle associated Bluetooth device is a peripheral device. 7. The device of claim 1, wherein said vehicle associated Bluetooth device is embedded within a vehicle. 8. The device of claim 1, wherein said at least one incoming cellular network message comprises any of an SMS message and an MMS message. 9. The device of claim 1, wherein said handheld computerized device is configured to allow a user to enter said at least one reply message in said memory. 10. A handheld computerized device configured to help prevent distracted driving, said handheld computerized device comprising: a Bluetooth transceiver, processor, graphical user interface, memory, wireless cellular network transceiver, and reply software; wherein said handheld computerized device is any of a smartphone and tablet computer device; said handheld computerized device including at least one reply message stored in said memory; said handheld computerized device configured to identify that at least one vehicle associated Bluetooth device is Bluetooth connected to said handheld computerized device; said handheld computerized device further configured so that, while it is Bluetooth connected to said at least one vehicle associated Bluetooth device, said handheld computerized device is configured to automatically respond to at least one incoming cellular network message with at least one reply message from said memory. 11. The device of claim 10, wherein said Bluetooth connection is a bidirectional connection. 12. The device of claim 11, wherein said Bluetooth connection is a paired connection. 13. The device of claim 10, wherein said vehicle associated Bluetooth device is a peripheral device. 14. The device of claim 10, wherein said vehicle associated Bluetooth device is embedded within a vehicle. 15. The device of claim 10, wherein said at least one incoming cellular network message comprises any of an SMS message and an MMS message. 16. 
The device of claim 10, wherein said handheld computerized device is configured to allow a user to enter said at least one reply message in said memory. 17. A handheld computerized device configured to help prevent distracted driving, said handheld computerized device comprising: a Bluetooth transceiver, processor, graphical user interface, memory, wireless cellular network transceiver, and reply software; wherein said handheld computerized device is any of a smartphone and tablet computer device; said handheld computerized device including at least one reply message stored in said memory; said handheld computerized device configured to identify that at least one vehicle associated Bluetooth device is Bluetooth connected to said handheld computerized device; said handheld computerized device further configured so that, while it is Bluetooth connected to said at least one vehicle associated Bluetooth device, said handheld computerized device is configured to request and receive user input before responding to at least one incoming cellular network message with at least one reply message from said memory. 18. The device of claim 17, wherein said Bluetooth connection is a bidirectional connection. 19. The device of claim 18, wherein said Bluetooth connection is a paired connection. 20. The device of claim 17, wherein said vehicle associated Bluetooth device is a peripheral device. 21. The device of claim 17, wherein said vehicle associated Bluetooth device is embedded within a vehicle. 22. The device of claim 17, wherein said at least one incoming cellular network message comprises any of an SMS message and an MMS message. 23. The device of claim 17, wherein said handheld computerized device is configured to allow a user to enter said at least one reply message in said memory.
2,600
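The auto-reply behavior claimed in the record above (reply to an incoming cellular message only while a vehicle-associated Bluetooth device is connected) reduces to a small decision function. This is a sketch under stated assumptions: the device identifiers, the reply text, and the function names are all hypothetical, and real handset integration would hook into the platform's Bluetooth and messaging APIs.

```python
# Hypothetical identifiers for vehicle-associated Bluetooth peripherals.
CAR_BLUETOOTH_IDS = {"car-handsfree", "car-audio"}

def handle_incoming_message(sender, active_bt_connections,
                            reply_text="I'm driving; I'll reply when it's safe."):
    """Return (recipient, reply) to auto-send, or None when no vehicle
    device is among the currently connected Bluetooth peripherals."""
    if CAR_BLUETOOTH_IDS & set(active_bt_connections):
        return (sender, reply_text)
    return None
```

The claims cover both this fully automatic variant (claim 10) and a variant that requests user confirmation before sending (claim 17); the latter would insert a prompt step before returning the reply.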
10,359
10,359
15,891,555
2,612
An apparatus and a method for generating 3-dimensional computer graphic images. The image is first sub-divided into a plurality of rectangular areas. A display list memory is loaded with object data for each rectangular area. The image and shading data for each picture element of each rectangular area are derived from the object data in the image synthesis processor and a texturizing and shading processor. A depth range generator derives a depth range for each rectangular area from the object data as the image and shading data are derived. This is compared with the depth of each new object to be provided to the image synthesis processor and the object may be prevented from being provided to the image synthesis processor in dependence on the result of the comparison.
1. A graphics renderer for rendering a scene having an image plane divided into a set of one or more tiles each having an associated list stored in memory that contains object pointers for objects overlapping that tile, each object pointer containing information on a depth range of the object, the renderer comprising: a fetch unit configured to read the object pointers for objects identified in the tile list for a tile being rendered, perform a depth range test for each object identified for the tile being rendered to compare the depth range of the object with a received depth range for the tile, and read, from memory, parameter data only for objects that pass the depth range test; and a rendering processor configured to render each tile of the set of one or more tiles using the object pointers and parameter data fetched by the fetch unit; wherein the fetch unit is further configured to receive an updated depth range for the tile being rendered as objects in the tile list for that tile are processed by the rendering processor. 2. The graphics renderer as claimed in claim 1, wherein the rendering processor is further configured to calculate per-pixel depth values for each object processed as part of rendering a tile. 3. The graphics renderer as claimed in claim 2, wherein the fetch unit is further configured to receive an updated depth range for the tile being rendered that is determined using the per-pixel depth values for objects that have been processed for that tile. 4. The graphics renderer as claimed in claim 1, wherein the graphics renderer further comprises a range generation unit configured to compute a depth range for a tile being rendered by the rendering processor that represents the range of depth values for objects that have been processed for that tile, and to feed back computed depth ranges to the fetch unit for the tile being rendered by the rendering processor for use in depth range tests for objects identified in the tile list for that tile. 5. 
The graphics renderer as claimed in claim 1, wherein the graphics renderer further comprises a tiling unit configured to receive data for a plurality of objects; calculate the tiles overlapped by each object; derive a depth range for each object; and write for each object an object pointer into per-tile lists stored in a memory only for tiles that are overlapped by the object, each object pointer containing information on the depth range of the object. 6. The graphics renderer as claimed in claim 1, wherein the fetch unit is further configured to perform a single test only for each object identified for the tile being rendered to compare the depth range of the object with the received depth range for that tile. 7. The graphics renderer as claimed in claim 1, wherein the fetch unit is further configured to read the object pointers in the tile list for the tile being rendered by the rendering processor. 8. The graphics renderer as claimed in claim 1, wherein the renderer further comprises a counter configured to increment each time parameter data for an object is read from memory, and the fetch unit is configured to disable avoidance of reading parameter data for objects that fail the depth range test at the fetch unit when the counter has a value below a specified threshold. 9. The graphics renderer as claimed in claim 8, wherein the renderer is further configured to render objects according to a graphics pipeline, and the specified threshold of the counter is equal to the maximum number of objects that can be held in the pipeline. 10. The graphics renderer as claimed in claim 8, wherein the fetch unit is further configured to perform the depth range test in dependence on a depth compare mode specifying the conditions an object is to satisfy to pass the depth range test, and the counter is configured to reset in response to a change in the depth compare mode. 11. 
A method for rendering a scene having an image plane divided into a set of one or more tiles each having an associated list stored in memory that contains object pointers for objects overlapping that tile, each object pointer containing information on a depth range of the object, the method comprising: reading at a fetch unit the object pointers for objects identified in the tile list for a tile being rendered; performing at the fetch unit a depth range test for each object identified for the tile being rendered to compare the depth range of the object with a received depth range for the tile; reading, from memory, parameter data only for objects that pass the depth range test; rendering each tile of the set of one or more tiles at a rendering processor using the object pointers and parameter data fetched by the fetch unit; and receiving at the fetch unit an updated depth range for the tile being rendered as objects in the tile list for that tile are processed by the rendering processor. 12. The method as claimed in claim 11, wherein the step of rendering each tile of the set of one or more tiles comprises calculating per-pixel depth values for each object processed as part of rendering the tile. 13. The method as claimed in claim 12, wherein the updated depth range for the tile being rendered received at the fetch unit is determined using the per-pixel depth values for objects that have been processed for that tile. 14. The method as claimed in claim 11, wherein the method further comprises: computing at a range generation unit a depth range for a tile being rendered by the rendering processor that represents the range of depth values for objects that have been processed for that tile; and feeding back computed depth ranges to the fetch unit for the tile being rendered by the rendering processor for use in depth range tests for objects identified in the tile list for that tile. 15. 
The method as claimed in claim 11, wherein the method further comprises: receiving data for a plurality of objects at a tiling unit; calculating the tiles overlapped by each object at the tiling unit; and deriving a depth range for each object at the tiling unit and writing for each object an object pointer into per-tile lists stored in a memory only for tiles that are overlapped by the object, each object pointer containing information on the depth range of the object. 16. The method as claimed in claim 11, wherein the step of performing the depth range test at the fetch unit comprises performing a single test only for each object identified for the tile being rendered to compare the depth range of the object with the received depth range for that tile. 17. The method as claimed in claim 11, wherein the step of reading at the fetch unit the object pointers further comprises reading the object pointers in the tile list for the tile being rendered by the rendering processor. 18. The method as claimed in claim 11, wherein the method further comprises incrementing a counter each time parameter data for an object is read from memory, and disabling avoidance of reading parameter data for objects that fail the depth range test at the fetch unit when the counter has a value below a specified threshold. 19. The method as claimed in claim 18, wherein the objects are rendered according to a graphics pipeline, and the specified threshold of the counter is equal to the maximum number of objects that can be held in the pipeline. 20. The method as claimed in claim 18, wherein the depth range test at the fetch unit is performed in dependence on a depth compare mode specifying the conditions an object is to satisfy to pass the depth range test, the method further comprising resetting the counter in response to a change in the depth compare mode.
An apparatus and a method for generating 3-dimensional computer graphic images. The image is first sub-divided into a plurality of rectangular areas. A display list memory is loaded with object data for each rectangular area. The image and shading data for each picture element of each rectangular area are derived from the object data in the image synthesis processor and a texturizing and shading processor. A depth range generator derives a depth range for each rectangular area from the object data as the image and shading data are derived. This is compared with the depth of each new object to be provided to the image synthesis processor and the object may be prevented from being provided to the image synthesis processor in dependence on the result of the comparison.1. A graphics renderer for rendering a scene having an image plane divided into a set of one or more tiles each having an associated list stored in memory that contains object pointers for objects overlapping that tile, each object pointer containing information on a depth range of the object, the renderer comprising: a fetch unit configured to read the object pointers for objects identified in the tile list for a tile being rendered, perform a depth range test for each object identified for the tile being rendered to compare the depth range of the object with a received depth range for the tile, and read, from memory, parameter data only for objects that pass the depth range test; and a rendering processor configured to render each tile of the set of one or more tiles using the object pointers and parameter data fetched by the fetch unit; wherein the fetch unit is further configured to receive an updated depth range for the tile being rendered as objects in the tile list for that tile are processed by the rendering processor. 2. 
The graphics renderer as claimed in claim 1, wherein the rendering processor is further configured to calculate per-pixel depth values for each object processed as part of rendering a tile. 3. The graphics renderer as claimed in claim 2, wherein the fetch unit is further configured to receive an updated depth range for the tile being rendered that is determined using the per-pixel depth values for objects that have been processed for that tile. 4. The graphics renderer as claimed in claim 1, wherein the graphics renderer further comprises a range generation unit configured to compute a depth range for a tile being rendered by the rendering processor that represents the range of depth values for objects that have been processed for that tile, and to feed back computed depth ranges to the fetch unit for the tile being rendered by the rendering processor for use in depth range tests for objects identified in the tile list for that tile. 5. The graphics renderer as claimed in claim 1, wherein the graphics renderer further comprises a tiling unit configured to receive data for a plurality of objects; calculate the tiles overlapped by each object; derive a depth range for each object; and write for each object an object pointer into per-tile lists stored in a memory only for tiles that are overlapped by the object, each object pointer containing information on the depth range of the object. 6. The graphics renderer as claimed in claim 1, wherein the fetch unit is further configured to perform a single test only for each object identified for the tile being rendered to compare the depth range of the object with the received depth range for that tile. 7. The graphics renderer as claimed in claim 1, wherein the fetch unit is further configured to read the object pointers in the tile list for the tile being rendered by the rendering processor. 8. 
The graphics renderer as claimed in claim 1, wherein the renderer further comprises a counter configured to increment each time parameter data for an object is read from memory, and the fetch unit is configured to disable avoidance of reading parameter data for objects that fail the depth range test at the fetch unit when the counter has a value below a specified threshold. 9. The graphics renderer as claimed in claim 8, wherein the renderer is further configured to render objects according to a graphics pipeline, and the specified threshold of the counter is equal to the maximum number of objects that can be held in the pipeline. 10. The graphics renderer as claimed in claim 8, wherein the fetch unit is further configured to perform the depth range test in dependence on a depth compare mode specifying the conditions an object is to satisfy to pass the depth range test, and the counter is configured to reset in response to a change in the depth compare mode. 11. A method for rendering a scene having an image plane divided into a set of one or more tiles each having an associated list stored in memory that contains object pointers for objects overlapping that tile, each object pointer containing information on a depth range of the object, the method comprising: reading at a fetch unit the object pointers for objects identified in the tile list for a tile being rendered; performing at the fetch unit a depth range test for each object identified for the tile being rendered to compare the depth range of the object with a received depth range for the tile; reading, from memory, parameter data only for objects that pass the depth range test; rendering each tile of the set of one or more tiles at a rendering processor using the object pointers and parameter data fetched by the fetch unit; and receiving at the fetch unit an updated depth range for the tile being rendered as objects in the tile list for that tile are processed by the rendering processor. 12. 
The method as claimed in claim 11, wherein the step of rendering each tile of the set of one or more tiles comprises calculating per-pixel depth values for each object processed as part of rendering the tile. 13. The method as claimed in claim 12, wherein the updated depth range for the tile being rendered received at the fetch unit is determined using the per-pixel depth values for objects that have been processed for that tile. 14. The method as claimed in claim 11, wherein the method further comprises: computing at a range generation unit a depth range for a tile being rendered by the rendering processor that represents the range of depth values for objects that have been processed for that tile; and feeding back computed depth ranges to the fetch unit for the tile being rendered by the rendering processor for use in depth range tests for objects identified in the tile list for that tile. 15. The method as claimed in claim 11, wherein the method further comprises: receiving data for a plurality of objects at a tiling unit; calculating the tiles overlapped by each object at the tiling unit; and deriving a depth range for each object at the tiling unit and writing for each object an object pointer into per-tile lists stored in a memory only for tiles that are overlapped by the object, each object pointer containing information on the depth range of the object. 16. The method as claimed in claim 11, wherein the step of performing the depth range test at the fetch unit comprises performing a single test only for each object identified for the tile being rendered to compare the depth range of the object with the received depth range for that tile. 17. The method as claimed in claim 11, wherein the step of reading at the fetch unit the object pointers further comprises reading the object pointers in the tile list for the tile being rendered by the rendering processor. 18. 
The method as claimed in claim 11, wherein the method further comprises incrementing a counter each time parameter data for an object is read from memory, and disabling avoidance of reading parameter data for objects that fail the depth range test at the fetch unit when the counter has a value below a specified threshold. 19. The method as claimed in claim 18, wherein the objects are rendered according to a graphics pipeline, and the specified threshold of the counter is equal to the maximum number of objects that can be held in the pipeline. 20. The method as claimed in claim 18, wherein the depth range test at the fetch unit is performed in dependence on a depth compare mode specifying the conditions an object is to satisfy to pass the depth range test, the method further comprising resetting the counter in response to a change in the depth compare mode.
2,600
10,360
10,360
15,209,519
2,612
An electronic device includes a housing, which may be deformable or may include hinges to allow a display, which is flexible, to be deformed by bending or other operations. One or more flex sensors detect when the electronic device is deformed at a deformation portion. One or more processors, which include an application processor, reconfigure a presentation of content along the flexible display in response to detecting deformation at the deformation portion, where the reconfiguring includes a content aspect ratio transition from a first predefined aspect ratio to a second predefined aspect ratio.
1. An electronic device, comprising: a deformable housing; a flexible display supported by the deformable housing; one or more flex sensors supported by the deformable housing, the one or more flex sensors detecting when the electronic device is deformed at a deformation portion; and one or more processors operable with the flexible display and the one or more flex sensors, the one or more processors reconfiguring a presentation of content along the flexible display in response to detecting deformation at the deformation portion, the reconfiguring comprising a content aspect ratio transition from a first aspect ratio to a second aspect ratio. 2. The electronic device of claim 1, the one or more processors presenting the content only to one side of the deformation portion in response to detecting the deformation. 3. The electronic device of claim 2, the first aspect ratio comprising a 4:3 aspect ratio, the second aspect ratio comprising a 16:9 aspect ratio. 4. The electronic device of claim 2, the one or more processors dividing a portion of the flexible display disposed to the one side of the deformation portion into a first subportion and a second subportion that is complementary to the first subportion, the one or more processors presenting the content in the first subportion at the second aspect ratio. 5. The electronic device of claim 4, the one or more processors further configuring the second subportion differently from the first subportion. 6. The electronic device of claim 5, the one or more processors presenting secondary content, different from the content, in the second subportion. 7. The electronic device of claim 6, the secondary content comprising locally stored content. 8. The electronic device of claim 7, the locally stored content comprising an image. 9. The electronic device of claim 6, the secondary content comprising static content. 10. 
The electronic device of claim 1, the one or more flex sensors determining a location along the deformable housing defining the deformation portion, the one or more processors adjusting the presentation of the content as a function of the location. 11. The electronic device of claim 1, further comprising a user interface, the user interface receiving a user input, the one or more processors adjusting the presentation of the content as a function of the user input. 12. An electronic device, comprising: a flexible display; one or more flex sensors, the one or more flex sensors detecting a deflection of the flexible display; and one or more processors operable with the one or more flex sensors, the one or more processors dividing a portion of the flexible display disposed to one side of the deflection into a first subportion and a second subportion that is complementary to the first subportion, presenting content in the first subportion with a predefined aspect ratio, and repurposing the second subportion for presentation of secondary content. 13. The electronic device of claim 12, the one or more flex sensors further detecting removal of the deflection, the one or more processors terminating presentation of the secondary content in response to detection of the removal of the deflection and transitioning the predefined aspect ratio to a second predefined aspect ratio. 14. The electronic device of claim 13, the predefined aspect ratio comprising a 16:9 aspect ratio, the second predefined aspect ratio comprising a 4:3 aspect ratio. 15. The electronic device of claim 12, the flexible display defined by a diagonal dimension of between seven and ten inches, inclusive. 16. 
A method, comprising: presenting content, with one or more processors, on a flexible display at a first aspect ratio; detecting, with one or more flex sensors, deformation of the flexible display by a bend; moving presentation of the content to a portion of the flexible display disposed to one side of the bend; subdividing the portion into a first subportion and a second subportion that is complementary to the first subportion; presenting the content in the first subportion of the flexible display at a second aspect ratio; and repurposing the second subportion of the flexible display for presentation of secondary content. 17. The method of claim 16, further comprising receiving user input selecting the secondary content. 18. The method of claim 16, further comprising retrieving, with a wireless communication circuit from a remote server, a content package for the content at the second aspect ratio. 19. The method of claim 16, further comprising: detecting, with the one or more flex sensors, removal of the bend; and again presenting the content on the flexible display at the first aspect ratio. 20. The method of claim 16, further comprising rotating the content by ninety degrees.
An electronic device includes a housing, which may be deformable or may include hinges to allow a display, which is flexible, to be deformed by bending or other operations. One or more flex sensors detect when the electronic device is deformed at a deformation portion. One or more processors, which include an application processor, reconfigure a presentation of content along the flexible display in response to detecting deformation at the deformation portion, where the reconfiguring includes a content aspect ratio transition from a first predefined aspect ratio to a second predefined aspect ratio.1. An electronic device, comprising: a deformable housing; a flexible display supported by the deformable housing; one or more flex sensors supported by the deformable housing, the one or more flex sensors detecting when the electronic device is deformed at a deformation portion; and one or more processors operable with the flexible display and the one or more flex sensors, the one or more processors reconfiguring a presentation of content along the flexible display in response to detecting deformation at the deformation portion, the reconfiguring comprising a content aspect ratio transition from a first aspect ratio to a second aspect ratio. 2. The electronic device of claim 1, the one or more processors presenting the content only to one side of the deformation portion in response to detecting the deformation. 3. The electronic device of claim 2, the first aspect ratio comprising a 4:3 aspect ratio, the second aspect ratio comprising a 16:9 aspect ratio. 4. The electronic device of claim 2, the one or more processors dividing a portion of the flexible display disposed to the one side of the deformation portion into a first subportion and a second subportion that is complementary to the first subportion, the one or more processors presenting the content in the first subportion at the second aspect ratio. 5. 
The electronic device of claim 4, the one or more processors further configuring the second subportion differently from the first subportion. 6. The electronic device of claim 5, the one or more processors presenting secondary content, different from the content, in the second subportion. 7. The electronic device of claim 6, the secondary content comprising locally stored content. 8. The electronic device of claim 7, the locally stored content comprising an image. 9. The electronic device of claim 6, the secondary content comprising static content. 10. The electronic device of claim 1, the one or more flex sensors determining a location along the deformable housing defining the deformation portion, the one or more processors adjusting the presentation of the content as a function of the location. 11. The electronic device of claim 1, further comprising a user interface, the user interface receiving a user input, the one or more processors adjusting the presentation of the content as a function of the user input. 12. An electronic device, comprising: a flexible display; one or more flex sensors, the one or more flex sensors detecting a deflection of the flexible display; and one or more processors operable with the one or more flex sensors, the one or more processors dividing a portion of the flexible display disposed to one side of the deflection into a first subportion and a second subportion that is complementary to the first subportion, presenting content in the first subportion with a predefined aspect ratio, and repurposing the second subportion for presentation of secondary content. 13. The electronic device of claim 12, the one or more flex sensors further detecting removal of the deflection, the one or more processors terminating presentation of the secondary content in response to detection of the removal of the deflection and transitioning the predefined aspect ratio to a second predefined aspect ratio. 14. 
The electronic device of claim 13, the predefined aspect ratio comprising a 16:9 aspect ratio, the second predefined aspect ratio comprising a 4:3 aspect ratio. 15. The electronic device of claim 12, the flexible display defined by a diagonal dimension of between seven and ten inches, inclusive. 16. A method, comprising: presenting content, with one or more processors, on a flexible display at a first aspect ratio; detecting, with one or more flex sensors, deformation of the flexible display by a bend; moving presentation of the content to a portion of the flexible display disposed to one side of the bend; subdividing the portion into a first subportion and a second subportion that is complementary to the first subportion; presenting the content in the first subportion of the flexible display at a second aspect ratio; and repurposing the second subportion of the flexible display for presentation of secondary content. 17. The method of claim 16, further comprising receiving user input selecting the secondary content. 18. The method of claim 16, further comprising retrieving, with a wireless communication circuit from a remote server, a content package for the content at the second aspect ratio. 19. The method of claim 16, further comprising: detecting, with the one or more flex sensors, removal of the bend; and again presenting the content on the flexible display at the first aspect ratio. 20. The method of claim 16, further comprising rotating the content by ninety degrees.
2,600
10,361
10,361
15,523,768
2,631
A transmission device of the disclosure includes: a generator unit that generates, on the basis of a control signal, a transmission symbol signal that indicates a sequence of transmission symbols; an output control unit that generates an output control signal on the basis of the transmission symbol signal; and a driver unit that generates, on the basis of the output control signal, a first output signal, a second output signal, and a third output signal. The generator unit generates the transmission symbol signal on the basis of the control signal, to allow the first output signal, the second output signal, and the third output signal to exchange signal patterns with one another.
1. A transmission device, comprising: a generator unit that generates, on a basis of a control signal, a transmission symbol signal that indicates a sequence of transmission symbols; an output control unit that generates an output control signal on a basis of the transmission symbol signal; and a driver unit that generates, on a basis of the output control signal, a first output signal, a second output signal, and a third output signal, the generator unit generating the transmission symbol signal on the basis of the control signal, to allow the first output signal, the second output signal, and the third output signal to exchange signal patterns with one another. 2. The transmission device according to claim 1, wherein the generator unit includes: a processor unit that generates, on a basis of a predetermined number of first symbol signals and on the basis of the control signal, second symbol signals that are equal in number to the predetermined number; and a serializer unit that serializes the predetermined number of the second symbol signals, to generate the transmission symbol signal. 3. The transmission device according to claim 2, wherein the predetermined number of the second symbol signals are respectively associated with the predetermined number of the first symbol signals, each of the predetermined number of the first symbol signals includes three signals, each of the predetermined number of the second symbol signals includes three signals, and the processor unit performs, on the basis of the control signal, rearrangement of the three signals included in one first symbol signal out of the predetermined number of the first symbol signals, or rearrangement of inverted signals of the three signals included in the relevant one first symbol signal, to generate one of the second symbol signals that is associated with the relevant one first symbol signal. 4. 
The transmission device according to claim 3, wherein the transmission symbol signal includes three signals, and the serializer unit serializes the predetermined number of the second symbol signals, with respect to each of the three signals included in the predetermined number of the second symbol signals, to generate each of the three signals included in the transmission symbol signal. 5. The transmission device according to claim 2, wherein the generator unit further includes a symbol generator unit that generates the predetermined number of the first symbol signals, on a basis of transition signals that are equal in number to the predetermined number, the transition signals each indicating a transition in the sequence of the transmission symbols. 6. The transmission device according to claim 1, wherein the generator unit includes a processor unit that generates the transmission symbol signal on a basis of a first symbol signal and on the basis of the control signal. 7. The transmission device according to claim 6, wherein the first symbol signal includes three signals, the transmission symbol signal includes three signals, and the generator unit performs, on the basis of the control signal, rearrangement of the three signals included in the first symbol signal, or rearrangement of inverted signals of the three signals included in the first symbol signal, to generate the transmission symbol signal. 8. The transmission device according to claim 6, wherein the generator unit further includes a serializer unit that serializes the predetermined number of second symbol signals, to generate the first symbol signal. 9. 
The transmission device according to claim 6, wherein the generator unit further includes: a serializer unit that serializes a predetermined number of first transition signals each of which indicates a transition in the sequence of the transmission symbols, to generate a second transition signal; and a symbol generator unit that generates the first symbol signal on a basis of the second transition signal. 10. The transmission device according to claim 1, wherein the generator unit includes: a symbol generator unit that generates, on a basis of a predetermined number of first transition signals each of which indicates a transition in the sequence of the transmission symbols, first symbol signals that are equal in number to the predetermined number, the symbol generator unit being configured to be able to set the transmission symbol at a head of the sequence; and a serializer unit that serializes the predetermined number of the first symbol signals, to generate the transmission symbol signal. 11. The transmission device according to claim 10, wherein the generator unit further includes a processor unit that generates the predetermined number of the first transition signals, on a basis of second transition signals that are equal in number to the predetermined number and on the basis of the control signal. 12. 
The transmission device according to claim 11, wherein the predetermined number of the first transition signals are respectively associated with the predetermined number of the second transition signals, each of the predetermined number of the second transition signals includes three signals, each of the predetermined number of the first transition signals includes three signals, and the processor unit controls, on the basis of the control signal, whether or not to invert one of the three signals included in one second transition signal out of the predetermined number of the second transition signals, to generate one of the first transition signals that is associated with the relevant one second transition signal. 13. The transmission device according to claim 1, wherein the generator unit includes a symbol generator unit that generates the transmission symbol signal, on a basis of a first transition signal that indicates a transition in the sequence of the transmission symbols, the symbol generator unit being configured to be able to set, on the basis of the control signal, the transmission symbol at a head of the sequence. 14. The transmission device according to claim 13, wherein the generator unit further includes: a serializer unit that serializes a predetermined number of second transition signals to generate a third transition signal; and a processor unit that generates the predetermined number of the first transition signals, on a basis of the third transition signal and on the basis of the control signal. 15. A transmission device, comprising: a symbol generator unit that generates a symbol signal on a basis of a transition signal that indicates a transition in a sequence of transmission symbols, the symbol generator unit being configured to be able to set the transmission symbol at a head of the sequence; and an output unit that generates, on a basis of the symbol signal, a first output signal, a second output signal, and a third output signal. 16. 
A reception device, comprising: a receiver unit that generates, on a basis of a first input signal, a second input signal, and a third input signal, a first symbol signal that indicates a sequence of symbols; and a processor unit that generates, as a second symbol signal, on a basis of a control signal and on a basis of the first symbol signal, the first symbol signal that would be generated on a condition that the first input signal, the second input signal, and the third input signal exchange signal patterns with one another. 17. The reception device according to claim 16, wherein the first symbol signal includes a first signal, a second signal, and a third signal, the second symbol signal includes a fourth signal, a fifth signal, and a sixth signal, the receiver unit generates the first signal on a basis of the first input signal and the second input signal, generates the second signal on a basis of the second input signal and the third input signal, and generates the third signal on a basis of the first input signal and the third input signal, and the processor unit performs, on the basis of the control signal, rearrangement of the first signal, the second signal, and the third signal, or rearrangement of an inverted signal of the first signal, an inverted signal of the second signal, and an inverted signal of the third signal, to generate the fourth signal, the fifth signal, and the sixth signal. 18. A communication system, comprising: a transmission device that generates, on a basis of a control signal, a plurality of sets of three output signals; and a reception device that receives the plurality of sets of the output signals, the transmission device being configured to be able to allow, on the basis of the control signal, the three output signals to exchange signal patterns with one another, in each of the plurality of sets of the output signals. 19. 
The communication system according to claim 18, wherein the reception device generates the control signal. 20. The communication system according to claim 18, wherein the transmission device is an image sensor, and the reception device is a processor that processes an image acquired by the image sensor.
A transmission device of the disclosure includes: a generator unit that generates, on the basis of a control signal, a transmission symbol signal that indicates a sequence of transmission symbols; an output control unit that generates an output control signal on the basis of the transmission symbol signal; and a driver unit that generates, on the basis of the output control signal, a first output signal, a second output signal, and a third output signal. The generator unit generates the transmission symbol signal on the basis of the control signal, to allow the first output signal, the second output signal, and the third output signal to exchange signal patterns with one another.1. A transmission device, comprising: a generator unit that generates, on a basis of a control signal, a transmission symbol signal that indicates a sequence of transmission symbols; an output control unit that generates an output control signal on a basis of the transmission symbol signal; and a driver unit that generates, on a basis of the output control signal, a first output signal, a second output signal, and a third output signal, the generator unit generating the transmission symbol signal on the basis of the control signal, to allow the first output signal, the second output signal, and the third output signal to exchange signal patterns with one another. 2. The transmission device according to claim 1, wherein the generator unit includes: a processor unit that generates, on a basis of a predetermined number of first symbol signals and on the basis of the control signal, second symbol signals that are equal in number to the predetermined number; and a serializer unit that serializes the predetermined number of the second symbol signals, to generate the transmission symbol signal. 3. 
The transmission device according to claim 2, wherein the predetermined number of the second symbol signals are respectively associated with the predetermined number of the first symbol signals, each of the predetermined number of the first symbol signals includes three signals, each of the predetermined number of the second symbol signals includes three signals, and the processor unit performs, on the basis of the control signal, rearrangement of the three signals included in one first symbol signal out of the predetermined number of the first symbol signals, or rearrangement of inverted signals of the three signals included in the relevant one first symbol signal, to generate one of the second symbol signals that is associated with the relevant one first symbol signal. 4. The transmission device according to claim 3, wherein the transmission symbol signal includes three signals, and the serializer unit serializes the predetermined number of the second symbol signals, with respect to each of the three signals included in the predetermined number of the second symbol signals, to generate each of the three signals included in the transmission symbol signal. 5. The transmission device according to claim 2, wherein the generator unit further includes a symbol generator unit that generates the predetermined number of the first symbol signals, on a basis of transition signals that are equal in number to the predetermined number, the transition signals each indicating a transition in the sequence of the transmission symbols. 6. The transmission device according to claim 1, wherein the generator unit includes a processor unit that generates the transmission symbol signal on a basis of a first symbol signal and on the basis of the control signal. 7. 
The transmission device according to claim 6, wherein the first symbol signal includes three signals, the transmission symbol signal includes three signals, and the generator unit performs, on the basis of the control signal, rearrangement of the three signals included in the first symbol signal, or rearrangement of inverted signals of the three signals included in the first symbol signal, to generate the transmission symbol signal. 8. The transmission device according to claim 6, wherein the generator unit further includes a serializer unit that serializes the predetermined number of second symbol signals, to generate the first symbol signal. 9. The transmission device according to claim 6, wherein the generator unit further includes: a serializer unit that serializes a predetermined number of first transition signals each of which indicates a transition in the sequence of the transmission symbols, to generate a second transition signal; and a symbol generator unit that generates the first symbol signal on a basis of the second transition signal. 10. The transmission device according to claim 1, wherein the generator unit includes: a symbol generator unit that generates, on a basis of a predetermined number of first transition signals each of which indicates a transition in the sequence of the transmission symbols, first symbol signals that are equal in number to the predetermined number, the symbol generator unit being configured to be able to set the transmission symbol at a head of the sequence; and a serializer unit that serializes the predetermined number of the first symbol signals, to generate the transmission symbol signal. 11. The transmission device according to claim 10, wherein the generator unit further includes a processor unit that generates the predetermined number of the first transition signals, on a basis of second transition signals that are equal in number to the predetermined number and on the basis of the control signal. 12. 
The transmission device according to claim 11, wherein the predetermined number of the first transition signals are respectively associated with the predetermined number of the second transition signals, each of the predetermined number of the second transition signals includes three signals, each of the predetermined number of the first transition signals includes three signals, and the processor unit controls, on the basis of the control signal, whether or not to invert one of the three signals included in one second transition signal out of the predetermined number of the second transition signals, to generate one of the first transition signals that is associated with the relevant one second transition signal. 13. The transmission device according to claim 1, wherein the generator unit includes a symbol generator unit that generates the transmission symbol signal, on a basis of a first transition signal that indicates a transition in the sequence of the transmission symbols, the symbol generator unit being configured to be able to set, on the basis of the control signal, the transmission symbol at a head of the sequence. 14. The transmission device according to claim 13, wherein the generator unit further includes: a serializer unit that serializes a predetermined number of second transition signals to generate a third transition signal; and a processor unit that generates the predetermined number of the first transition signals, on a basis of the third transition signal and on the basis of the control signal. 15. A transmission device, comprising: a symbol generator unit that generates a symbol signal on a basis of a transition signal that indicates a transition in a sequence of transmission symbols, the symbol generator unit being configured to be able to set the transmission symbol at a head of the sequence; and an output unit that generates, on a basis of the symbol signal, a first output signal, a second output signal, and a third output signal. 16. 
A reception device, comprising: a receiver unit that generates, on a basis of a first input signal, a second input signal, and a third input signal, a first symbol signal that indicates a sequence of symbols; and a processor unit that generates, as a second symbol signal, on a basis of a control signal and on a basis of the first symbol signal, the first symbol signal that would be generated on a condition that the first input signal, the second input signal, and the third input signal exchange signal patterns with one another. 17. The reception device according to claim 16, wherein the first symbol signal includes a first signal, a second signal, and a third signal, the second symbol signal includes a fourth signal, a fifth signal, and a sixth signal, the receiver unit generates the first signal on a basis of the first input signal and the second input signal, generates the second signal on a basis of the second input signal and the third input signal, and generates the third signal on a basis of the first input signal and the third input signal, and the processor unit performs, on the basis of the control signal, rearrangement of the first signal, the second signal, and the third signal, or rearrangement of an inverted signal of the first signal, an inverted signal of the second signal, and an inverted signal of the third signal, to generate the fourth signal, the fifth signal, and the sixth signal. 18. A communication system, comprising: a transmission device that generates, on a basis of a control signal, a plurality of sets of three output signals; and a reception device that receives the plurality of sets of the output signals, the transmission device being configured to be able to allow, on the basis of the control signal, the three output signals to exchange signal patterns with one another, in each of the plurality of sets of the output signals. 19. 
The communication system according to claim 18, wherein the reception device generates the control signal. 20. The communication system according to claim 18, wherein the transmission device is an image sensor, and the reception device is a processor that processes an image acquired by the image sensor.
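The transmission/reception claims above describe three output signals that can "exchange signal patterns with one another" under a control signal, with the receiver mapping the exchanged patterns back to the original symbol sequence. A minimal sketch of that idea, assuming the exchange is modeled as one of the six permutations of three lanes (the function names and the permutation encoding of the control signal are illustrative assumptions, not the patent's circuit):

```python
from itertools import permutations

# Six possible assignments of three symbol components to three output lanes.
PERMS = list(permutations(range(3)))

def transmit(symbols, control):
    """Place each (a, b, c) symbol onto the three outputs per the control signal."""
    perm = PERMS[control]
    return [tuple(sym[perm[i]] for i in range(3)) for sym in symbols]

def receive(outputs, control):
    """Invert the lane exchange to recover the original symbol sequence."""
    perm = PERMS[control]
    inv = [perm.index(i) for i in range(3)]  # inverse permutation
    return [tuple(out[inv[i]] for i in range(3)) for out in outputs]
```

A round trip through `transmit` and `receive` with the same control value recovers the input symbols for any of the six exchanges, which is the property the reception-device claims rely on.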
2,600
10,362
10,362
15,315,330
2,643
A wireless device 108 and a method therein for assisting in precoder selection for wireless communication with a Radio Node (RN) 106, and an RN 106 for performing precoder selection for wireless communication with a wireless device 108. The wireless device is configured with a set of precoders. The wireless device determines a subset of precoders out of the set of precoders, and transmits, to the RN, at least one Sounding Reference Signal (SRS) precoded with a respective at least one precoder comprised in the subset. The RN receives, from the wireless device, at least one SRS precoded with a respective at least one precoder comprised in a subset of precoders, and transmits, to the wireless device, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected based on the received at least one SRS.
1-62. (canceled) 63. A method performed by a wireless device for assisting in precoder selection for wireless communication with a Radio Node (RN), the wireless device being configured with a set of precoders unknown to the RN, the method comprising: determining a subset of precoders out of the set of precoders; and transmitting, to the RN, at least one sounding reference signal precoded with a respective at least one precoder comprised in the subset. 64. The method of claim 63, further comprising: receiving, from the RN, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the transmitted at least one sounding reference signal. 65. The method of claim 64, wherein the signal indicative of a selected precoder carries a scheduling grant for transmitting data to the RN, which scheduling grant is based on the transmitted at least one sounding reference signal, the method further comprising: transmitting data to the RN in accordance with the scheduling grant. 66. The method of claim 63, further comprising: receiving, from the RN, a request for transmission of the at least one sounding reference signal by the wireless device. 67. The method of claim 63, wherein transmitting the at least one sounding reference signal comprises: transmitting a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; and transmitting a second sounding reference signal precoded with the specific precoder in a second polarization. 68. The method of claim 63, further comprising: determining a number of precoders to be comprised in the subset; and transmitting, to the RN, information about the determined number of precoders. 69. The method of claim 63, further comprising: receiving, from the RN, information about a number of precoders to be comprised in the subset. 70. 
The method of claim 63, wherein determining the subset further comprises: determining the subset of precoders based on a measurement on at least one transmission from the RN to the wireless device. 71. The method of claim 63, wherein determining the subset further comprises: receiving at least one transmission from the RN; and determining the subset to comprise at least one precoder, which at least one precoder gives substantially the same transmit radiated energy pattern as the receive sensed energy pattern of the received at least one transmission. 72. The method of claim 63, wherein determining the subset further comprises: determining the subset based on knowledge about at least one preceding subset or based on a random-access procedure. 73. The method of claim 63, wherein determining the subset further comprises: determining a first precoder from the set of precoders to be included in the subset, which first precoder gives a received signal quality at the RN that is better than the received signal quality given by the other precoders of the set; and determining a second precoder to be included in the subset, which second precoder is orthogonal to the first precoder. 74. The method of claim 63, wherein determining the subset further comprises: determining that a precoder in the set of precoders should be excluded from the subset based on a measurement of an antenna impedance, a reflected antenna power and/or of a physical interaction. 75. The method of claim 63, further comprising: determining that an update of the subset of precoders is needed; and updating the subset in response to the determination. 76. The method of claim 75, wherein updating the subset further comprises: transmitting an update request for updating the subset of precoders to the RN in response to the wireless device determining that an update of the subset of precoders is needed. 77. 
The method of claim 76, wherein updating the subset further comprises: receiving an update response from the RN, wherein the updating of the subset is performed by the wireless device in response to receiving the update response from the RN. 78. The method of claim 75, wherein determining that an update of the subset is needed further comprises: receiving an update instruction from the RN instructing the wireless device to re-evaluate the subset of precoders; re-evaluating the subset; and wherein updating the subset further comprises: updating the subset based on the re-evaluation. 79. The method of claim 78, wherein determining that an update of the subset is needed further comprises: transmitting a re-evaluation complete message to the RN. 80. The method of claim 78, wherein the update instruction is received in a random access command or a handover command. 81. A method performed by a Radio Node (RN) for performing precoder selection for wireless communication with a wireless device, the wireless device being configured with a set of precoders unknown to the RN, the method comprising: receiving, from the wireless device, at least one sounding reference signal precoded with a respective at least one precoder comprised in a subset of precoders out of the set of precoders; and transmitting, to the wireless device, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the received at least one sounding reference signal. 82. The method of claim 81, wherein the signal indicative of the selected precoder carries a scheduling grant for transmitting data to the RN, which scheduling grant is based on a selected one of the received at least one sounding reference signal, the method comprising: receiving data from the wireless device in accordance with the scheduling grant. 83. 
The method of claim 81, further comprising: transmitting, to the wireless device, a request for transmission, by the wireless device, of the at least one sounding reference signal. 84. The method of claim 81, wherein receiving the at least one sounding reference signal comprises: receiving a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; receiving a second sounding reference signal precoded with the specific precoder in a second polarization; and determining a phase angle of the second polarization relative to the first polarization based on the received first and second sounding reference signals. 85. The method of claim 82, wherein the scheduling grant is based on the received signal quality of the received at least one sounding reference signal. 86. The method of claim 81, further comprising: receiving, from the wireless device, information about a number of precoders comprised in the subset. 87. The method of claim 81, further comprising: transmitting, to the wireless device, information about a number of precoders to be comprised in the subset. 88. The method of claim 81, further comprising: receiving, from the wireless device, an update request for updating the subset of precoders. 89. The method of claim 88, further comprising: transmitting, to the wireless device, an update response in response to the update request. 90. The method of claim 81, further comprising: transmitting, to the wireless device, an update instruction instructing the wireless device to re-evaluate the subset. 91. The method of claim 90, further comprising: receiving, from the wireless device, a re-evaluation complete message. 92. The method of claim 90, wherein the update instruction is transmitted in a random access command or a handover command. 93. 
A wireless device for assisting in precoder selection for wireless communication with a Radio Node (RN), the wireless device being configured with a set of precoders unknown to the RN, the wireless device being configured to: determine a subset of precoders out of the set of precoders; and transmit, to the RN, at least one sounding reference signal precoded with a respective at least one precoder comprised in the subset. 94. The wireless device of claim 93, the wireless device further being configured to: receive, from the RN, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the transmitted at least one sounding reference signal. 95. The wireless device of claim 94, the wireless device being configured to: receive, from the RN, the signal indicative of a selected precoder carrying a scheduling grant for transmitting data to the RN, which scheduling grant is based on the transmitted at least one sounding reference signal; and transmit data to the RN in accordance with the scheduling grant. 96. The wireless device of claim 93, the wireless device further being configured to: receive, from the RN, a request for transmission of the at least one sounding reference signal by the wireless device. 97. The wireless device of claim 93, wherein the wireless device being configured to transmit, to the RN, at least one sounding reference signal comprises the wireless device being further configured to: transmit a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; and transmit a second sounding reference signal precoded with the specific precoder in a second polarization. 98. 
The wireless device of claim 93, the wireless device further configured to: determine a number of precoders to be comprised in the subset; and transmit, to the RN, information about the determined number of precoders. 99. The wireless device of claim 93, the wireless device further configured to: receive, from the RN, information about a number of precoders to be comprised in the subset. 100. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine the subset of precoders based on a measurement on at least one transmission from the RN to the wireless device. 101. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: receive at least one transmission from the RN; and determine the subset to comprise at least one precoder, which at least one precoder gives substantially the same transmit radiated energy pattern as the receive sensed energy pattern of the received at least one transmission. 102. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine the subset based on knowledge about at least one preceding subset or based on a random-access procedure. 103. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine a first precoder from the set of precoders to be included in the subset, which first precoder gives a received signal quality at the RN that is better than the received signal quality given by the other precoders of the set; and determine a second precoder to be included in the subset, which second precoder is orthogonal to the first precoder. 104. 
The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine that a precoder in the set of precoders should be excluded from the subset based on a measurement of an antenna impedance, a reflected antenna power and/or of a physical interaction. 105. The wireless device of claim 93, the wireless device further configured to: determine that an update of the subset of precoders is needed; and update the subset in response to the determination. 106. The wireless device of claim 105, wherein the wireless device being configured to update the subset comprises the wireless device being further configured to: transmit an update request for updating the subset of precoders to the RN in response to the wireless device determining that an update of the subset of precoders is needed. 107. The wireless device of claim 106, wherein the wireless device being configured to update the subset comprises the wireless device being further configured to: receive an update response from the RN, wherein the subset is updated by the wireless device in response to receiving the update response from the RN. 108. The wireless device of claim 105, wherein the wireless device being configured to determine that an update of the subset is needed comprises the wireless device being further configured to: receive an update instruction from the RN instructing the wireless device to re-evaluate the subset; re-evaluate the subset; and being configured to update the subset by being further configured to: update the subset based on the re-evaluation. 109. The wireless device of claim 108, wherein the wireless device being configured to determine that an update of the subset is needed comprises the wireless device being further configured to: transmit a re-evaluation complete message to the RN. 110. 
The wireless device of claim 108, the wireless device being further configured to receive the update instruction in a random access command or a handover command. 111. A Radio Node (RN) for performing precoder selection for wireless communication with a wireless device, the wireless device being configured with a set of precoders unknown to the RN, the RN being configured to: receive, from the wireless device, at least one sounding reference signal precoded with a respective at least one precoder comprised in a subset of precoders out of the set of precoders; and transmit, to the wireless device, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the received at least one sounding reference signal. 112. The RN of claim 111, the RN being configured to: transmit, to the wireless device, the signal indicative of the selected precoder carrying a scheduling grant for transmitting data to the RN, which scheduling grant is based on a selected one of the received at least one sounding reference signal; and receive data from the wireless device in accordance with the scheduling grant. 113. The RN of claim 111, the RN being further configured to: transmit, to the wireless device, a request for transmission, by the wireless device, of the at least one sounding reference signal. 114. The RN of claim 111, the RN being configured to receive the at least one sounding reference signal by being further configured to: receive a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; receive a second sounding reference signal precoded with the specific precoder in a second polarization; and determine a phase angle of the second polarization relative to the first polarization based on the received first and second sounding reference signals. 115. 
The RN of claim 112, wherein the scheduling grant is based on the received signal quality of the received at least one sounding reference signal. 116. The RN of claim 111, the RN being further configured to: receive, from the wireless device, information about a number of precoders comprised in the subset. 117. The RN of claim 111, the RN being further configured to: transmit, to the wireless device, information about a number of precoders to be comprised in the subset. 118. The RN of claim 111, the RN being further configured to: receive, from the wireless device, an update request for updating the subset of precoders. 119. The RN of claim 118, the RN being further configured to: transmit, to the wireless device, an update response. 120. The RN of claim 111, the RN being further configured to: transmit, to the wireless device, an update instruction instructing the wireless device to re-evaluate the subset. 121. The RN of claim 120, the RN being further configured to: receive, from the wireless device, a re-evaluation complete message. 122. The RN of claim 120, the RN being further configured to transmit the update instruction in a random access command or a handover command.
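The precoder-selection protocol claimed above can be summarized as: the device keeps a codebook the RN never sees, picks a subset of it (e.g., from a downlink measurement), sounds each subset precoder with one SRS, and the RN feeds back only an index into that subset. A toy sketch under assumed names and a deliberately simplified quality model (inner-product magnitude, single receive antenna):

```python
# Illustrative sketch, not the patent's implementation: precoders and channels
# are short numeric tuples, and "received quality" is the squared magnitude of
# the precoded channel.

def srs_quality(channel, precoder):
    """Toy received-quality metric for one precoded SRS transmission."""
    return abs(sum(h * w for h, w in zip(channel, precoder))) ** 2

def device_pick_subset(codebook, downlink_estimate, n):
    """Device side: keep the n precoders that look best on a downlink measurement."""
    ranked = sorted(codebook, key=lambda w: srs_quality(downlink_estimate, w), reverse=True)
    return ranked[:n]

def rn_select(subset_srs_qualities):
    """RN side: signal back the index of the best-received SRS in the subset."""
    return max(range(len(subset_srs_qualities)), key=subset_srs_qualities.__getitem__)

def precoder_selection_round(codebook, channel, downlink_estimate, n=2):
    """One round of indirect selection: the RN never learns the full codebook."""
    subset = device_pick_subset(codebook, downlink_estimate, n)
    qualities = [srs_quality(channel, w) for w in subset]  # one SRS per precoder
    return subset[rn_select(qualities)]
```

Note the RN only ranks the SRS transmissions it receives; the selected precoder is "indirectly selected" because the feedback is an index into a subset whose contents only the device knows.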
A wireless device 108 and a method therein for assisting in precoder selection for wireless communication with a Radio Node (RN) 106, and an RN 106 for performing precoder selection for wireless communication with a wireless device 108. The wireless device is configured with a set of precoders. The wireless device determines a subset of precoders out of the set of precoders, and transmits, to the RN, at least one Sounding Reference Signal (SRS) precoded with a respective at least one precoder comprised in the subset. The RN receives, from the wireless device, at least one SRS precoded with a respective at least one precoder comprised in a subset of precoders, and transmits, to the wireless device, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected based on the received at least one SRS. 1-62. (canceled) 63. A method performed by a wireless device for assisting in precoder selection for wireless communication with a Radio Node (RN), the wireless device being configured with a set of precoders unknown to the RN, the method comprising: determining a subset of precoders out of the set of precoders; and transmitting, to the RN, at least one sounding reference signal precoded with a respective at least one precoder comprised in the subset. 64. The method of claim 63, further comprising: receiving, from the RN, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the transmitted at least one sounding reference signal. 65. The method of claim 64, wherein the signal indicative of a selected precoder carries a scheduling grant for transmitting data to the RN, which scheduling grant is based on the transmitted at least one sounding reference signal, the method further comprising: transmitting data to the RN in accordance with the scheduling grant. 66. 
The method of claim 63, further comprising: receiving, from the RN, a request for transmission of the at least one sounding reference signal by the wireless device. 67. The method of claim 63, wherein transmitting the at least one sounding reference signal comprises: transmitting a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; and transmitting a second sounding reference signal precoded with the specific precoder in a second polarization. 68. The method of claim 63, further comprising: determining a number of precoders to be comprised in the subset; and transmitting, to the RN, information about the determined number of precoders. 69. The method of claim 63, further comprising: receiving, from the RN, information about a number of precoders to be comprised in the subset. 70. The method of claim 63, wherein determining the subset further comprises: determining the subset of precoders based on a measurement on at least one transmission from the RN to the wireless device. 71. The method of claim 63, wherein determining the subset further comprises: receiving at least one transmission from the RN; and determining the subset to comprise at least one precoder, which at least one precoder gives substantially the same transmit radiated energy pattern as the receive sensed energy pattern of the received at least one transmission. 72. The method of claim 63, wherein determining the subset further comprises: determining the subset based on knowledge about at least one preceding subset or based on a random-access procedure. 73. 
The method of claim 63, wherein determining the subset further comprises: determining a first precoder from the set of precoders to be included in the subset, which first precoder gives a received signal quality at the RN that is better than the received signal quality given by the other precoders of the set; and determining a second precoder to be included in the subset, which second precoder is orthogonal to the first precoder. 74. The method of claim 63, wherein determining the subset further comprises: determining that a precoder in the set of precoders should be excluded from the subset based on a measurement of an antenna impedance, a reflected antenna power and/or of a physical interaction. 75. The method of claim 63, further comprising: determining that an update of the subset of precoders is needed; and updating the subset in response to the determination. 76. The method of claim 75, wherein updating the subset further comprises: transmitting an update request for updating the subset of precoders to the RN in response to the wireless device determining that an update of the subset of precoders is needed. 77. The method of claim 76, wherein updating the subset further comprises: receiving an update response from the RN, wherein the updating of the subset is performed by the wireless device in response to receiving the update response from the RN. 78. The method of claim 75, wherein determining that an update of the subset is needed further comprises: receiving an update instruction from the RN instructing the wireless device to re-evaluate the subset of precoders; re-evaluating the subset; and wherein updating the subset further comprises: updating the subset based on the re-evaluation. 79. The method of claim 78, wherein determining that an update of the subset is needed further comprises: transmitting a re-evaluation complete message to the RN. 80. 
The method of claim 78, wherein the update instruction is received in a random access command or a handover command. 81. A method performed by a Radio Node (RN) for performing precoder selection for wireless communication with a wireless device, the wireless device being configured with a set of precoders unknown to the RN, the method comprising: receiving, from the wireless device, at least one sounding reference signal precoded with a respective at least one precoder comprised in a subset of precoders out of the set of precoders; and transmitting, to the wireless device, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the received at least one sounding reference signal. 82. The method of claim 81, wherein the signal indicative of the selected precoder carries a scheduling grant for transmitting data to the RN, which scheduling grant is based on a selected one of the received at least one sounding reference signal, the method comprising: receiving data from the wireless device in accordance with the scheduling grant. 83. The method of claim 81, further comprising: transmitting, to the wireless device, a request for transmission, by the wireless device, of the at least one sounding reference signal. 84. The method of claim 81, wherein receiving the at least one sounding reference signal comprises: receiving a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; receiving a second sounding reference signal precoded with the specific precoder in a second polarization; and determining a phase angle of the second polarization relative to the first polarization based on the received first and second sounding reference signals. 85. 
The method of claim 82, wherein the scheduling grant is based on the received signal quality of the received at least one sounding reference signal. 86. The method of claim 81, further comprising: receiving, from the wireless device, information about a number of precoders comprised in the subset. 87. The method of claim 81, further comprising: transmitting, to the wireless device, information about a number of precoders to be comprised in the subset. 88. The method of claim 81, further comprising: receiving, from the wireless device, an update request for updating the subset of precoders. 89. The method of claim 88, further comprising: transmitting, to the wireless device, an update response in response to the update request. 90. The method of claim 81, further comprising: transmitting, to the wireless device, an update instruction instructing the wireless device to re-evaluate the subset. 91. The method of claim 90, further comprising: receiving, from the wireless device, a re-evaluation complete message. 92. The method of claim 90, wherein the update instruction is transmitted in a random access command or a handover command. 93. A wireless device for assisting in precoder selection for wireless communication with a Radio Node (RN), the wireless device being configured with a set of precoders unknown to the RN, the wireless device being configured to: determine a subset of precoders out of the set of precoders; and transmit, to the RN, at least one sounding reference signal precoded with a respective at least one precoder comprised in the subset. 94. The wireless device of claim 93, the wireless device further being configured to: receive, from the RN, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the transmitted at least one sounding reference signal. 95. 
The wireless device of claim 94, the wireless device being configured to: receive, from the RN, the signal indicative of a selected precoder carrying a scheduling grant for transmitting data to the RN, which scheduling grant is based on the transmitted at least one sounding reference signal; and transmit data to the RN in accordance with the scheduling grant. 96. The wireless device of claim 93, the wireless device further being configured to: receive, from the RN, a request for transmission of the at least one sounding reference signal by the wireless device. 97. The wireless device of claim 93, wherein the wireless device being configured to transmit, to the RN, at least one sounding reference signal comprises the wireless device being further configured to: transmit a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; and transmit a second sounding reference signal precoded with the specific precoder in a second polarization. 98. The wireless device of claim 93, the wireless device further configured to: determine a number of precoders to be comprised in the subset; and transmit, to the RN, information about the determined number of precoders. 99. The wireless device of claim 93, the wireless device further configured to: receive, from the RN, information about a number of precoders to be comprised in the subset. 100. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine the subset of precoders based on a measurement on at least one transmission from the RN to the wireless device. 101. 
The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: receive at least one transmission from the RN; and determine the subset to comprise at least one precoder, which at least one precoder gives substantially the same transmit radiated energy pattern as the receive sensed energy pattern of the received at least one transmission. 102. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine the subset based on knowledge about at least one preceding subset or based on a random-access procedure. 103. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine a first precoder from the set of precoders to be included in the subset, which first precoder gives a received signal quality at the RN that is better than the received signal quality given by the other precoders of the set; and determine a second precoder to be included in the subset, which second precoder is orthogonal to the first precoder. 104. The wireless device of claim 93, wherein the wireless device being configured to determine a subset of precoders comprises the wireless device being further configured to: determine that a precoder in the set of precoders should be excluded from the subset based on a measurement of an antenna impedance, a reflected antenna power and/or of a physical interaction. 105. The wireless device of claim 93, the wireless device further configured to: determine that an update of the subset of precoders is needed; and update the subset in response to the determination. 106. 
The wireless device of claim 105, wherein the wireless device being configured to update the subset comprises the wireless device being further configured to: transmit an update request for updating the subset of precoders to the RN in response to the wireless device determining that an update of the subset of precoders is needed. 107. The wireless device of claim 106, wherein the wireless device being configured to update the subset comprises the wireless device being further configured to: receive an update response from the RN, wherein the subset is updated by the wireless device in response to receiving the update response from the RN. 108. The wireless device of claim 105, wherein the wireless device being configured to determine that an update of the subset is needed comprises the wireless device being further configured to: receive an update instruction from the RN instructing the wireless device to re-evaluate the subset; re-evaluate the subset; and being configured to update the subset by being further configured to: update the subset based on the re-evaluation. 109. The wireless device of claim 108, wherein the wireless device being configured to determine that an update of the subset is needed comprises the wireless device being further configured to: transmit a re-evaluation complete message to the RN. 110. 
A Radio Node (RN) for performing precoder selection for wireless communication with a wireless device, the wireless device being configured with a set of precoders unknown to the RN, the RN being configured to: receive, from the wireless device, at least one sounding reference signal precoded with a respective at least one precoder comprised in a subset of precoders out of the set of precoders; and transmit, to the wireless device, a signal indicative of a selected precoder to be used for a transmission to the RN, wherein the selected precoder is indirectly selected from the subset of precoders based on the received at least one sounding reference signal. 112. The RN of claim 111, the RN being configured to: transmit, to the wireless device, the signal indicative of the selected precoder carrying a scheduling grant for transmitting data to the RN, which scheduling grant is based on a selected one of the received at least one sounding reference signal; and receive data from the wireless device in accordance with the scheduling grant. 113. The RN of claim 111, the RN being further configured to: transmit, to the wireless device, a request for transmission, by the wireless device, of the at least one sounding reference signal. 114. The RN of claim 111, the RN being configured to receive the at least one sounding reference signal by being further configured to: receive a first sounding reference signal precoded with a specific precoder out of the respective at least one precoder in a first polarization; receive a second sounding reference signal precoded with the specific precoder in a second polarization; and determine a phase angle of the second polarization relative to the first polarization based on the received first and second sounding reference signals. 115. The RN of claim 112, wherein the scheduling grant is based on the received signal quality of the received at least one sounding reference signal. 116. 
The RN of claim 111, the RN being further configured to: receive, from the wireless device, information about a number of precoders comprised in the subset. 117. The RN of claim 111, the RN being further configured to: transmit, to the wireless device, information about a number of precoders to be comprised in the subset. 118. The RN of claim 111, the RN being further configured to: receive, from the wireless device, an update request for updating the subset of precoders. 119. The RN of claim 118, the RN being further configured to: transmit, to the wireless device, an update response. 120. The RN of claim 111, the RN being further configured to: transmit, to the wireless device, an update instruction instructing the wireless device to re-evaluate the subset. 121. The RN of claim 120, the RN being further configured to: receive, from the wireless device, a re-evaluation complete message. 122. The RN of claim 120, the RN being further configured to transmit the update instruction in a random access command or a handover command.
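Claim 103 describes the subset-selection step concretely: the device picks a first precoder giving the best received signal quality at the RN, then a second precoder orthogonal to the first. The sketch below illustrates that selection logic only; the real-valued precoders, the `channel_gains` vector, and the power-over-noise quality proxy are all illustrative assumptions standing in for the quality the RN would actually measure from sounding reference signals.

```python
def choose_precoder_subset(precoders, channel_gains, noise=1e-3):
    """Pick a first precoder maximizing a received-signal-quality proxy,
    then add a second precoder orthogonal to it (cf. claim 103).
    Precoders are real-valued weight vectors here for simplicity."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # Quality proxy: received power (h . w)^2 over a fixed noise floor.
    qualities = [dot(channel_gains, w) ** 2 / noise for w in precoders]
    first = max(range(len(precoders)), key=lambda i: qualities[i])
    subset = [precoders[first]]
    # Second precoder: any remaining candidate orthogonal to the first.
    for i, w in enumerate(precoders):
        if i != first and abs(dot(precoders[first], w)) < 1e-9:
            subset.append(w)
            break
    return subset
```

With orthonormal candidate precoders, the subset reduces to the best-aligned basis vector plus one orthogonal companion, which the device would then use to precode its sounding reference signals toward the RN.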
2,600
10,363
10,363
15,361,469
2,696
A method for parking space control including the steps of: a) flying a drone at regular intervals along a predefined path that covers an area of a parking lot; b) scanning and registering the parking lot; c) using, by software, features detection techniques as a part of image analysis algorithms; d) scanning and searching data from the parking lot for similarities within a given time period to form an analysis; e) determining, by the analysis, two outcomes for a specific parking lot including either a new vehicle is parked or an old vehicle is still located at the same parking lot; f) registering new vehicles at the time of detection; g) registering and checking longer parked vehicles' stay time for violation; h) determining if there is a violation; i) flagging and marking the vehicle(s) on a smart phone or tablet for an officer to view, locate, and ticket, if the answer to step h is yes; j) determining if the parking time exceeds the one allowed in the area of the parking lot; k) flagging ticket alerts on the program and emailing to the supervisor for evaluation and printing, if answer to step j is yes; l) determining if a vehicle can be exempt from the rules; m) deciding, by the supervisor, to generate a ticket with a click of a button, if answer to step l is no; n) creating, by the supervisor, the ticket; o) walking to the vehicle in order to assign the ticket thereto; and p) repeating cycle after an hour or as approved.
1. A method for parking space control, comprising the steps of: a) flying a drone at regular intervals along a predefined path that covers an area of a parking lot; b) scanning and registering the parking lot; c) using, by software, features detection techniques as a part of image analysis algorithms; d) scanning and searching data from the parking lot for similarities within a given time period to form an analysis; e) determining, by the analysis, two outcomes for a specific parking lot including one of a new vehicle is parked and an old vehicle is still located at a same parking lot; f) registering new vehicles at the time of detection; g) registering and checking longer parked vehicles' stay time for violation; h) determining if there is a violation; i) flagging and marking the vehicle on a smart phone or tablet for an officer to view, locate, and ticket, if the answer to step h is yes; j) determining if the parking time exceeds that allowed in the area of the parking lot; k) flagging ticket alerts on the program and emailing to a supervisor for evaluation and printing, if answer to step j is yes; l) determining if a vehicle can be exempt from the rules; m) deciding, by the supervisor, to generate a ticket with a click of a button, if answer to step l is no; n) creating, by the supervisor, the ticket; o) walking to the vehicle in order to assign the ticket thereto; and p) repeating cycle after one of an hour and as approved. 2. The method of claim 1, wherein the vehicles that can be exempt from the rules include personnel cars, other types of parked vehicles, and vehicles of vendors. 3. A method for controlling a parking lot, comprising the steps of: a) inputting data; b) image detecting analyzing with feature detection, key point detector, and descriptor extractor algorithm; c) extracting key points; d) decision analyzing; and e) defining results and channeling analysis towards an application in parking areas' surveillance methodologies. 4. 
The method of claim 3, wherein said inputting step includes inputting three images at a beginning of each step of an analysis at a specific waypoint. 5. The method of claim 3, wherein said inputting step includes inputting two images if a drone is arriving at a waypoint for the first time during a monitoring flight. 6. The method of claim 3, wherein said extracting step includes extracting areas with specific location and magnitude in image data defining features in each image generated during image detecting. 7. The method of claim 3, wherein said analyzing step includes processing and interpreting results for all three images. 8. The method of claim 3, wherein said analyzing step includes comparing distances between key points for the three images in order to define significant differences between them. 9. The method of claim 3, wherein said analyzing step includes removing noise generated by the environment and returning key points having only significant features. 10. The method of claim 3, wherein said defining step includes defining results and channeling analysis towards the application in parking areas' surveillance methodologies. 11. The method of claim 3, wherein the results include one of: a) no differences detected between the three images, and as such, a vehicle is not parked and the parking lot is empty; b) differences detected between all of the three images, and as such, the parking lot is either vacated or there is an arrival of a new vehicle; and c) differences detected between an original image, but not between a current image and a previous image, and as such, there is an old vehicle that had been detected last time and still occupies the parking lot, so thereby do one of issue a ticket and note an update on duration of parking since registration. 12. 
The method of claim 11, wherein the original image is taken during a set-up flight when the parking lot is empty, and as such, is used as the base state, and as such, is considered a normal state of the area with no object of interest. 13. The method of claim 11, wherein the previous image is taken during a previous flight of the drone for the waypoint of interest. 14. The method of claim 11, wherein the current image is taken during a current flight of the drone for the waypoint of interest. 15. A system for controlling a parking lot, comprising: a) a managing device; b) an image capture device; c) a storage device; and d) a user device; wherein said user device is linkable together by network communication links. 16. The system of claim 15, wherein said managing device includes a controller; and wherein said controller is part of, or associated with, said managing device. 17. The system of claim 16, wherein said controller is adapted for controlling an analysis of video data received by an UAV camera. 18. The system of claim 17, wherein said controller includes a processor; and wherein said processor controls overall operation of said managing device by execution of processing instructions that are stored in a memory connected to said processor. 19. The system of claim 18, wherein said memory represents any type of tangible computer readable medium including at least one of random access memory (RAM), read only memory (ROM), magnetic disk or tape, optical disk, flash memory, and holographic memory. 20. The system of claim 18, wherein said memory includes a combination of random access memory and read only memory. 21. The system of claim 18, wherein said processor includes at least one of a single-core processor, a dual-core processor, a multiple-core processor, a digital processor and cooperating math coprocessor, and a digital controller. 22. The system of claim 15, wherein said managing device is a networked device. 23. 
The system of claim 22, wherein said networked device of said managing device is at least one of a vehicle capture module and a user device. 24. The system of claim 22, wherein said networked device of said managing device is at least one of a central server, a networked computer, and distributed throughout said network. 25. The system of claim 18, wherein said processor, according to instructions contained in said memory, performs vehicle detection, matching phases, and changes in color, position, size, and angle of position. 26. The system of claim 18, wherein said memory stores a video buffering module; and wherein said video buffering module of said memory receives a video of a select parking area that is captured by a video capture device. 27. The system of claim 26, wherein said memory stores an image buffering module; and wherein said image buffering module of said memory receives images provided by said video capture device. 28. The system of claim 18, wherein said memory stores a vehicle matching module; and wherein said vehicle matching module of said memory matches a vehicle with a vehicle in image data. 29. The system of claim 18, wherein said memory stores a stationary vehicle detection module; and wherein said stationary vehicle detection module of said memory detects objects and/or vehicles within a field of view of said UAV camera. 30. The system of claim 18, wherein said memory stores a timing module; and wherein said timing module of said memory initiates a timer for measuring a duration that a detected vehicle remains parked in a space. 31. The system of claim 18, wherein said memory stores a violation detection module; and wherein said violation detection module of said memory checks if parking time exceeds that allowed in an area, and if so, a ticket alert is sent to a supervisor for evaluation and printing. 32. 
The system of claim 31, wherein said ticket alert is stored in at least one of a single module and as multiple modules embodied in different devices. 33. The system of claim 18, wherein a UAV programming module encompasses any collection of, or set of, software instructions executable by said managing device or another digital system so as to configure said processor or said another digital system to perform a task that is an intent of said software instructions. 34. The system of claim 33, wherein said software instructions are stored in a storage medium including at least one of a RAM, a hard disk, and an optical disk. 35. The system of claim 33, wherein said software instructions encompass firmware that is software stored on a ROM. 36. The system of claim 33, wherein said software instructions are organized in various ways, including software components organized as libraries, Internet-based programs stored on a remote server, source code, interpretive code, object code, and directly executable code. 37. The system of claim 33, wherein said software instructions invoke a system-level code or calls to other software residing on a server or other location to perform certain functions. 38. The system of claim 15, wherein various components of said managing device are connected by a bus. 39. The system of claim 15, wherein said managing device includes at least one communication interface. 40. The system of claim 39, wherein said at least one communication interface includes network interfaces for communicating with external devices. 41. The system of claim 39, wherein said at least one communication interface includes at least one of a modem, a router, a cable, and an Ethernet port. 42. The system of claim 39, wherein said at least one communication interface is adapted to receive video and/or image data as input. 43. The system of claim 15, wherein said managing device includes at least one special purpose or general purpose computing device. 44. 
The system of claim 43, wherein said at least one special purpose or general purpose computing device is a server computer or digital front end (DFE), or any other computing device capable of executing instructions. 45. The system of claim 15, wherein said managing device is connected to an image source for inputting and/or receiving video data and/or image data in electronic format. 46. The system of claim 45, wherein said image source includes an image capture device. 47. The system of claim 46, wherein said image capture device of said image source includes at least one camera installed on a UAV that captures image and video data from the parking area and/or from a parking area of interest. 48. The system of claim 47, wherein said UAV flies at regular intervals along a predefined path that covers the area. 49. The system of claim 17, wherein said UAV camera includes near infrared (NIR) capabilities at a low-end portion of a near-infrared spectrum (700 nm-1000 nm) for performing at night in parking areas without external sources of illumination.
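Claim 11 reduces the three-image comparison (empty-lot original, previous flight, current flight) to three outcomes. The sketch below illustrates that decision logic over key points represented as (x, y) tuples; the tolerance, the 50% match threshold, and the key-point representation are illustrative assumptions, not the patented feature-detection pipeline (which would use a detector/descriptor algorithm on real images).

```python
def classify_parking_state(original_kps, previous_kps, current_kps, tol=5.0):
    """Three-way decision over key points from the original (empty-lot),
    previous, and current images (cf. claim 11)."""
    def differs(a, b):
        # Count points in `a` that have a close counterpart in `b`;
        # the images "differ" when fewer than half the points match.
        matched = sum(
            any(abs(ax - bx) <= tol and abs(ay - by) <= tol for bx, by in b)
            for ax, ay in a
        )
        return matched < 0.5 * max(len(a), 1)

    cur_vs_orig = differs(current_kps, original_kps)
    cur_vs_prev = differs(current_kps, previous_kps)
    if not cur_vs_orig and not cur_vs_prev:
        return "empty"            # outcome (a): lot matches its base state
    if cur_vs_orig and cur_vs_prev:
        return "new_or_vacated"   # outcome (b): arrival or departure
    return "still_parked"         # outcome (c): same vehicle; update duration
```

The "still_parked" branch is the one that feeds the timing and violation modules of the system claims: the vehicle's registered duration is extended and checked against the allowed parking time.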
A method for parking space control including the steps of: a) flying a drone at regular intervals along a predefined path that covers an area of a parking lot; b) scanning and registering the parking lot; c) using, by software, features detection techniques as a part of image analysis algorithms; d) scanning and searching data from the parking lot for similarities within a given time period to form an analysis; e) determining, by the analysis, two outcomes for a specific parking lot including either a new vehicle is parked or an old vehicle is still located at the same parking lot; f) registering new vehicles at the time of detection; g) registering and checking longer parked vehicles' stay time for violation; h) determining if there is a violation; i) flagging and marking the vehicle(s) on a smart phone or tablet for an officer to view, locate, and ticket, if the answer to step h is yes; j) determining if the parking time exceeds the one allowed in the area of the parking lot; k) flagging ticket alerts on the program and emailing to the supervisor for evaluation and printing, if answer to step j is yes; l) determining if a vehicle can be exempt from the rules; m) deciding, by the supervisor, to generate a ticket with a click of a button, if answer to step l is no; n) creating, by the supervisor, the ticket; o) walking to the vehicle in order to assign the ticket thereto; and p) repeating cycle after an hour or as approved.1. 
A method for parking space control, comprising the steps of: a) flying a drone at regular intervals along a predefined path that covers an area of a parking lot; b) scanning and registering the parking lot; c) using, by software, features detection techniques as a part of image analysis algorithms; d) scanning and searching data from the parking lot for similarities within a given time period to form an analysis; e) determining, by the analysis, two outcomes for a specific parking lot including one of a new vehicle is parked and an old vehicle is still located at a same parking lot; f) registering new vehicles at the time of detection; g) registering and checking longer parked vehicles' stay time for violation; h) determining if there is a violation; i) flagging and marking the vehicle on a smart phone or tablet for an officer to view, locate, and ticket, if the answer to step h is yes; j) determining if the parking time exceeds that allowed in the area of the parking lot; k) flagging ticket alerts on the program and emailing to a supervisor for evaluation and printing, if answer to step j is yes; l) determining if a vehicle can be exempt from the rules; m) deciding, by the supervisor, to generate a ticket with a click of a button, if answer to step l is no; n) creating, by the supervisor, the ticket; o) walking to the vehicle in order to assign the ticket thereto; and p) repeating cycle after one of an hour and as approved. 2. The method of claim 1, wherein the vehicles that can be exempt from the rules include personnel cars, other types of parked vehicles, and vehicles of vendors. 3. A method for controlling a parking lot, comprising the steps of: a) inputting data; b) image detecting analyzing with feature detection, key point detector, and descriptor extractor algorithm; c) extracting key points; d) decision analyzing; and e) defining results and channeling analysis towards an application in parking areas' surveillance methodologies. 4. 
The method of claim 3, wherein said inputting step includes inputting three images at a beginning of each step of an analysis at a specific waypoint. 5. The method of claim 3, wherein said inputting step includes inputting two images if a drone is arriving at a waypoint for the first time during a monitoring flight. 6. The method of claim 3, wherein said extracting step includes extracting areas with specific location and magnitude in image data defining features in each image generated during image detecting. 7. The method of claim 3, wherein said analyzing step includes processing and interpreting results for all three images. 8. The method of claim 3, wherein said analyzing step includes comparing distances between key points for the three images in order to define significant differences between them. 9. The method of claim 3, wherein said analyzing step includes removing noise generated by the environment and returning key points having only significant features. 10. The method of claim 3, wherein said defining step includes defining results and channeling analysis towards the application in parking areas' surveillance methodologies. 11. The method of claim 3, wherein the results include one of: a) no differences detected between the three images, and as such, a vehicle is not parked and the parking lot is empty; b) differences detected between all of the three images, and as such, the parking lot is either vacated or there is an arrival of a new vehicle; and c) differences detected between an original image, but not between a current image and a previous image, and as such, there is an old vehicle that had been detected last time and still occupies the parking lot, so thereby do one of issue a ticket and note an update on duration of parking since registration. 12. 
The method of claim 11, wherein the original image is taken during a set-up flight when the parking lot is empty, and as such, is used as the base state, and as such, is considered a normal state of the area with no object of interest. 13. The method of claim 11, wherein the previous image is taken during a previous flight of the drone for the waypoint of interest. 14. The method of claim 11, wherein the current image is taken during a current flight of the drone for the waypoint of interest. 15. A system for controlling a parking lot, comprising: a) a managing device; b) an image capture device; c) a storage device; and d) a user device; wherein said user device is linkable together by network communication links. 16. The system of claim 15, wherein said managing device includes a controller; and wherein said controller is part of, or associated with, said managing device. 17. The system of claim 16, wherein said controller is adapted for controlling an analysis of video data received by an UAV camera. 18. The system of claim 17, wherein said controller includes a processor; and wherein said processor controls overall operation of said managing device by execution of processing instructions that are stored in a memory connected to said processor. 19. The system of claim 18, wherein said memory represents any type of tangible computer readable medium including at least one of random access memory (RAM), read only memory (ROM), magnetic disk or tape, optical disk, flash memory, and holographic memory. 20. The system of claim 18, wherein said memory includes a combination of random access memory and read only memory. 21. The system of claim 18, wherein said processor includes at least one of a single-core processor, a dual-core processor, a multiple-core processor, a digital processor and cooperating math coprocessor, and a digital controller. 22. The system of claim 15, wherein said managing device is a networked device. 23. 
The system of claim 22, wherein said networked device of said managing device is at least one of a vehicle capture module and a user device. 24. The system of claim 22, wherein said networked device of said managing device is at least one of a central server, a networked computer, and distributed throughout said network. 25. The system of claim 18, wherein said processor, according to instructions contained in said memory, performs vehicle detection, matching phases, and changes in color, position, size, and angle of position. 26. The system of claim 18, wherein said memory stores a video buffering module; and wherein said video buffering module of said memory receives a video of a select parking area that is captured by a video capture device. 27. The system of claim 26, wherein said memory stores an image buffering module; and wherein said image buffering module of said memory receives images provided by said video capture device. 28. The system of claim 18, wherein said memory stores a vehicle matching module; and wherein said vehicle matching module of said memory matches a vehicle with a vehicle in image data. 29. The system of claim 18, wherein said memory stores a stationary vehicle detection module; and wherein said stationary vehicle detection module of said memory detects objects and/or vehicles within a field of view of said UAV camera. 30. The system of claim 18, wherein said memory stores a timing module; and wherein said timing module of said memory initiates a timer for measuring a duration that a detected vehicle remains parked in a space. 31. The system of claim 18, wherein said memory stores a violation detection module; and wherein said violation detection module of said memory checks if parking time exceeds that allowed in an area, and if so, a ticket alert is sent to a supervisor for evaluation and printing. 32. 
The system of claim 31, wherein said ticket alert is stored in at least one of a single module and as multiple modules embodied in different devices. 33. The system of claim 18, wherein a UAV programming module encompasses any collection of, or set of, software instructions executable by said managing device or another digital system so as to configure said processor or said another digital system to perform a task that is an intent of said software instructions. 34. The system of claim 33, wherein said software instructions are stored in a storage medium including at least one of a RAM, a hard disk, and an optical disk. 35. The system of claim 33, wherein said software instructions encompass firmware that is software stored on a ROM. 36. The system of claim 33, wherein said software instructions are organized in various ways, including software components organized as libraries, Internet-based programs stored on a remote server, source code, interpretive code, object code, and directly executable code. 37. The system of claim 33, wherein said software instructions invoke a system-level code or calls to other software residing on a server or other location to perform certain functions. 38. The system of claim 15, wherein various components of said managing device are connected by a bus. 39. The system of claim 15, wherein said managing device includes at least one communication interface. 40. The system of claim 39, wherein said at least one communication interface includes network interfaces for communicating with external devices. 41. The system of claim 39, wherein said at least one communication interface includes at least one of a modem, a router, a cable, and an Ethernet port. 42. The system of claim 39, wherein said at least one communication interface is adapted to receive video and/or image data as input. 43. The system of claim 15, wherein said managing device includes at least one special purpose or general purpose computing device. 44. 
The system of claim 43, wherein said at least one special purpose or general purpose computing device is a server computer or digital front end (DFE), or any other computing device capable of executing instructions. 45. The system of claim 15, wherein said managing device is connected to an image source for inputting and/or receiving video data and/or image data in electronic format. 46. The system of claim 45, wherein said image source includes an image capture device. 47. The system of claim 46, wherein said image capture device of said image source includes at least one camera installed on a UAV that captures image and video data from the parking area and/or from a parking area of interest. 48. The system of claim 47, wherein said UAV flies at regular intervals along a predefined path that covers the area. 49. The system of claim 17, wherein said UAV camera includes near infrared (NIR) capabilities at a low-end portion of a near-infrared spectrum (700 nm-1000 nm) for performing at night in parking areas without external sources of illumination.
2,600
10,364
10,364
15,830,767
2,675
Methods and systems are provided for discriminating ambiguous expressions to enhance user experience. For example, a natural language expression may be received by a speech recognition component. The natural language expression may include at least one of words, terms, and phrases of text. A dialog hypothesis set from the natural language expression may be created by using contextual information. In some cases, the dialog hypothesis set has at least two dialog hypotheses. A plurality of dialog responses may be generated for the dialog hypothesis set. The dialog hypothesis set may be ranked based on an analysis of the plurality of the dialog responses. An action may be performed based on ranking the dialog hypothesis set.
1. A system comprising: at least one processor; and memory encoding computer executable instructions that, when executed by at least one processor, perform a method for discriminating ambiguous requests comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; creating a dialog hypothesis set from the natural language expression by using contextual information, wherein the dialog hypothesis set has at least two dialog hypotheses; generating a plurality of dialog responses for the dialog hypothesis set; ranking the dialog hypothesis set based on an analysis of the plurality of the dialog responses; and performing an action based on ranking the dialog hypothesis set. 2. The system of claim 1, wherein the natural language expression is at least one of a spoken language input and a textual input. 3. The system of claim 1, wherein the contextual information includes at least one of information extracted from a previously received natural language expression, a response to a previously received natural language expression, client context, and knowledge content. 4. The system of claim 3, wherein the information extracted from the previously received natural language expression includes at least a domain prediction, an intent prediction, and a slot type. 5. The system of claim 1, wherein creating the dialog hypothesis set comprises: extracting at least one feature from the natural language expression; and generating at least two dialog hypotheses, where each dialog hypothesis of the dialog hypothesis set includes a different natural language expression having at least one extracted feature. 6. The system of claim 1, wherein generating a plurality of dialog responses for the dialog hypothesis set comprises generating a plurality of responses for each dialog hypothesis of the dialog hypothesis set. 7. 
Methods and systems are provided for discriminating ambiguous expressions to enhance user experience. For example, a natural language expression may be received by a speech recognition component. The natural language expression may include at least one of words, terms, and phrases of text. A dialog hypothesis set from the natural language expression may be created by using contextual information. In some cases, the dialog hypothesis set has at least two dialog hypotheses. A plurality of dialog responses may be generated for the dialog hypothesis set. The dialog hypothesis set may be ranked based on an analysis of the plurality of the dialog responses. An action may be performed based on ranking the dialog hypothesis set.

1. A system comprising: at least one processor; and memory encoding computer executable instructions that, when executed by at least one processor, perform a method for discriminating ambiguous requests comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; creating a dialog hypothesis set from the natural language expression by using contextual information, wherein the dialog hypothesis set has at least two dialog hypotheses; generating a plurality of dialog responses for the dialog hypothesis set; ranking the dialog hypothesis set based on an analysis of the plurality of the dialog responses; and performing an action based on ranking the dialog hypothesis set. 2. The system of claim 1, wherein the natural language expression is at least one of a spoken language input and a textual input. 3. The system of claim 1, wherein the contextual information includes at least one of information extracted from a previously received natural language expression, a response to a previously received natural language expression, client context, and knowledge content. 4. 
The system of claim 3, wherein the information extracted from the previously received natural language expression includes at least a domain prediction, an intent prediction, and a slot type. 5. The system of claim 1, wherein creating the dialog hypothesis set comprises: extracting at least one feature from the natural language expression; and generating at least two dialog hypotheses, where each dialog hypothesis of the dialog hypothesis set includes a different natural language expression having at least one extracted feature. 6. The system of claim 1, wherein generating a plurality of dialog responses for the dialog hypothesis set comprises generating a plurality of responses for each dialog hypothesis of the dialog hypothesis set. 7. The system of claim 1, wherein generating a plurality of dialog responses for the dialog hypothesis set comprises at least one of sending the dialog hypotheses to a web backend engine and sending the dialog hypotheses to a domain specific component. 8. The system of claim 1, wherein ranking the dialog hypothesis set based on an analysis of the plurality of the dialog responses comprises: extracting features from the at least two dialog hypotheses in the dialog hypothesis set; and calculating a score for the extracted features, wherein the calculated score is indicative of the dialog hypothesis rank within the dialog hypothesis set. 9. The system of claim 1, wherein ranking the dialog hypothesis set based on an analysis of the plurality of the dialog responses comprises comparing the plurality of the dialog responses with a plurality of logged dialog responses. 10. The system of claim 1, wherein performing an action based on ranking the dialog hypothesis set comprises: using a highest ranked dialog hypothesis to query a web backend engine for results; and sending the results to a user of a client computing device. 11. 
A system comprising: a speech recognition component for receiving a plurality of natural language expressions, wherein the plurality of natural language expressions include at least one of words, terms, and phrases of text; and a dialog component for: creating a first fallback query from the plurality of natural language expressions, wherein creating the first fallback query comprises concatenating the plurality of natural language expressions; and sending the at least one fallback query to a backend engine for generating search results from the at least one fallback query. 12. The system of claim 11, further comprising the dialog component for receiving the search results from the backend engine. 13. The system of claim 11, further comprising the dialog component for performing a stop-word removal analysis on the plurality of natural language expressions. 14. The system of claim 13, further comprising the dialog component for creating a second fallback query from the plurality of natural language expressions, wherein creating the second fallback query comprises concatenating the stop-word removal analysis performed on the plurality of natural language expressions. 15. The system of claim 11, further comprising the dialog component for extracting semantic entities from the plurality of natural language expressions. 16. The system of claim 15, further comprising the dialog component for creating a third fallback query from the plurality of natural language expressions, wherein creating the third fallback query comprises concatenating the semantic entities extracted from the plurality of natural language expressions. 17. 
One or more computer-readable storage media, having computer-executable instructions that, when executed by at least one processor, perform a method for training a dialog component to discriminate ambiguous requests, the method comprising: creating a dialog hypothesis set from a natural language expression by using contextual information, wherein the dialog hypothesis set has at least two dialog hypotheses; generating a plurality of dialog responses for the dialog hypothesis set; comparing the plurality of dialog responses with a plurality of logged dialog responses; determining whether at least one of the plurality of dialog responses matches at least one of the logged dialog responses; and when it is determined that at least one of the plurality of dialog responses matches at least one of the logged dialog responses, labeling at least one of the two dialog hypotheses in the dialog hypothesis set corresponding to the at least one dialog response that matches the at least one logged dialog response. 18. The computer-readable storage media of claim 17, wherein the plurality of logged dialog responses includes a plurality of responses generated from the natural language expression. 19. The computer-readable storage media of claim 17, wherein creating the dialog hypothesis set comprises: extracting at least one feature from the natural language expression; and generating at least two dialog hypotheses, where each dialog hypothesis of the dialog hypothesis set includes a different natural language expression having at least one extracted feature. 20. The computer-readable storage media of claim 19, wherein labeling at least one of the two dialog hypotheses in the dialog hypothesis set corresponding to the at least one dialog response that matches the at least one logged dialog response indicates that the natural language expression having the at least one extracted feature can be used to generate relevant responses.
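The fallback-query claims (11 and 13-16) describe three progressively refined queries built from the same natural language expressions: plain concatenation, concatenation after stop-word removal, and concatenation of extracted semantic entities. A minimal Python sketch of that progression, assuming an illustrative `STOP_WORDS` set and a caller-supplied entity extractor (neither is specified by the claims):

```python
# Illustrative sketch of the three fallback queries in claims 11 and 13-16.
# STOP_WORDS and the extract_entities callable are assumptions for the example,
# not part of the claimed system.

STOP_WORDS = {"the", "a", "an", "is", "to", "of", "in", "for", "me"}

def first_fallback_query(expressions):
    """Claim 11: concatenate the plurality of natural language expressions."""
    return " ".join(expressions)

def second_fallback_query(expressions):
    """Claims 13-14: concatenate the expressions after stop-word removal."""
    kept = [w for expr in expressions for w in expr.split()
            if w.lower() not in STOP_WORDS]
    return " ".join(kept)

def third_fallback_query(expressions, extract_entities):
    """Claims 15-16: concatenate semantic entities extracted from the expressions."""
    entities = [e for expr in expressions for e in extract_entities(expr)]
    return " ".join(entities)

exprs = ["find flights to Boston", "show me the cheapest one"]
print(first_fallback_query(exprs))   # find flights to Boston show me the cheapest one
print(second_fallback_query(exprs))  # find flights Boston show cheapest one
```

Each query could then be sent to a backend engine for search results, per claim 11; the later queries simply trade recall for precision.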
TechCenter: 2,600
Unnamed: 0: 10,365
level_0: 10,365
ApplicationNumber: 15,719,112
ArtUnit: 2,644
Aspects of this disclosure are directed to a method of routing toll-free telephone calls using a toll-free exchange, thereby minimizing the number of hand-offs, increasing the technological capability, and reducing the ultimate cost of the toll-free call. Toll-free subscribers are generally assessed a cost based on each exchange plus the duration of the call. Subscribers are also limited to the decades-old technological standards of PSTN switching. It is therefore an object of the present disclosure to minimize the number of exchanges, promote technological possibility, and simplify the process of directing a toll-free telephone call by providing a toll-free exchange.
1. A method for routing toll-free calls through a toll-free exchange, the method, performed by a policy router, comprising: receiving, from an originating responsible organization (RESPORG), a toll-free call, wherein the toll-free call is directed to a toll-free subscriber that is served by a terminating RESPORG; querying a database to determine a RESPORG identification associated with the terminating RESPORG; and routing, to the terminating RESPORG, the toll-free call based in part on the RESPORG identification. 2. The method of claim 1, wherein the originating RESPORG and the terminating RESPORG are enrolled in the toll-free exchange. 3. The method of claim 1, wherein the RESPORG identification is obtained by a service control point. 4. The method of claim 1, further comprising: monitoring the toll-free call to generate a call detail record, wherein the call detail record includes at least one of: a time duration of the toll-free call, an identification of the originating RESPORG and the terminating RESPORG, buy-rate of the terminating RESPORG, a floor rate of the originating RESPORG, and a cost associated with the toll-free call. 5. The method of claim 4, wherein routing to the terminating RESPORG further comprises: determining that the floor rate of the originating RESPORG is less than the buy rate of the terminating RESPORG. 6. The method of claim 1, further comprising: determining whether the terminating RESPORG is enrolled with the exchange; based on a determination that the terminating RESPORG is not enrolled with the exchange, rejecting the toll-free call; and based on a determination that the terminating RESPORG is enrolled with the exchange, connecting the toll-free call to the terminating RESPORG. 7. 
The method of claim 6, wherein based on a determination that the terminating RESPORG is not enrolled with the exchange, further comprises: determining whether the originating RESPORG requested to connect the toll-free call to a public switched telephone network. 8. The method of claim 1, wherein a carrier identification code (CIC) of the terminating RESPORG is not used by the method. 9. The method of claim 1, further comprising routing a media portion of the toll-free call through a media server farm, wherein the media server farm configures the media portion of the toll-free call for routing. 10. A method for routing toll-free calls without using a carrier identification code (CIC), comprising: receiving, from an originating responsible organization (RESPORG), a toll-free call, wherein the toll-free call is directed to a toll-free subscriber that is served by a terminating RESPORG; querying a database to determine a RESPORG identification associated with the terminating RESPORG; and routing, to the terminating RESPORG, the toll-free call through a protected network based in part on the RESPORG identification. 11. The method of claim 10, further comprising: determining whether the terminating RESPORG is enrolled with the exchange; based on a determination that the terminating RESPORG is not enrolled with the exchange, rejecting the toll-free call; and based on a determination that the terminating RESPORG is enrolled with the exchange, connecting the toll-free call to the terminating RESPORG. 12. The method of claim 10, further comprising: monitoring the toll-free call to generate a call detail record, wherein the call detail record includes at least one of: a time duration of the toll-free call, an identification of the originating RESPORG and the terminating RESPORG, buy-rate of the terminating RESPORG, a floor rate of the originating RESPORG, and a cost associated with the toll-free call. 13. 
The method of claim 12, wherein routing to the terminating RESPORG further comprises: determining that the floor rate of the originating RESPORG is less than the buy rate of the terminating RESPORG. 14. The method of claim 10, further comprising routing a media portion of the toll-free call through a media server farm, wherein the media server farm configures the media portion of the toll-free call for routing. 15. The method of claim 10, wherein receiving, from an originating RESPORG, a toll-free call further comprises: receiving, at a session border controller, a data portion of the toll-free call; and receiving, at a media gateway server farm, a media portion of the call. 16. The method of claim 15, wherein routing to the terminating RESPORG further comprises: routing, from a second session border controller to the terminating RESPORG, the data portion of the toll-free call; and routing, from the media gateway server farm to the terminating RESPORG, the media portion of the call. 17. A system for routing a toll-free call using a toll-free exchange, the system comprising: at least one session border controller for routing a data portion of the toll-free call within the toll-free exchange; a policy router for determining, based at least on data included within the data portion, a responsible organization (RESPORG) identification (ID) associated with a terminating RESPORG; a service control point for storing the RESPORG ID of the terminating RESPORG; a database for storing enrollment information associated with the terminating RESPORG; and a media gateway server farm having one or more server computing devices for routing a media portion of the toll-free call. 18. The system of claim 17, further comprising a thrasher component for storing a call data record of the toll-free call. 19. 
The system of claim 17, further comprising a first session border controller for receiving the data portion of the toll-free call, wherein the first session border controller is further configured to read data from a header of the data portion of the toll-free call and to write data to the header of the data portion of the toll-free call; and a second session border controller for routing the data portion of the toll-free call to the terminating RESPORG based in part on the data read from the data portion of the toll-free call. 20. The system of claim 17, wherein the database stores at least one of: enrollment status information of one or more enrolled RESPORGs; payment information of each of the one or more enrolled RESPORGs, comprising: a buy rate; and a floor rate; identification information of each of the one or more enrolled RESPORGs; and an access control list for listing each enrolled RESPORG.
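Claims 1, 5, and 6 together describe the policy router's core decision: resolve the dialed toll-free number to a RESPORG ID, reject the call when the terminating RESPORG is not enrolled in the exchange, and connect only when the originating floor rate is below the terminating buy rate. A hedged sketch of that logic, with an in-memory dict standing in for the database and service control point; the field names and rates are invented for illustration:

```python
# Sketch of the policy-router decision in claims 1, 5, and 6. RESPORG_DB is an
# illustrative stand-in for the exchange database; its keys and values are
# assumptions, not part of the claims.

RESPORG_DB = {
    "8005551234": {"resporg_id": "XYZ01", "enrolled": True, "buy_rate": 0.010},
}

def route_toll_free_call(dialed_number, originating_floor_rate):
    record = RESPORG_DB.get(dialed_number)
    if record is None or not record["enrolled"]:
        return "rejected"                      # claim 6: terminating RESPORG not enrolled
    if originating_floor_rate >= record["buy_rate"]:
        return "rejected"                      # claim 5: floor rate must be below buy rate
    return f"routed:{record['resporg_id']}"    # claim 1: route on RESPORG ID, not a CIC

print(route_toll_free_call("8005551234", 0.008))  # routed:XYZ01
print(route_toll_free_call("8005551234", 0.012))  # rejected
```

Note that the carrier identification code (CIC) never appears in the lookup, mirroring claim 8's statement that the CIC of the terminating RESPORG is not used.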
TechCenter: 2,600
Unnamed: 0: 10,366
level_0: 10,366
ApplicationNumber: 15,864,542
ArtUnit: 2,644
Aspects of this disclosure are directed to a method of routing text messages to subscribers of toll-free numbers using the toll-free exchange. Typically, an originating texter drafts a text message to a toll-free subscriber using, for example, an SMS component of a cellular phone. In an aspect of the present disclosure, the text message is sent to the originating texter's service provider, which then routes the text message to the toll-free exchange. The toll-free exchange looks up the RESPORG ID associated with the ten-digit toll-free number, and the RESPORG ID is used for text message routing.
1-20. (canceled) 21. A method for routing text messages comprising: receiving a text message directed to a toll-free number that is served by a terminating responsible organization (RESPORG); determining a RESPORG identification of the terminating RESPORG; and routing the text message to the terminating RESPORG based on the RESPORG identification. 22. The method of claim 21, wherein the terminating RESPORG is enrolled in a toll-free exchange. 23. The method of claim 21, wherein the RESPORG identification is obtained from a database that associates toll-free numbers with RESPORG identifications. 24. The method of claim 21, further comprising: monitoring the text message to generate a text message detail record, wherein the text message detail record includes at least one of: a transmission time of the text message; an identification of a text message originator; an identification of the terminating RESPORG; a buy-rate of the terminating RESPORG, a floor rate of the text message originator; or a cost associated with the text message. 25. The method of claim 24, wherein routing to the terminating RESPORG further comprises: determining that the floor rate of the text message originator is less than the buy rate of the terminating RESPORG. 26. The method of claim 21, further comprising: determining whether the terminating RESPORG is enrolled with the toll-free exchange; based on a determination that the terminating RESPORG is not enrolled with the toll-free exchange, rejecting the text message; and based on a determination that the terminating RESPORG is enrolled with the toll-free exchange, connecting the text message to the terminating RESPORG. 27. 
The method of claim 26, wherein the determination that the terminating RESPORG is not enrolled with the toll-free exchange, further comprises: determining whether the text message originator requested to route the text message to a public switched telephone network when the terminating RESPORG is not enrolled with the toll-free exchange. 28. The method of claim 21, wherein a carrier identification code (CIC) of the terminating RESPORG is not used as part of routing the text message to the terminating RESPORG. 29. The method of claim 21, further comprising routing a media portion of the text message through a media server farm, wherein the media server farm configures the media portion of the text message for routing. 30. A method for routing text messages comprising: receiving a text message directed to a toll-free number to be routed over a public switched telephone network (PSTN), wherein the toll-free number is served by a terminating responsible organization (RESPORG); querying a database for a RESPORG identification associated with the terminating RESPORG; and based on results of the querying, a) routing the text message through the PSTN to the terminating RESPORG using a carrier identification code (CIC); or b) routing the text message to the terminating RESPORG via a first network different from the PSTN based in part on the RESPORG identification. 31. The method of claim 30, further comprising: determining whether the terminating RESPORG is enrolled with the exchange; based on a determination that the terminating RESPORG is not enrolled with the exchange, routing the text message through the PSTN to the terminating RESPORG using the carrier identification code (CIC); and based on a determination that the terminating RESPORG is enrolled with the exchange, connecting the text message to the terminating RESPORG via the first network. 32. 
The method of claim 30, wherein routing the text message to the terminating RESPORG via the first network further comprises: routing a media portion of the text message through a media server farm on the first network, wherein the media server farm configures the media portion of the text message for routing via the first network. 33. The method of claim 30, wherein receiving the text message further comprises: receiving, at a session border controller, a data portion of the text message; and receiving, at a media gateway server farm, a media portion of the text message. 34. The method of claim 33, wherein routing the text message to the terminating RESPORG via the first network further comprises: routing, from a second session border controller to the terminating RESPORG, the data portion of the text message; and routing, from the media gateway server farm to the terminating RESPORG, the media portion of the text message. 35. A method for routing text messages to toll-free numbers comprising: receiving a plurality of text messages, each text message directed to one of a set of toll-free numbers and each toll-free number served by one of a group of terminating toll-free subscribers; determining, for at least one of the plurality of text messages, a responsible organization identifier (RESPORG ID) associated with the terminating toll-free subscriber of the text message; routing the text messages for which RESPORG IDs are determined through a toll-free exchange network using the RESPORG ID, the toll-free exchange network different from a public switched telephone network; and causing the text messages for which a RESPORG ID is not determined to be routed through the public switched telephone network using the toll-free number. 36. The method of claim 35, wherein determining the RESPORG ID further comprises: querying a database for a RESPORG ID associated with a toll-free number. 37. 
The method of claim 35, wherein receiving a plurality of text messages comprises: receiving, for each of the plurality of text messages, a request to route the text message via the toll-free exchange network from a requestor. 38. The method of claim 37, wherein causing the text messages for which a RESPORG ID is not determined to be routed through the public switched telephone network comprises: sending, for each of the text messages for which a RESPORG ID is not determined, a response to the requestor that the text message cannot be routed over the toll-free exchange network, thereby causing the requestor to route the text message through the public switched telephone network using the toll-free number.
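Claim 35 describes a batch decision: messages whose toll-free number resolves to a RESPORG ID travel over the toll-free exchange network, while the rest fall back to the PSTN using the toll-free number itself. A minimal Python sketch, with an illustrative lookup table standing in for the database of claim 36:

```python
# Sketch of the batch routing method in claim 35. RESPORG_IDS is an assumed
# stand-in for the database that maps toll-free numbers to RESPORG IDs.

RESPORG_IDS = {"8885550000": "ABC99"}

def route_text_messages(messages):
    """Each message is a (toll_free_number, body) pair; returns routing decisions."""
    decisions = []
    for number, body in messages:
        resporg_id = RESPORG_IDS.get(number)
        if resporg_id is not None:
            # Route through the toll-free exchange network using the RESPORG ID.
            decisions.append(("exchange", resporg_id, body))
        else:
            # Claim 38: no RESPORG ID determined, so the message falls back to
            # the PSTN addressed by the toll-free number itself.
            decisions.append(("pstn", number, body))
    return decisions

print(route_text_messages([("8885550000", "hi"), ("8005551111", "hello")]))
```

The same shape covers claims 30-31, where the enrollment check decides between CIC-based PSTN routing and RESPORG-ID-based routing over the first network.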
Aspects of this disclosure are directed to a method of routing text messages to subscribers of toll-free numbers using the toll-free exchange. Typically, an originating texter will draft a text message, using, for example, an SMS component of a cellular phone to a toll-free subscriber. In an aspect of the present disclosure, the text message is sent to the originating texter's service provider, which then routes the text message to the toll-free exchange. The toll-free exchange looks up the RESPORG ID associated with the ten-digit toll-free number, and the RESPORG ID is used for text message routing.1-20. (canceled) 21. A method for routing text messages comprising: receiving a text message directed to a toll-free number that is served by a terminating responsible organization (RESPORG); determining a RESPORG identification of the terminating RESPORG; and routing the text message to the terminating RESPORG based on the RESPORG identification. 22. The method of claim 21, wherein the terminating RESPORG is enrolled in a toll-free exchange. 23. The method of claim 21, wherein the RESPORG identification is obtained from a database that associates toll-free numbers with RESPORG identifications. 24. The method of claim 21, further comprising: monitoring the text message to generate a text message detail record, wherein the text message detail record includes at least one of: a transmission time of the text message; an identification of a text message originator; an identification of the terminating RESPORG; a buy-rate of the terminating RESPORG, a floor rate of the text message originator; or a cost associated with the text message. 25. The method of claim 24, wherein routing to the terminating RESPORG further comprises: determining that the floor rate of the text message originator is less than the buy rate of the terminating RESPORG. 26. 
The method of claim 21, further comprising: determining whether the terminating RESPORG is enrolled with the toll-free exchange; based on a determination that the terminating RESPORG is not enrolled with the toll-free exchange, rejecting the text message; and based on a determination that the terminating RESPORG is enrolled with the toll-free exchange, connecting the text message to the terminating RESPORG. 27. The method of claim 26, wherein the determination that the terminating RESPORG is not enrolled with the toll-free exchange, further comprises: determining whether the text message originator requested to route the text message to a public switched telephone network when the terminating RESPORG is not enrolled with the toll-free exchange. 28. The method of claim 21, wherein a carrier identification code (CIC) of the terminating RESPORG is not used as part of routing the text message to the terminating RESPORG. 29. The method of claim 21, further comprising routing a media portion of the text message through a media server farm, wherein the media server farm configures the media portion of the text message for routing. 30. A method for routing text messages comprising: receiving a text message directed to a toll-free number to be routed over a public switched telephone network (PSTN), wherein the toll-free number is served by a terminating responsible organization (RESPORG); querying a database for a RESPORG identification associated with the terminating RESPORG; and based on results of the querying, a) routing the text message through the PSTN to the terminating RESPORG using a carrier identification code (CIC); or b) routing the text message to the terminating RESPORG via a first network different from the PSTN based in part on the RESPORG identification. 31. 
The method of claim 30, further comprising: determining whether the terminating RESPORG is enrolled with the exchange; based on a determination that the terminating RESPORG is not enrolled with the exchange, routing the text message through the PSTN to the terminating RESPORG using the carrier identification code (CIC); and based on a determination that the terminating RESPORG is enrolled with the exchange, connecting the text message to the terminating RESPORG via the first network. 32. The method of claim 30, wherein routing the text message to the terminating RESPORG via the first network further comprises: routing a media portion of the text message through a media server farm on the first network, wherein the media server farm configures the media portion of the text message for routing via first network. 33. The method of claim 30, wherein receiving the text message further comprises: receiving, at a session border controller, a data portion of the text message; and receiving, at a media gateway server farm, a media portion of the text message. 34. The method of claim 33, wherein routing the text message to the terminating RESPORG via the first network further comprises: routing, from a second session border controller to the terminating RESPORG, the data portion of the text message; routing, from the media gateway server farm to the terminating RESPORG, the media portion of the text message. 35. 
A method for routing text messages to toll-free numbers comprising: receiving a plurality of text messages, each text message directed to one of a set of toll-free numbers and each toll-free number served by one of a group of terminating toll-free subscribers; determining, for at least one of the plurality of text messages, a responsible organization identifier (RESPORG ID) associated with the terminating toll-free subscriber of the text message; routing the text messages for which RESPORG IDs are determined through a toll-free exchange network using the RESPORG ID, the toll-free exchange network different from a public switched telephone network; and causing the text messages for which a RESPORG ID is not determined to be routed through the public switched telephone network using the toll-free number. 36. The method of claim 35 wherein determined the RESPORG ID further comprises: querying a database for a RESPORG ID associated with a toll-free number. 37. The method of claim 35 wherein receiving a plurality of text messages comprises: receiving, for each of the plurality of text messages, a request to route the text message via the toll-free exchange network from a requestor. 38. The method of claim 37 wherein causing the text messages for which a RESPORG ID is not determined to be routed through the public switched telephone network comprises: sending, for each of the text messages for which a RESPORG ID is not determined, a response to the requestor that the text message cannot be routed over the toll-free exchange network, thereby causing the requestor to route the text message through the public switched telephone network using the toll-free number.
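The routing flow recited in claims 21-38 (look up a RESPORG ID for the toll-free number, route through the toll-free exchange when an enrolled RESPORG is found, otherwise fall back to the PSTN) can be sketched as follows; all function and variable names here are illustrative assumptions, not taken from the filing:

```python
# Hedged sketch of the claimed routing decision: look up the terminating
# RESPORG ID for a toll-free number (claim 23: database lookup) and route
# through the toll-free exchange when an enrolled RESPORG is found
# (claims 21-22); otherwise fall back to PSTN routing by the toll-free
# number itself (claims 30/35). Names are illustrative, not from the filing.

def route_text_message(toll_free_number, resporg_db, enrolled_resporgs):
    """Return a (route, identifier) pair for one text message."""
    resporg_id = resporg_db.get(toll_free_number)
    if resporg_id is not None and resporg_id in enrolled_resporgs:
        return ("exchange", resporg_id)   # route via the toll-free exchange
    return ("pstn", toll_free_number)     # route over the PSTN by number
```

A caller that cannot obtain a RESPORG ID simply receives the PSTN fallback, mirroring claim 35's "caused to be routed through the public switched telephone network" branch.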
2,600
10,367
10,367
14,567,097
2,694
Once the size of the substrate of a control-point sensing panel and the tip width of a control object are given, an electrode layout structure can be acquired. The electrode layout structure includes M*N first sensing electrodes; M*N second sensing electrodes; a first signal input/output terminal set including M signal input/output terminals, each being electrically connected to N first sensing electrodes in parallel; and a second signal input/output terminal set including N signal input/output terminals, each being electrically connected to M second sensing electrodes in series. The first and second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones in M*N sensing cells at intersections. Each electrode juxtaposition zone has a width of 0.5˜4.5 times the tip width of the control object, and/or the clearance between adjacent electrode juxtaposition zones is 0.5˜1.5 times the tip width of the control object.
1. A control-point sensing panel for sensing a control point thereon in response to an action of a control object, comprising: a substrate; M*N first sensing electrodes formed on a surface of the substrate; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; M*N second sensing electrodes formed on the surface of the substrate; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones at intersections of the first and second sensing electrodes, and each of the electrode juxtaposition zones has a width being 0.5˜4.5 times the tip width of the control object. 2. The control-point sensing panel according to claim 1, wherein the M first sensing electrodes in the same column are coupled thereto M signal lines, respectively, which are grouped into a set of signal lines so that the control-point sensing panel includes N sets of signal lines, and wherein N signal lines corresponding to the N first sensing electrodes in the same row are electrically connected, in parallel, to a corresponding one of the M signal input/output terminals in the first signal input/output terminal set. 3. The control-point sensing panel according to claim 2, wherein the N sets of signal lines pass through respective columns of wiring zones, each of which is disposed between adjacent two of the electrode juxtaposition zones. 4. The control-point sensing panel according to claim 2, comprising a non-wiring region where dummy transparent wires are formed. 5. 
The control-point sensing panel according to claim 1, wherein the first sensing electrode and the second sensing electrode respectively include a plurality of sub-electrodes, and the sub-electrodes of the first sensing electrode and the sub-electrodes of the second sensing electrode are coplanar and alternately allocated in the electrode juxtaposition zones. 6. The control-point sensing panel according to claim 5, wherein at least one of the electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction. 7. A control-point sensing panel for sensing a control point thereon in response to an action of a control object, comprising: a substrate defined thereon M*N sensing cells; M*N first sensing electrodes formed on a surface of the substrate; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; M*N second sensing electrodes formed on the surface of the substrate; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes in series; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones in the M*N sensing cells at intersections of the first and second sensing electrodes, respectively, and each of the electrode juxtaposition zones has an area being ⅓˜½ times the area of the corresponding sensing cell. 8. 
The control-point sensing panel according to claim 7, further comprising N sets of M signal lines, wherein the M signal lines in each set are respectively coupled to the M first sensing electrodes in the same column, and the N signal lines, each selected from one of the N sets and corresponding to one of the N first sensing electrodes in the same row, are electrically connected in parallel to a corresponding one of the M signal input/output terminals in the first signal input/output terminal set. 9. The control-point sensing panel according to claim 8, wherein the N sets of signal lines pass through respective columns of wiring zones, each of which is disposed between adjacent two of the electrode juxtaposition zones. 10. The control-point sensing panel according to claim 8, comprising a non-wiring region where dummy transparent wires are formed. 11. The control-point sensing panel according to claim 7, wherein the first sensing electrode and the second sensing electrode respectively include a plurality of sub-electrodes, and the sub-electrodes of the first sensing electrode and the sub-electrodes of the second sensing electrode are coplanar and alternately allocated in the electrode juxtaposition zones. 12. The control-point sensing panel according to claim 11, wherein at least one of the electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction. 13. 
A control-point sensing panel for sensing a control point thereon in response to an action of a control object, comprising: a substrate; M*N first sensing electrodes formed on a surface of the substrate; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; M*N second sensing electrodes formed on the surface of the substrate; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes in series; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones at intersections of the first and second sensing electrodes, and a clearance between every two adjacent ones of the electrode juxtaposition zones is 0.5˜1.5 times the tip width of the control object. 14. The control-point sensing panel according to claim 13, wherein the M first sensing electrodes in the same column are coupled thereto M signal lines, respectively, which are grouped into a set of signal lines so that the control-point sensing panel includes N sets of signal lines, and wherein N signal lines corresponding to the N first sensing electrodes in the same row are electrically connected, in parallel, to a corresponding one of the M signal input/output terminals in the first signal input/output terminal set. 15. The control-point sensing panel according to claim 14, wherein the N sets of signal lines pass through respective columns of wiring zones, each of which is disposed between adjacent two of the electrode juxtaposition zones. 16. The control-point sensing panel according to claim 14, comprising a non-wiring region where dummy transparent wires are formed. 17. 
The control-point sensing panel according to claim 13, wherein the first sensing electrode and the second sensing electrode respectively include a plurality of sub-electrodes, and the sub-electrodes of the first sensing electrode and the sub-electrodes of the second sensing electrode are coplanar and alternately allocated in the electrode juxtaposition zones. 18. The control-point sensing panel according to claim 17, wherein at least one of the electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction. 19. A design method of a control-point sensing panel executable by a digital data processing device to define an electrode layout structure, the control-point sensing panel being used for sensing a control point thereon in response to an action of a control object, and the method comprising: inputting a size of a substrate where the electrode layout structure is to be formed, and a tip width of the control object; and acquiring the electrode layout structure according to the size of the substrate and the tip width of the control object, wherein the electrode layout structure includes M*N first sensing electrodes; M*N second sensing electrodes; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes in series; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones in M*N sensing cells at intersections of the first and second sensing electrodes, respectively. 20. 
The design method according to claim 19, wherein each of the electrode juxtaposition zones has a width being 0.5˜4.5 times the tip width of the control object. 21. The design method according to claim 19, wherein a clearance between every two adjacent ones of the electrode juxtaposition zones is 0.5˜1.5 times the tip width of the control object. 22. The design method according to claim 19, wherein each of the electrode juxtaposition zones has an area being ⅓˜½ times the area of the corresponding sensing cell. 23. The design method according to claim 19, wherein at least one of the resulting electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction.
Once the size of the substrate of a control-point sensing panel and the tip width of a control object are given, an electrode layout structure can be acquired. The electrode layout structure includes M*N first sensing electrodes; M*N second sensing electrodes; a first signal input/output terminal set including M signal input/output terminals, each being electrically connected to N first sensing electrodes in parallel; and a second signal input/output terminal set including N signal input/output terminals, each being electrically connected to M second sensing electrodes in series. The first and second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones in M*N sensing cells at intersections. Each electrode juxtaposition zone has a width of 0.5˜4.5 times the tip width of the control object, and/or the clearance between adjacent electrode juxtaposition zones is 0.5˜1.5 times the tip width of the control object. 1. A control-point sensing panel for sensing a control point thereon in response to an action of a control object, comprising: a substrate; M*N first sensing electrodes formed on a surface of the substrate; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; M*N second sensing electrodes formed on the surface of the substrate; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones at intersections of the first and second sensing electrodes, and each of the electrode juxtaposition zones has a width being 0.5˜4.5 times the tip width of the control object. 2. 
The control-point sensing panel according to claim 1, wherein the M first sensing electrodes in the same column are coupled thereto M signal lines, respectively, which are grouped into a set of signal lines so that the control-point sensing panel includes N sets of signal lines, and wherein N signal lines corresponding to the N first sensing electrodes in the same row are electrically connected, in parallel, to a corresponding one of the M signal input/output terminals in the first signal input/output terminal set. 3. The control-point sensing panel according to claim 2, wherein the N sets of signal lines pass through respective columns of wiring zones, each of which is disposed between adjacent two of the electrode juxtaposition zones. 4. The control-point sensing panel according to claim 2, comprising a non-wiring region where dummy transparent wires are formed. 5. The control-point sensing panel according to claim 1, wherein the first sensing electrode and the second sensing electrode respectively include a plurality of sub-electrodes, and the sub-electrodes of the first sensing electrode and the sub-electrodes of the second sensing electrode are coplanar and alternately allocated in the electrode juxtaposition zones. 6. The control-point sensing panel according to claim 5, wherein at least one of the electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction. 7. 
A control-point sensing panel for sensing a control point thereon in response to an action of a control object, comprising: a substrate defined thereon M*N sensing cells; M*N first sensing electrodes formed on a surface of the substrate; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; M*N second sensing electrodes formed on the surface of the substrate; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes in series; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones in the M*N sensing cells at intersections of the first and second sensing electrodes, respectively, and each of the electrode juxtaposition zones has an area being ⅓˜½ times the area of the corresponding sensing cell. 8. The control-point sensing panel according to claim 7, further comprising N sets of M signal lines, wherein the M signal lines in each set are respectively coupled to the M first sensing electrodes in the same column, and the N signal lines, each selected from one of the N sets and corresponding to one of the N first sensing electrodes in the same row, are electrically connected in parallel to a corresponding one of the M signal input/output terminals in the first signal input/output terminal set. 9. The control-point sensing panel according to claim 8, wherein the N sets of signal lines pass through respective columns of wiring zones, each of which is disposed between adjacent two of the electrode juxtaposition zones. 10. The control-point sensing panel according to claim 8, comprising a non-wiring region where dummy transparent wires are formed. 11. 
The control-point sensing panel according to claim 7, wherein the first sensing electrode and the second sensing electrode respectively include a plurality of sub-electrodes, and the sub-electrodes of the first sensing electrode and the sub-electrodes of the second sensing electrode are coplanar and alternately allocated in the electrode juxtaposition zones. 12. The control-point sensing panel according to claim 11, wherein at least one of the electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction. 13. A control-point sensing panel for sensing a control point thereon in response to an action of a control object, comprising: a substrate; M*N first sensing electrodes formed on a surface of the substrate; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; M*N second sensing electrodes formed on the surface of the substrate; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes in series; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones at intersections of the first and second sensing electrodes, and a clearance between every two adjacent ones of the electrode juxtaposition zones is 0.5˜1.5 times the tip width of the control object. 14. 
The control-point sensing panel according to claim 13, wherein the M first sensing electrodes in the same column are coupled thereto M signal lines, respectively, which are grouped into a set of signal lines so that the control-point sensing panel includes N sets of signal lines, and wherein N signal lines corresponding to the N first sensing electrodes in the same row are electrically connected, in parallel, to a corresponding one of the M signal input/output terminals in the first signal input/output terminal set. 15. The control-point sensing panel according to claim 14, wherein the N sets of signal lines pass through respective columns of wiring zones, each of which is disposed between adjacent two of the electrode juxtaposition zones. 16. The control-point sensing panel according to claim 14, comprising a non-wiring region where dummy transparent wires are formed. 17. The control-point sensing panel according to claim 13, wherein the first sensing electrode and the second sensing electrode respectively include a plurality of sub-electrodes, and the sub-electrodes of the first sensing electrode and the sub-electrodes of the second sensing electrode are coplanar and alternately allocated in the electrode juxtaposition zones. 18. The control-point sensing panel according to claim 17, wherein at least one of the electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction. 19. 
A design method of a control-point sensing panel executable by a digital data processing device to define an electrode layout structure, the control-point sensing panel being used for sensing a control point thereon in response to an action of a control object, and the method comprising: inputting a size of a substrate where the electrode layout structure is to be formed, and a tip width of the control object; and acquiring the electrode layout structure according to the size of the substrate and the tip width of the control object, wherein the electrode layout structure includes M*N first sensing electrodes; M*N second sensing electrodes; a first signal input/output terminal set including M signal input/output terminals, each of which is at least electrically connected to N of the first sensing electrodes in parallel; and a second signal input/output terminal set including N signal input/output terminals, each of which is at least electrically connected to M of the second sensing electrodes in series; wherein the first sensing electrodes and the second sensing electrodes are formed on the same plane, and form M*N electrode juxtaposition zones in M*N sensing cells at intersections of the first and second sensing electrodes, respectively. 20. The design method according to claim 19, wherein each of the electrode juxtaposition zones has a width being 0.5˜4.5 times the tip width of the control object. 21. The design method according to claim 19, wherein a clearance between every two adjacent ones of the electrode juxtaposition zones is 0.5˜1.5 times the tip width of the control object. 22. The design method according to claim 19, wherein each of the electrode juxtaposition zones has an area being ⅓˜½ times the area of the corresponding sensing cell. 23. 
The design method according to claim 19, wherein at least one of the resulting electrode juxtaposition zones has a width smaller than the tip width of the control object, and the effective area of the sub-electrodes of the first sensing electrode or the second sensing electrode decreases along a specified direction.
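The design method of claims 19-23 (derive an electrode layout from the substrate size and the control object's tip width, keeping each zone's width within 0.5-4.5 times and the clearance within 0.5-1.5 times the tip width) can be sketched roughly as below; the default multipliers and the uniform-pitch cell model are assumptions for illustration, not taken from the filing:

```python
# Hedged sketch of the design method of claim 19: pick a zone width
# (0.5-4.5x tip width, claim 20) and a clearance (0.5-1.5x tip width,
# claim 21), then derive how many sensing cells (M rows x N columns)
# fit on the substrate. The multiplier defaults and the assumption of
# one zone plus one clearance per cell pitch are illustrative only.

def electrode_layout(substrate_w, substrate_h, tip_width,
                     zone_factor=2.0, clearance_factor=1.0):
    assert 0.5 <= zone_factor <= 4.5        # claim 20 range
    assert 0.5 <= clearance_factor <= 1.5   # claim 21 range
    zone = zone_factor * tip_width
    clearance = clearance_factor * tip_width
    pitch = zone + clearance                # assumed cell pitch
    n = int(substrate_w // pitch)           # columns of sensing cells
    m = int(substrate_h // pitch)           # rows of sensing cells
    return {"M": m, "N": n, "zone_width": zone, "clearance": clearance}
```

For example, a 100x60 substrate and a 5-unit tip width yield a 4x6 grid of cells under these default factors.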
2,600
10,368
10,368
15,822,605
2,644
A system, computer software and method for collecting, in addition to position data, additional positioning data in a user terminal served by a communication network. The method includes initiating, by generating a message within the user terminal, collection of the positioning data, where the positioning data includes information based on which a physical location of the user terminal is determined; measuring, by the user terminal, at least one parameter related to the physical location of the user terminal in response to the message; producing, within the user terminal, measurement reports that include the at least one parameter; selecting, within the user terminal, one or more measurement reports that were generated in response to the message generated by the user terminal; reporting the selected one or more measurement reports to an interface within the user terminal; and transmitting, from the interface, the reported one or more measurement reports to an external server or to the communication network.
1. A user equipment (UE) configured to be served by a communication network, the UE comprising: a software interface operative to instruct the UE to determine additional positioning data independent of any instructions received from the communication network, wherein the additional positioning data is in addition to position data and includes information based on which a physical location of the UE is determined; processing circuitry operative to: determine the additional positioning data; report the additional positioning data to the software interface; and signal the additional positioning data to an external server. 2. The UE of claim 1, wherein the processing circuitry is further operative to measure a parameter related to the physical location of the UE. 3. The UE of claim 2, wherein the parameter is one of a cell ID, broadcast information received by the UE from a cell, quantized path loss of a received signal, signal strength, quantized noise rise, radio connection information, and quantized time. 4. The UE of claim 3, wherein the parameter is measured independent of any instructions received from the communication network. 5. The UE of claim 3, wherein the parameter is measured in response to a request from the communication network. 6. The UE of claim 1, wherein the processing circuitry is further operative to provide information to the external server or to the communication network to build a database of fingerprinted positions, the provided information being assembled based on the position data, which includes global positioning information, and the additional positioning data, wherein the position data and the additional positioning data are measured simultaneously and sent together to the external server or to the communication network. 7. The UE of claim 1, wherein the physical position of the UE is determined in the external server, based only on the additional positioning data transmitted from the UE. 8. 
The UE of claim 1, wherein the additional positioning data comprises at least one of a result of SFN-SFN type 1 and 2 measurements, a result of E-OTD measurements, and/or a result of inter-radio access technology (inter-RAT) measurements. 9. The UE of claim 2, wherein the parameter requested by the communication network is Assisted Global Positioning System (A-GPS) position information. 10. A method performed by a user equipment (UE) operating in a communication network, the method comprising: receiving, from a software interface residing on the UE, instructions to determine additional positioning data independent of any instructions received from the communication network, wherein the additional positioning data is in addition to position data and includes information based on which a physical location of the UE is determined; determining the additional positioning data; reporting the additional positioning data to the software interface; and signaling the additional positioning data to an external server. 11. The method of claim 10, further comprising measuring a parameter related to the physical location of the UE. 12. The method of claim 11, wherein the parameter is one of a cell ID, broadcast information received by the UE from a cell, quantized path loss of a received signal, signal strength, quantized noise rise, radio connection information, and quantized time. 13. The method of claim 12, wherein the parameter is measured independent of any instructions received from the communication network. 14. The method of claim 12, wherein the parameter is measured in response to a request from the communication network. 15. The method of claim 10, wherein the additional positioning data is related to SFN-SFN type 1 or 2 measurements, E-OTD measurements, or inter-radio access technology measurements. 16. 
The method of claim 10, further comprising providing information to the external server or to the communication network to build a database of fingerprinted positions, the provided information being assembled based on the position data, which includes global positioning information, and the additional positioning data, wherein the position data and the additional positioning data are measured simultaneously and sent together to the external server or to the communication network. 17. The method of claim 10, wherein the physical position of the UE is determined in the external server, based only on the additional positioning data transmitted from the UE. 18. The method of claim 11, wherein the parameter requested by the communication network is Assisted Global Positioning System (A-GPS) position information.
A system, computer software and method for collecting, in addition to position data, additional positioning data in a user terminal served by a communication network. The method includes initiating, by generating a message within the user terminal, collection of the positioning data, where the positioning data includes information based on which a physical location of the user terminal is determined; measuring, by the user terminal, at least one parameter related to the physical location of the user terminal in response to the message; producing, within the user terminal, measurement reports that include the at least one parameter; selecting, within the user terminal, one or more measurement reports that were generated in response to the message generated by the user terminal; reporting the selected one or more measurement reports to an interface within the user terminal; and transmitting, from the interface, the reported one or more measurement reports to an external server or to the communication network. 1. A user equipment (UE) configured to be served by a communication network, the UE comprising: a software interface operative to instruct the UE to determine additional positioning data independent of any instructions received from the communication network, wherein the additional positioning data is in addition to position data and includes information based on which a physical location of the UE is determined; processing circuitry operative to: determine the additional positioning data; report the additional positioning data to the software interface; and signal the additional positioning data to an external server. 2. The UE of claim 1, wherein the processing circuitry is further operative to measure a parameter related to the physical location of the UE. 3. 
The UE of claim 2, wherein the parameter is one of a cell ID, broadcast information received by the UE from a cell, quantized path loss of a received signal, signal strength, quantized noise rise, radio connection information, and quantized time. 4. The UE of claim 3, wherein the parameter is measured independent of any instructions received from the communication network. 5. The UE of claim 3, wherein the parameter is measured in response to a request from the communication network. 6. The UE of claim 1, wherein the processing circuitry is further operative to provide information to the external server or to the communication network to build a database of fingerprinted positions, the provided information being assembled based on the position data, which includes global positioning information, and the additional positioning data, wherein the position data and the additional positioning data are measured simultaneously and sent together to the external server or to the communication network. 7. The UE of claim 1, wherein the physical position of the UE is determined in the external server, based only on the additional positioning data transmitted from the UE. 8. The UE of claim 1, wherein the additional positioning data comprises at least one of a result of SFN-SFN type 1 and 2 measurements, a result of E-OTD measurements, and/or a result of inter-radio access technology (inter-RAT) measurements. 9. The UE of claim 2, wherein the parameter requested by the communication network is Assisted Global Positioning System (A-GPS) position information. 10. 
A method performed by a user equipment (UE) operating in a communication network, the method comprising: receiving, from a software interface residing on the UE, instructions to determine additional positioning data independent of any instructions received from the communication network, wherein the additional positioning data is in addition to position data and includes information based on which a physical location of the UE is determined; determining the additional positioning data; reporting the additional positioning data to the software interface; and signaling the additional positioning data to an external server. 11. The method of claim 10, further comprising measuring a parameter related to the physical location of the UE. 12. The method of claim 11, wherein the parameter is one of a cell ID, broadcast information received by the UE from a cell, quantized path loss of a received signal, signal strength, quantized noise rise, radio connection information, and quantized time. 13. The method of claim 12, wherein the parameter is measured independent of any instructions received from the communication network. 14. The method of claim 12, wherein the parameter is measured in response to a request from the communication network. 15. The method of claim 10, wherein the additional positioning data is related to SFN-SFN type 1 or 2 measurements, E-OTD measurements, or inter-radio access technology measurements. 16. The method of claim 10, further comprising providing information to the external server or to the communication network to build a database of fingerprinted positions, the provided information being assembled based on the position data, which includes global positioning information, and the additional positioning data, wherein the position data and the additional positioning data are measured simultaneously and sent together to the external server or to the communication network. 17. 
The method of claim 10, wherein the physical position of the UE is determined in the external server, based only on the additional positioning data transmitted from the UE. 18. The method of claim 11, wherein the parameter requested by the communication network is Assisted Global Positioning System (A-GPS) position information.
2,600
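The UE claims above describe a three-step flow: an on-device software interface instructs the UE to measure positioning parameters independently of the network, the UE reports the measurements back to that interface, and the interface signals them to an external server. A minimal sketch of that flow follows; all class and method names (`MeasurementReport`, `SoftwareInterface`, `UE`) and the sample parameter values are hypothetical, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementReport:
    # Parameters named in the claims: cell ID, signal strength,
    # quantized path loss, etc.
    cell_id: int
    signal_strength_dbm: float
    quantized_path_loss: int

@dataclass
class SoftwareInterface:
    """On-device interface that triggers measurements and collects reports."""
    reports: list = field(default_factory=list)

    def report(self, r: MeasurementReport) -> None:
        self.reports.append(r)

    def signal_to_server(self) -> list:
        # In a real UE this would transmit over the radio link; here we
        # just hand back the collected reports and clear the queue.
        sent, self.reports = self.reports, []
        return sent

class UE:
    def __init__(self, interface: SoftwareInterface):
        self.interface = interface

    def determine_additional_positioning_data(self) -> None:
        # Measurement triggered by the on-device interface, independent
        # of any network instruction (placeholder values).
        r = MeasurementReport(cell_id=501,
                              signal_strength_dbm=-87.5,
                              quantized_path_loss=42)
        self.interface.report(r)

iface = SoftwareInterface()
ue = UE(iface)
ue.determine_additional_positioning_data()
sent = iface.signal_to_server()
```

After signalling, the interface's queue is empty and the single report reaches the server side, mirroring the report-then-transmit split in claims 1 and 10.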
10,369
10,369
14,896,816
2,685
To be able to detect seal breakage of a container portion for containing an article by a simpler scheme, there is provided a signal processing device including: a processor that executes a program; and a memory that stores the program for causing the processor to function as a detection unit that transmits a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles, and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region.
1. A signal processing device comprising: a processor that executes a program; and a memory that stores the program for causing the processor to function as a detection unit that transmits a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles, and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region. 2. The signal processing device according to claim 1, wherein the detection unit detects seal breakage of a first container portion, in a case in which a first signal does not return via a first region when a predetermined delay time has elapsed since a transmission time of the first signal that is to pass through the first region corresponding to the first container portion. 3. The signal processing device according to claim 2, wherein the one or more signal lines are one or more branch lines that branch from a common line, and the detection unit transmits the first signal to a first branch line that extends through the first region corresponding to the first container portion, and determines whether the first signal returns from the common line. 4. The signal processing device according to claim 2, wherein the one or more signal lines are one or more branch lines that branch from a common line, and the detection unit transmits the first signal to the common line, and determines whether the first signal returns from a first branch line that extends through the first region corresponding to the first container portion. 5. The signal processing device according to claim 2, wherein the first signal is a pulse signal having a predetermined pulse width. 6. 
The signal processing device according to claim 2, wherein the detection unit does not determine whether the first signal returns via the first region, once the detection unit detects the breakage of the seal of the first container portion. 7. The signal processing device according to claim 1, wherein the processor further functions as a data output unit that outputs seal breakage time data recorded with respect to each container portion by the detection unit, to an external device via a communication interface. 8. The signal processing device according to claim 7, wherein the detection unit records the seal breakage time data in association with a user's identifier acquired in advance via the communication interface. 9. The signal processing device according to claim 1, wherein the article is a medicine, and the processor further functions as an alarm control unit that notifies a user of a timing to take the medicine contained in each container portion, in accordance with administration schedule data acquired in advance. 10. The signal processing device according to claim 1, wherein the article is a medicine, and the processor further functions as an alarm control unit that notifies a user of an administration error of the medicine that is determined using seal breakage time data recorded with respect to each container portion by the detection unit. 11. A seal breakage detecting module comprising: a signal processing device according to claim 1; and one or more connection terminals that connects the signal processing device to the one or more signal lines. 12. A seal breakage detecting module comprising: a signal processing device according to claim 1; and a communication interface that transmits data recorded by the signal processing device to an external device. 13. 
The seal breakage detecting module according to claim 12, wherein the communication interface is a wireless communication interface, and the seal breakage detecting module further includes an antenna used by the wireless communication interface. 14. A program for causing a processor of a signal processing device to function as a detection unit that transmits a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles, and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region. 15. A seal breakage detecting method executed by a processor of a signal processing device, the seal breakage detecting method comprising: transmitting a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles; and detecting seal breakage of each container portion on the basis of whether the transmitted signal returns via each region. 16. An article packing element comprising: a package that includes one or more container portions for containing articles; one or more signal lines that are formed of a breakable material and extend through regions corresponding to the one or more respective container portions of the package; and a seal breakage detecting module that transmits a signal to the one or more signal lines and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region.
To be able to detect seal breakage of a container portion for containing an article by a simpler scheme, there is provided a signal processing device including: a processor that executes a program; and a memory that stores the program for causing the processor to function as a detection unit that transmits a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles, and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region.1. A signal processing device comprising: a processor that executes a program; and a memory that stores the program for causing the processor to function as a detection unit that transmits a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles, and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region. 2. The signal processing device according to claim 1, wherein the detection unit detects seal breakage of a first container portion, in a case in which a first signal does not return via a first region when a predetermined delay time has elapsed since a transmission time of the first signal that is to pass through the first region corresponding to the first container portion. 3. The signal processing device according to claim 2, wherein the one or more signal lines are one or more branch lines that branch from a common line, and the detection unit transmits the first signal to a first branch line that extends through the first region corresponding to the first container portion, and determines whether the first signal returns from the common line. 4. 
The signal processing device according to claim 2, wherein the one or more signal lines are one or more branch lines that branch from a common line, and the detection unit transmits the first signal to the common line, and determines whether the first signal returns from a first branch line that extends through the first region corresponding to the first container portion. 5. The signal processing device according to claim 2, wherein the first signal is a pulse signal having a predetermined pulse width. 6. The signal processing device according to claim 2, wherein the detection unit does not determine whether the first signal returns via the first region, once the detection unit detects the breakage of the seal of the first container portion. 7. The signal processing device according to claim 1, wherein the processor further functions as a data output unit that outputs seal breakage time data recorded with respect to each container portion by the detection unit, to an external device via a communication interface. 8. The signal processing device according to claim 7, wherein the detection unit records the seal breakage time data in association with a user's identifier acquired in advance via the communication interface. 9. The signal processing device according to claim 1, wherein the article is a medicine, and the processor further functions as an alarm control unit that notifies a user of a timing to take the medicine contained in each container portion, in accordance with administration schedule data acquired in advance. 10. The signal processing device according to claim 1, wherein the article is a medicine, and the processor further functions as an alarm control unit that notifies a user of an administration error of the medicine that is determined using seal breakage time data recorded with respect to each container portion by the detection unit. 11. 
A seal breakage detecting module comprising: a signal processing device according to claim 1; and one or more connection terminals that connects the signal processing device to the one or more signal lines. 12. A seal breakage detecting module comprising: a signal processing device according to claim 1; and a communication interface that transmits data recorded by the signal processing device to an external device. 13. The seal breakage detecting module according to claim 12, wherein the communication interface is a wireless communication interface, and the seal breakage detecting module further includes an antenna used by the wireless communication interface. 14. A program for causing a processor of a signal processing device to function as a detection unit that transmits a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles, and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region. 15. A seal breakage detecting method executed by a processor of a signal processing device, the seal breakage detecting method comprising: transmitting a signal to one or more signal lines formed of a breakable material in such a manner that the signal passes through regions corresponding to one or more respective container portions of a package for containing articles; and detecting seal breakage of each container portion on the basis of whether the transmitted signal returns via each region. 16. 
An article packing element comprising: a package that includes one or more container portions for containing articles; one or more signal lines that are formed of a breakable material and extend through regions corresponding to the one or more respective container portions of the package; and a seal breakage detecting module that transmits a signal to the one or more signal lines and detects seal breakage of each container portion on the basis of whether the transmitted signal returns via each region.
2,600
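The seal-breakage claims above hinge on one decision rule (claim 2): transmit a signal on a breakable line and flag the seal as broken if the signal does not return within a predetermined delay after the transmission time. A minimal sketch of that rule, with the hardware I/O replaced by hypothetical callables (`send_pulse`, `pulse_returned`) not present in the original claims:

```python
import time

def detect_seal_breakage(send_pulse, pulse_returned, delay_s=0.001):
    """Transmit a pulse on one signal line and report whether the seal
    is broken, based on whether the pulse returns within the delay
    window (a sketch of claim 2's rule)."""
    send_pulse()
    deadline = time.monotonic() + delay_s
    while time.monotonic() < deadline:
        if pulse_returned():
            return False  # signal came back: line intact, seal unbroken
    return True  # no return within the delay: line broken, seal breached

# Intact line: the pulse "returns" immediately.
intact = detect_seal_breakage(lambda: None, lambda: True)
# Broken line: the pulse never returns.
broken = detect_seal_breakage(lambda: None, lambda: False)
```

Per claim 6, a real detection unit would also stop polling a line once breakage has been recorded; that bookkeeping is omitted here.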
10,370
10,370
14,734,904
2,643
An interactive voice response (IVR) system establishes an IVR call connection with a user. The IVR system sends a service recommendation request for the user to a voice recommendation system to acquire one or more voice service modules recommended to the user via the voice recommendation system. The IVR system assembles the voice service modules recommended to the user into a menu for voice display. The voice service modules recommended to the user are voice service modules matched with service demand data of the user among preset voice service modules. The IVR system no longer displays according to a fixed voice display process, but displays the voice service modules in a personalized manner. Therefore, the user, when acquiring corresponding services, is able to locate desirable voice service modules without executing feedback operations repeatedly, which shortens the voice display time and decreases the call loss, thereby reducing the usage of system resources.
1. A method comprising: establishing a call connection with a user; acquiring a number of voice service modules recommended to the user; and assembling the voice service modules into a menu for a voice display. 2. The method of claim 1, wherein the service modules are voice service modules matched with service demand data of the user among preset voice service modules. 3. The method of claim 1, wherein the assembling comprises: performing the voice display for a voice service module in response to determining that the number of the voice service modules is 1. 4. The method of claim 1, wherein the assembling comprises: determining that the number of the voice service modules is greater than 1; performing an association operation between the voice service modules and an interactive interface; and performing the voice display for the voice service modules according to an association relationship between the voice services modules and the interactive interface. 5. The method of claim 1, further comprising: acquiring one or more human service modules recommended to the user. 6. The method of claim 5, wherein the assembling comprises: assembling the voice service modules into the menu for the voice display in response to determining that the number of the voice service modules is not equal to 0. 7. The method of claim 5, further comprising: determining that the number of the voice service modules is equal to 0 and the number of the one or more human service modules is not equal to 0; and turning to one of the one or more human service modules. 8. The method of claim 1, wherein the acquiring comprises: acquiring voice data input by the user; converting the voice data into corresponding text data as service demand data; and acquiring one or more voice service modules recommended to the user according to the service demand data. 9. 
The method of claim 1, wherein the acquiring comprises: acquiring account information of the user; acquiring network behavior data of the user as service demand data based on the account information; and acquiring one or more voice service modules recommended to the user according to the service demand data. 10. The method of claim 1, wherein the acquiring comprises: acquiring voice data input by the user; converting the voice data into corresponding text data; acquiring account information of the user; acquiring network behavior data of the user based on the account information; using the text data and the network behavior data together as service demand data; and acquiring one or more voice service modules recommended to the user according to the service demand data. 11. The method of claim 10, wherein the one or more voice service modules include a first voice service module matching the text data and a second voice service module matching the network behavior data. 12. The method of claim 11, further comprising: setting a voice service module matching both the text data and the network behavior data with a priority for the voice display. 13. The method of claim 1, further comprising: acquiring interface operation information input by the user; acquiring a service operation corresponding to the interface operation information based on assembling information of the menu; and executing the corresponding service operation. 14. An interactive voice response (IVR) system comprising: an establishing unit that establishes a call connection with a user; a sending unit that sends a service recommendation request for the user to a voice recommendation system to acquire a number of voice service modules recommended to the user, the voice service modules including voice service modules matched with service demand data of the user among preset voice service modules; and an assembling unit that assembles the voice service modules into a menu for a voice display. 15. 
The IVR system of claim 14, wherein the assembling unit further: determines that the number of the voice service modules is greater than 1; performs an association operation between the voice service modules and an interactive interface; and performs the voice display for the voice service modules according to an association relationship between the voice services modules and the interactive interface. 16. The IVR system of claim 14, wherein the sending unit further: acquires one or more human service modules recommended to the user via the voice recommendation system. 17. The IVR system of claim 16, wherein the assembling unit further: assembles the voice service modules into the menu for the voice display in response to determining that the number of the voice service modules is not equal to 0. 18. The IVR system of claim 16, wherein the assembling unit further: determines that the number of the voice service modules is equal to 0 and the number of the one or more human service modules is not equal to 0; and turns to one of the one or more human service modules. 19. The IVR system of claim 14, wherein the service demand data includes: text data converted from voice data input by the user; network behavior data of the user based on account information of the user; or a combination of the text data and the network behavior data. 20. One or more memories having stored thereon computer executable instructions executable by one or more processors to perform operations comprising: establishing a call connection with a user; acquiring a number of voice service modules recommended to the user; and assembling the voice service modules into a menu for a voice display.
An interactive voice response (IVR) system establishes an IVR call connection with a user. The IVR system sends a service recommendation request for the user to a voice recommendation system to acquire one or more voice service modules recommended to the user via the voice recommendation system. The IVR system assembles the voice service modules recommended to the user into a menu for voice display. The voice service modules recommended to the user are voice service modules matched with service demand data of the user among preset voice service modules. The IVR system no longer displays according to a fixed voice display process, but displays the voice service modules in a personalized manner. Therefore, the user, when acquiring corresponding services, is able to locate desirable voice service modules without executing feedback operations repeatedly, which shortens the voice display time and decreases the call loss, thereby reducing the usage of system resources.1. A method comprising: establishing a call connection with a user; acquiring a number of voice service modules recommended to the user; and assembling the voice service modules into a menu for a voice display. 2. The method of claim 1, wherein the service modules are voice service modules matched with service demand data of the user among preset voice service modules. 3. The method of claim 1, wherein the assembling comprises: performing the voice display for a voice service module in response to determining that the number of the voice service modules is 1. 4. The method of claim 1, wherein the assembling comprises: determining that the number of the voice service modules is greater than 1; performing an association operation between the voice service modules and an interactive interface; and performing the voice display for the voice service modules according to an association relationship between the voice services modules and the interactive interface. 5. 
The method of claim 1, further comprising: acquiring one or more human service modules recommended to the user. 6. The method of claim 5, wherein the assembling comprises: assembling the voice service modules into the menu for the voice display in response to determining that the number of the voice service modules is not equal to 0. 7. The method of claim 5, further comprising: determining that the number of the voice service modules is equal to 0 and the number of the one or more human service modules is not equal to 0; and turning to one of the one or more human service modules. 8. The method of claim 1, wherein the acquiring comprises: acquiring voice data input by the user; converting the voice data into corresponding text data as service demand data; and acquiring one or more voice service modules recommended to the user according to the service demand data. 9. The method of claim 1, wherein the acquiring comprises: acquiring account information of the user; acquiring network behavior data of the user as service demand data based on the account information; and acquiring one or more voice service modules recommended to the user according to the service demand data. 10. The method of claim 1, wherein the acquiring comprises: acquiring voice data input by the user; converting the voice data into corresponding text data; acquiring account information of the user; acquiring network behavior data of the user based on the account information; using the text data and the network behavior data together as service demand data; and acquiring one or more voice service modules recommended to the user according to the service demand data. 11. The method of claim 10, wherein the one or more voice service modules include a first voice service module matching the text data and a second voice service module matching the network behavior data. 12. 
The method of claim 11, further comprising: setting a voice service module matching both the text data and the network behavior data with a priority for the voice display. 13. The method of claim 1, further comprising: acquiring interface operation information input by the user; acquiring a service operation corresponding to the interface operation information based on assembling information of the menu; and executing the corresponding service operation. 14. An interactive voice response (IVR) system comprising: an establishing unit that establishes a call connection with a user; a sending unit that sends a service recommendation request for the user to a voice recommendation system to acquire a number of voice service modules recommended to the user, the voice service modules including voice service modules matched with service demand data of the user among preset voice service modules; and an assembling unit that assembles the voice service modules into a menu for a voice display. 15. The IVR system of claim 14, wherein the assembling unit further: determines that the number of the voice service modules is greater than 1; performs an association operation between the voice service modules and an interactive interface; and performs the voice display for the voice service modules according to an association relationship between the voice services modules and the interactive interface. 16. The IVR system of claim 14, wherein the sending unit further: acquires one or more human service modules recommended to the user via the voice recommendation system. 17. The IVR system of claim 16, wherein the assembling unit further: assembles the voice service modules into the menu for the voice display in response to determining that the number of the voice service modules is not equal to 0. 18. 
The IVR system of claim 16, wherein the assembling unit further: determines that the number of the voice service modules is equal to 0 and the number of the one or more human service modules is not equal to 0; and turns to one of the one or more human service modules. 19. The IVR system of claim 14, wherein the service demand data includes: text data converted from voice data input by the user; network behavior data of the user based on account information of the user; or a combination of the text data and the network behavior data. 20. One or more memories having stored thereon computer executable instructions executable by one or more processors to perform operations comprising: establishing a call connection with a user; acquiring a number of voice service modules recommended to the user; and assembling the voice service modules into a menu for a voice display.
2,600
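The IVR claims above dispatch on the number of recommended modules: one module is played directly (claim 3), several are associated with an interactive interface and assembled into a menu (claim 4), and zero falls back to a recommended human service module (claim 7). A minimal sketch of that branching, with the function name and tuple return shape chosen for illustration only:

```python
def assemble_menu(voice_modules, human_modules):
    """Dispatch on the number of recommended voice service modules,
    mirroring claims 3, 4 and 7 in sketch form."""
    if len(voice_modules) == 1:
        # Exactly one module: perform the voice display directly.
        return ("play", voice_modules[0])
    if len(voice_modules) > 1:
        # Associate each module with an interactive key (the "association
        # operation" of claim 4), then present the assembled menu.
        return ("menu", {str(i + 1): m for i, m in enumerate(voice_modules)})
    if human_modules:
        # No voice modules: turn to a human service module.
        return ("human", human_modules[0])
    return ("none", None)
```

Used with, say, `assemble_menu(["balance", "transfer"], ["agent"])`, the function returns a keyed menu rather than a fixed display process, which is the personalization point the abstract emphasizes.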
10,371
10,371
15,815,106
2,642
Systems, methods, apparatuses, and computer program products for managing or monitoring of the control channel in new radio (NR) through blind searches are provided. One method may include configuring, by a network node, multiple search spaces, sets of search spaces, and/or control resource sets, to a user equipment, that results in monitoring occasions where more blind decodings are required than allowed by capability of the user equipment. The method may further include identifying the monitoring occasions for which an allowed number of blind decodings is exceeded, determining a reduced set of blind decodings and/or candidates determined by predefined search space priorities or rules, and transmitting physical downlink control channel(s) to the user equipment given the reduced set.
1. A method, comprising: configuring, by a network node, multiple search spaces, sets of search spaces, and/or control resource sets, to a user equipment, that results in monitoring occasions where more blind decodings are required than allowed by capability of the user equipment; identifying the monitoring occasions for which an allowed number of blind decodings is exceeded; determining a reduced set of blind decodings and/or candidates, wherein the reduced set of blind decodings and/or candidates are determined by predefined search space priorities or rules; transmitting physical downlink control channel(s) to the user equipment given the reduced set. 2. An apparatus, comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and computer program code configured, with the at least one processor, to cause the apparatus at least to configure multiple search spaces, sets of search spaces, and/or control resource sets, to a user equipment, that results in monitoring occasions where more blind decodings are required than allowed by capability of the user equipment; identify the monitoring occasions for which an allowed number of blind decodings is exceeded; determine a reduced set of blind decodings and/or candidates, wherein the reduced set of blind decodings and/or candidates are determined by predefined search space priorities or rules; transmit physical downlink control channel(s) to the user equipment given the reduced set. 3. The apparatus according to claim 2, wherein the predefined search space priorities or rules comprise rules configured to prioritize the blind decoding attempts on different ones of said multiple search spaces or sets of search spaces. 4. 
The apparatus according to claim 2, wherein the predefined search space priorities or rules comprise: assigning a priority number to each of the blind decodings and/or candidates that are subject to potential blind decoding reduction; and reducing the number of blind decodings according to the priority number. 5. The apparatus according to claim 4, wherein the predefined search space priorities or rules further comprise dropping the blind decodings with lowest priority numbers until the allowed level of blind decodings is reached. 6. The apparatus according to claim 4, wherein the priority number within an aggregation level (AL) of the search space (SS) depends on a total number of blind decodings per aggregation level (AL) within the search space (SS). 7. The apparatus according to claim 4, wherein the priority number is calculated according to the following equation: p_bd(SS, AL) = α(SS, AL) · BD index(SS, AL) / Number of BDs(SS, AL), where p_bd represents the priority number, the BD index (SS, AL) is the blind decoding index within a search space (SS) and aggregation level (AL), Number of BDs (SS, AL) is the number of blind decodings within the search space (SS) and aggregation level (AL), and α(SS, AL) is a priority scaler. 8. The apparatus according to claim 4, wherein, when multiple blind decodings have the same priority number, the predefined search space priorities or rules further comprise dropping the blind decoding with a lowest search space priority. 9. 
The apparatus according to claim 2, wherein an order of the search space priority is defined according to at least one of the following criteria: priority order according to aggregation level, priority order between said sets of search spaces, priority order according to blind decoding search space set type, priority order according to downlink control information (DCI) size, or priority order according to radio network temporary identifier (RNTI) associated with the search space; and wherein the predefined search space priorities or rules further comprise dropping blind decodings at the user equipment based on a priority order according to component carrier and/or bandwidth part in the following predefined order: (1) aggregation levels, (2) scheduling types, (3) search space sets, and (4) component carriers. 10. The apparatus according to claim 2, wherein the blind decoding capability of the user equipment is determined per time slot and the identifying of the monitoring occasions is done per time slot. 11. A method, comprising: receiving, by a user equipment, configuration of blind decodings or candidates on multiple search spaces, sets of search spaces and/or control resource sets that results in monitoring occasions where a number of required blind decodings exceeds a capability of the user equipment; identifying the monitoring occasions for which the blind decoding capability of the user equipment is exceeded and reducing the set of blind decodings or candidates based on predefined search space priorities or rules; and receiving, by the user equipment, physical downlink control channel(s) given the reduced set of blind decodings or candidates. 12. 
An apparatus, comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and computer program code configured, with the at least one processor, to cause the apparatus at least to receive configuration of blind decodings or candidates on multiple search spaces, sets of search spaces and/or control resource sets that results in monitoring occasions where a number of required blind decodings exceeds a capability of the apparatus; identify the monitoring occasions for which the blind decoding capability of the apparatus is exceeded and reduce the set of blind decodings or candidates based on predefined search space priorities or rules; and receive physical downlink control channel(s) given the reduced set of blind decodings or candidates. 13. The apparatus according to claim 12, wherein the predefined search space priorities or rules comprise rules configured to prioritize the blind decodings on different ones of said multiple search spaces and/or sets of search spaces. 14. The apparatus according to claim 12, wherein the predefined search space priorities or rules comprise: assigning a priority number to each of the blind decodings and/or candidates that are subject to potential blind decoding reduction; and reducing the number of blind decodings according to the priority number. 15. The apparatus according to claim 14, wherein the predefined search space priorities or rules further comprise dropping the blind decodings with lowest priority numbers until the allowed level of blind decodings is reached. 16. 
The apparatus according to claim 13, wherein the apparatus is configured to reduce the number of blind decodings jointly over blind decodings in said multiple search spaces, sets of search spaces and/or control resource sets, or wherein the apparatus is configured to reduce the number of blind decodings sequentially in different search spaces, sets of search spaces, and/or control resource sets according to the search space priority. 17. The apparatus according to claim 14, wherein the priority number within an aggregation level (AL) of a search space (SS) depends on a total number of blind decodings per aggregation level (AL) within the search space (SS). 18. The apparatus according to claim 14, further comprising calculating the priority number according to the following equation: p_bd(SS, AL) = α(SS, AL) × BD index(SS, AL) / Number of BDs(SS, AL), where p_bd represents the priority number, the BD index (SS, AL) is the blind decoding index within the search space (SS) and aggregation level (AL), Number of BDs (SS, AL) is the number of blind decodings within the search space (SS) and aggregation level (AL), and α(SS, AL) is a priority scaler. 19. The apparatus according to claim 14, wherein, when multiple blind decodings have the same priority number, the predefined search space priorities or rules further comprise dropping the blind decoding attempts with a lowest search space priority. 20. 
The apparatus according to claim 13, wherein an order of the search space priority is defined according to at least one of the following criteria: priority order according to aggregation level, priority order between said sets of search spaces, priority order according to blind decoding search space set type, priority order according to downlink control information (DCI) size, or priority order according to radio network temporary identifier (RNTI) associated with the search space; or wherein the predefined search space priorities or rules further comprise dropping blind decodings at the user equipment based on a priority order according to component carrier and/or bandwidth part in the following predefined order: (1) aggregation levels, (2) scheduling types, (3) search space sets, and (4) component carriers.
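The reduction rule in claims 4–8 can be sketched as a short routine: compute each candidate's priority number p_bd = α(SS, AL) × BD index / Number of BDs, then drop the lowest priority numbers until the user equipment's blind decoding budget is met, breaking ties by search space priority. This is a minimal illustration only; the dict field names, the numeric inputs, and the tie-break encoding are assumptions for the example, not taken from the claims beyond the formula itself.

```python
def priority_number(c):
    # Claim 7: p_bd(SS, AL) = alpha(SS, AL) * BD index(SS, AL) / Number of BDs(SS, AL)
    return c["alpha"] * c["bd_index"] / c["num_bds"]

def reduce_blind_decodings(candidates, budget):
    """Drop the blind decodings with the lowest priority numbers (claim 5)
    until the candidate count fits the UE's per-slot budget (claim 10).
    Ties on p_bd are broken by a search-space priority value (claim 8)."""
    if len(candidates) <= budget:
        return list(candidates)
    # Rank best-first: highest p_bd, then highest search-space priority on ties.
    ranked = sorted(candidates,
                    key=lambda c: (priority_number(c), c["ss_priority"]),
                    reverse=True)
    return ranked[:budget]
```

Usage: with four candidates in one search space and aggregation level, p_bd rises with the BD index, so a budget of two keeps the two highest-indexed candidates and drops the rest.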
Systems, methods, apparatuses, and computer program products for managing or monitoring of the control channel in new radio (NR) through blind searches are provided. One method may include configuring, by a network node, multiple search spaces, sets of search spaces, and/or control resource sets, to a user equipment, that results in monitoring occasions where more blind decodings are required than allowed by capability of the user equipment. The method may further include identifying the monitoring occasions for which an allowed number of blind decodings is exceeded, determining a reduced set of blind decodings and/or candidates determined by predefined search space priorities or rules, and transmitting physical downlink control channel(s) to the user equipment given the reduced set.
2,600
10,372
10,372
14,938,694
2,612
The present disclosure relates to methods for displaying facility information. One such method includes causing a client terminal to render on-screen floorplan data at a desired position and resolution, wherein the floorplan data is defined by a plurality of scalable resolution independent vector images, each resolution independent vector image representing a physical space in a facility. A set of rules is executed thereby to apply determined visual characteristics to one or more of the vector images, wherein each of the one or more vector images is associated with a data point in a building management system, and wherein for a given vector image the set of rules defines a relationship between observed data point values and visual characteristics to be displayed.
1. A computer implemented method for displaying facility information, the method including: maintaining access to a repository of floorplan data, wherein the floorplan data is defined by a plurality of scalable resolution independent vector images, each resolution independent vector image representing a physical space in a facility; maintaining access to a database that associates a plurality of the vector images with data points defined in a building management system; in response to a request from a client terminal, enabling the client terminal to render the floorplan data at a desired position and resolution; and configuring the client terminal to render each of the plurality of vector images with graphical characteristics determined by reference to the data points defined in the building management system. 2. A method according to claim 1 wherein the scalable resolution independent vector images include HTML5 Scalable Vector Graphics images. 3. A method according to claim 1 wherein, for a given vector image, the graphical characteristics are determined by reference to a relationship between: (i) a measured temperature value at the represented physical space; and (ii) a defined temperature setpoint value defined for the represented physical space. 4. A method according to claim 1 wherein, for a given vector image, the graphical characteristics include a fill for the vector image. 5. A method according to claim 4 wherein the fill is characterized by a colour and/or pattern and/or opacity. 6. A method according to claim 4 wherein the fill is characterized by alphanumeric information. 7. A method according to claim 1 including enabling a user to define video data representative of navigation of the rendered floorplan. 8. A method according to claim 7 including enabling the user to share the video data with a second user of a further client terminal. 9. 
A method according to claim 1 wherein configuring the client terminal to render each of the plurality of vector images with graphical characteristics determined by reference to the data points defined in the building management system includes configuring data binding between the client terminal and a remote data source that provides data indicative of instructions to modify the graphical characteristics of one or more of the vector images. 10. A method according to claim 1 wherein configuring the client terminal to render each of the plurality of vector images with graphical characteristics determined by reference to the data points defined in the building management system includes instructing the client terminal to modify the graphical characteristics of one or more of the vector images in response to changes in the associated data points in the building management system. 11. A computer implemented method for displaying facility information, the method including: causing a client terminal to render on-screen floorplan data at a desired position and resolution, wherein the floorplan data is defined by a plurality of scalable resolution independent vector images, each resolution independent vector image representing a physical space in a facility; and executing a set of rules thereby to apply determined visual characteristics to one or more of the vector images, wherein each of the one or more vector images is associated with a data point in a building management system, and wherein for a given vector image the set of rules defines a relationship between observed data point values and visual characteristics to be displayed. 12. A method according to claim 11 wherein the scalable resolution independent vector images include HTML5 Scalable Vector Graphics images. 13. 
A method according to claim 11 wherein, for a given vector image, the graphical characteristics are determined by reference to a relationship between: (i) a measured temperature value at the represented physical space; and (ii) a defined temperature setpoint value defined for the represented physical space. 14. A method according to claim 11 wherein, for a given vector image, the graphical characteristics include a fill for the vector image. 15. A method according to claim 14 wherein the fill is characterized by a colour and/or pattern and/or opacity. 16. A method according to claim 14 wherein the fill is characterized by alphanumeric information. 17. A method according to claim 1 including enabling a user to define video data representative of navigation of the rendered floorplan. 18. A method according to claim 17 including enabling the user to share the video data with a second user of a further client terminal. 19. A non-transitory computer readable medium containing code that, when executed on one or more processors, causes the processors to perform a method according to claim 1. 20. A computer system configured to perform a method according to claim 1.
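The rule in claims 3 and 13 — a vector image's fill driven by the relationship between a measured temperature value and a defined setpoint — can be illustrated with a small helper. The colour values, the tolerance band, and the SVG fragment below are assumptions made for the sketch, not part of the claimed method.

```python
def fill_colour(measured, setpoint, tolerance=1.0):
    """Pick a fill colour from the deviation between the observed data point
    value and the defined setpoint (thresholds and colours are illustrative)."""
    delta = measured - setpoint
    if delta > tolerance:
        return "#d9534f"   # warmer than setpoint: red fill
    if delta < -tolerance:
        return "#5bc0de"   # cooler than setpoint: blue fill
    return "#5cb85c"       # within tolerance: green fill

def zone_svg(zone_id, measured, setpoint):
    # One resolution-independent vector image (here, an SVG path) per physical
    # space, with the rule's visual characteristic applied as its fill.
    return f'<path id="{zone_id}" fill="{fill_colour(measured, setpoint)}"/>'
```

In a browser-rendered HTML5 SVG floorplan (claim 2), the same effect would typically be achieved by updating each path's `fill` attribute as new data point values arrive from the building management system.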
2,600
10,373
10,373
16,113,301
2,692
A hand-held device with a sensor for providing a signal indicative of a position of the hand-held device relative to an object surface enables power to the sensor at a first time interval when the hand-held device is indicated to be in a position that is stationary and adjacent relative to the object surface, enables power to the sensor at a second time interval shorter than the first time interval when the hand-held device is indicated to be in a position that is moving and adjacent relative to the object surface, and enables power to the sensor at a third time interval when the hand-held device is determined to be in a position that is removed relative to the object surface.
1. A non-transitory, computer readable media having stored thereon instructions for managing a hand-held device having a plurality of input receiving elements, a first wireless command transmission device, a second wireless command transmission device, and a sensor, the instructions, when executed by a processing unit of the hand-held device, performing steps comprising: using signals received from the sensor to determine when the hand-held portable device is positioned proximate to an object surface and to determine when the hand-held portable device is removed from the object surface; and causing the hand-held device to automatically transition from a first operational mode to a second operational mode when it is determined from a signal received from the sensor that the hand-held portable device has been moved proximate to the object surface and to automatically transition from the second operational mode back to the first operational mode when it is determined from a signal received from the sensor that the hand-held portable device has been subsequently moved away from the object surface; wherein, in the first operational mode, the hand-held device is configured to use the first wireless command transmission device when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and, in the second operational mode, the hand-held device is configured to use the second wireless command transmission device when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and wherein the first wireless transmission device is different from the second command transmission device. 2. 
The non-transitory, computer readable media as recited in claim 1, wherein the first wireless transmission device comprises a radio frequency transmission device and wherein the second wireless transmission device comprises an infrared transmission device. 3. The non-transitory, computer readable media as recited in claim 1, wherein, in the first operational mode, the hand-held device is configured to use a first command code set when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and, in the second operational mode, the hand-held device is configured to use a second command code set when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and wherein the first command code set is different than the second command code set. 4. The non-transitory, computer readable media as recited in claim 3, wherein the first wireless transmission device comprises a radio frequency transmission device and wherein the second wireless transmission device comprises an infrared transmission device. 5. The non-transitory, computer readable media as recited in claim 3, wherein the plurality of input receiving elements comprise a plurality of soft input elements caused to be displayed in a touch sensitive surface of the hand-held device. 6. The non-transitory, computer readable media as recited in claim 5, wherein, in the first operational mode, the hand-held device is configured to display a first set of the plurality of soft input elements in the touch sensitive surface of the hand-held device and, in the second operational mode, the hand-held device is configured to display a second set of the plurality of soft input elements in the touch sensitive surface of the hand-held device and wherein the first set of the plurality of soft input elements is different than the second set of the plurality of input elements. 
7. The non-transitory, computer readable media as recited in claim 3, wherein the instructions use input received into the hand-held device to select the first command code set and the second command code set from a library of command code sets stored in a memory of the hand-held device. 8. The non-transitory, computer readable media as recited in claim 7, wherein the input used to select the first command code set and the second command code set is received via activations of one or more of the plurality of input receiving elements. 9. The non-transitory, computer readable media as recited in claim 3, wherein the first command code set and the second command code set are received into the hand-held device from a device located remotely from the hand-held device. 10. The non-transitory, computer readable media as recited in claim 1, wherein the instructions use a distance between the hand-held device and the object surface as sensed by the sensor to determine when the hand-held device has been moved proximate to and away from the object surface. 11. The non-transitory, computer readable media as recited in claim 1, wherein the sensor comprises an optical sensing system.
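The abstract's power-management scheme — three sensor polling intervals chosen by whether the device is stationary and adjacent, moving and adjacent, or removed from the object surface — amounts to a small state-to-interval mapping. The millisecond values below are assumptions for the sketch; the abstract only requires the second interval to be shorter than the first.

```python
def sensor_poll_interval_ms(adjacent, moving,
                            first_ms=100, second_ms=10, third_ms=500):
    """Interval at which power is enabled to the position sensor.
    Only the ordering second < first is stated; values are illustrative."""
    if adjacent and moving:
        return second_ms   # moving and adjacent: poll most often
    if adjacent:
        return first_ms    # stationary and adjacent
    return third_ms        # removed from the object surface
```

Polling least often when the device is away from the surface, and most often while it is being moved across it, conserves battery without sacrificing tracking responsiveness.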
A hand-held device with a sensor for providing a signal indicative of a position of the hand-held device relative to an object surface enables power to the sensor at a first time interval when the hand-held device is indicated to be in a position that is stationary and adjacent relative to the object surface, enables power to the sensor at a second time interval shorter than the first time interval when the hand-held device is indicated to be in a position that is moving and adjacent relative to the object surface, and enables power to the sensor at a third time interval when the hand-held device is determined to be in a position that is removed relative to the object surface.1. A non-transitory, computer readable media having stored thereon instructions for managing a hand-held device having a plurality of input receiving elements, a first wireless command transmission device, a second wireless command transmission device, and a sensor, the instructions, when executed by a processing unit of the hand-held device, performing steps comprising: using signals received from the sensor to determine when the hand-held portable device is positioned proximate to an object surface and to determine when the hand-held portable device is removed from the object surface; and causing the hand-held device to automatically transition from a first operational mode to a second operational mode when it is determined from a signal received from the sensor that the hand-held portable device has been moved proximate to the object surface and to automatically transition from the second operational mode back to the first operational mode when it is determined from a signal received from the sensor that the hand-held portable device has been subsequently moved away from the object surface; wherein, in the first operational mode, the hand-held device is configured to use the first wireless command transmission device when transmitting one or more command communications in response to an 
activation of one or more of the plurality of input receiving elements and, in the second operational mode, the hand-held device is configured to use the second wireless command transmission device when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and wherein the first wireless transmission device is different from the second command transmission device. 2. The non-transitory, computer readable media as recited in claim 1, wherein the first wireless transmission device comprises a radio frequency transmission device and wherein the second wireless transmission device comprises an infrared transmission device. 3. The non-transitory, computer readable media as recited in claim 1, wherein, in the first operational mode, the hand-held device is configured to use a first command code set when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and, in the second operational mode, the hand-held device is configured to use a second command code set when transmitting one or more command communications in response to an activation of one or more of the plurality of input receiving elements and wherein the first command code set is different than the second command code set. 4. The non-transitory, computer readable media as recited in claim 3, wherein the first wireless transmission device comprises a radio frequency transmission device and wherein the second wireless transmission device comprises an infrared transmission device. 5. The non-transitory, computer readable media as recited in claim 3, wherein the plurality of input receiving elements comprise a plurality of soft input elements caused to be displayed in a touch sensitive surface of the hand-held device. 6. 
The non-transitory, computer readable media as recited in claim 5, wherein, in the first operational mode, the hand-held device is configured to display a first set of the plurality of soft input elements in the touch sensitive surface of the hand-held device and, in the second operational mode, the hand-held device is configured to display a second set of the plurality of soft input elements in the touch sensitive surface of the hand-held device and wherein the first set of the plurality of soft input elements is different than the second set of the plurality of input elements. 7. The non-transitory, computer readable media as recited in claim 3, wherein the instructions use input received into the hand-held device to select the first command code set and the second command code set from a library of command code sets stored in a memory of the hand-held device. 8. The non-transitory, computer readable media as recited in claim 7, wherein the input used to select the first command code set and the second command code set is received via activations of one or more of the plurality of input receiving elements. 9. The non-transitory, computer readable media as recited in claim 3, wherein the first command code set and the second command code set are received into the hand-held device from a device located remotely from the hand-held device. 10. The non-transitory, computer readable media as recited in claim 1, wherein the instructions use a distance between the hand-held device and the object surface as sensed by the sensor to determine when the hand-held device has been moved proximate to and away from the object surface. 11. The non-transitory, computer readable media as recited in claim 1, wherein the sensor comprises an optical sensing system.
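The record above describes two behaviors: proximity to an object surface toggles the device between operational modes (and thus between the RF and IR transmitters, claims 1-2), while the abstract describes duty-cycling sensor power across three polling intervals depending on whether the device is stationary-adjacent, moving-adjacent, or removed. A minimal sketch of that combined logic is below; all class names, method names, and interval values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the described behavior. Interval values and all
# identifiers are assumptions for illustration only.

STATIONARY_ADJACENT_S = 1.0   # first (longest) polling interval
MOVING_ADJACENT_S = 0.1       # second, shorter interval
REMOVED_S = 0.5               # third interval

class HandheldDevice:
    def __init__(self):
        self.mode = "first"           # first mode uses the RF transmitter
        self.poll_interval = REMOVED_S

    def on_sensor(self, adjacent, moving):
        # Mode transition (claim 1): proximity toggles first <-> second mode
        self.mode = "second" if adjacent else "first"
        # Power management (abstract): choose the sensor polling interval
        if not adjacent:
            self.poll_interval = REMOVED_S
        elif moving:
            self.poll_interval = MOVING_ADJACENT_S
        else:
            self.poll_interval = STATIONARY_ADJACENT_S

    def transmitter(self):
        # Claim 2: first mode -> RF device, second mode -> IR device
        return "IR" if self.mode == "second" else "RF"

dev = HandheldDevice()
dev.on_sensor(adjacent=True, moving=False)
assert dev.transmitter() == "IR"
assert dev.poll_interval == STATIONARY_ADJACENT_S
dev.on_sensor(adjacent=False, moving=False)
assert dev.transmitter() == "RF"
assert dev.poll_interval == REMOVED_S
```

The sketch folds both claimed transitions into one state update; a real device would drive `on_sensor` from timer-scheduled sensor reads at the currently selected interval.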
2,600
10,374
10,374
14,157,090
2,631
Methods and systems are provided for using frequency spreading during communications, in particular communications in which multiple carriers (or subcarriers) are used. The frequency spreading may comprise generating a plurality of spreading data vectors based on transmit data, such as by application of a spreading matrix to portions of the transmit data. Each spreading data vector may comprise a plurality of elements, for assignment to the multiple subcarriers. The receive-side device may then apply frequency de-spreading, to obtain the original transmit data. The frequency de-spreading may comprise use of the same spreading matrix on data extracted from received signals, which (the data) may correspond to the plurality of spreading data vectors.
1. A method, comprising: applying in a first electronic device, frequency spreading to transmit data intended for transmission to a second electronic device, wherein: the transmission comprises use of a plurality of subcarriers; and the frequency spreading comprises generating a plurality of spreading data vectors based on the transmit data, wherein: each spreading data vector is generated by applying a spreading matrix to a portion of the transmit data; and each spreading data vector comprises a plurality of elements, for assignment to the plurality of subcarriers, for transmission to the second electronic device. 2. The method of claim 1, comprising assigning the plurality of spreading data vectors by interleaving onto the plurality of the subcarriers. 3. The method of claim 1, comprising setting a size of the portion of the transmit data to which the spreading matrix is applied based on a number of the plurality of subcarriers. 4. The method of claim 1, comprising setting a number of the plurality of elements in each spreading data vector based on a number of the plurality of subcarriers. 5. The method of claim 1, comprising configuring the spreading matrix to distribute power evenly among the plurality of subcarriers. 6. The method of claim 1, wherein the transmit data comprises a sequence of symbols generated by application of a modulation scheme. 7. The method of claim 6, comprising applying the modulation scheme to a sequence of input data, to generate the sequence of symbols. 8. The method of claim 7, comprising generating the sequence of input data based on an encoding of an original bit stream. 9. The method of claim 1, comprising applying in the second electronic device, frequency de-spreading to signals received from the first electronic device, to extract the transmit data. 10. 
The method of claim 9, comprising generating in the second electronic device the transmit data from the received signal by: extracting the plurality of spreading data vectors from the received signals; and applying the spreading matrix to each one of the extracted plurality of spreading data vectors, to regenerate a portion of the transmit data corresponding to the one of the extracted plurality of spreading data vectors. 11. The method of claim 9, comprising applying in the second electronic device, de-mapping to the transmit data, to enable obtaining a copy of an original input in the first electronic device. 12. A system, comprising: one or more circuits for use in an electronic device, the one or more circuits being operable to apply frequency spreading to transmit data intended for transmission to a second electronic device, wherein: the transmission comprises use of a plurality of subcarriers; and the frequency spreading comprises generating a plurality of spreading data vectors based on the transmit data, wherein: each spreading data vector is generated by applying a spreading matrix to a portion of the transmit data; and each spreading data vector comprises a plurality of elements, for assignment to the plurality of subcarriers, for transmission to the second electronic device. 13. The system of claim 12, wherein the one or more circuits are operable to assign the plurality of spreading data vectors by interleaving onto the plurality of the subcarriers. 14. The system of claim 12, wherein the one or more circuits are operable to set a size of the portion of the transmit data to which the spreading matrix is applied based on a number of the plurality of subcarriers. 15. The system of claim 12, wherein the one or more circuits are operable to set a number of the plurality of elements in each spreading data vector based on a number of the plurality of subcarriers. 16. 
The system of claim 12, wherein the one or more circuits are operable to configure the spreading matrix to distribute power evenly among the plurality of subcarriers. 17. The system of claim 12, wherein the transmit data comprises a sequence of symbols generated by application of a modulation scheme. 18. The system of claim 17, wherein the one or more circuits are operable to apply the modulation scheme to a sequence of input data, to generate the sequence of symbols. 19. The system of claim 18, wherein the one or more circuits are operable to generate the sequence of input data based on an encoding of an original bit stream. 20. A system, comprising: one or more circuits for use in an electronic device, the one or more circuits being operable to: receive a signal from another electronic device, wherein the signal is communicated using a plurality of subcarriers; process the received signal to extract receive data corresponding to a plurality of spreading data vectors; and apply frequency de-spreading to the receive data, wherein the frequency de-spreading comprises applying a spreading matrix to each one of the plurality of spreading data vectors, to regenerate a portion of an original transmit data. 21. The system of claim 20, wherein the original transmit data comprises a sequence of symbols generated by application of a modulation scheme. 22. The system of claim 21, wherein the one or more circuits are operable to apply demapping, to extract an original input data from the sequence of symbols. 23. The system of claim 22, wherein the one or more circuits are operable to apply decoding to the extracted original input data, to re-generate an original input bit stream.
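The spreading/de-spreading described above can be sketched numerically. The patent does not specify a particular spreading matrix; the sketch below assumes a normalized 4x4 Hadamard matrix, one choice that distributes power evenly among subcarriers (claim 5) and, being symmetric and self-inverse, lets the receiver apply the same matrix to de-spread (claims 9-10). All names and the BPSK-like symbol values are illustrative assumptions.

```python
# Hypothetical sketch of frequency spreading / de-spreading with a
# normalized Hadamard matrix (an assumption; the claims leave the
# matrix unspecified).

N = 4  # number of subcarriers (assumption)

# 4x4 Hadamard matrix scaled by 1/2 so that H @ H = I
# (H is symmetric and orthogonal).
H = [[0.5 * h for h in row] for row in [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def spread(tx_data):
    """Split transmit data into N-symbol portions; each portion yields one
    spreading data vector whose N elements map onto the N subcarriers."""
    return [matvec(H, tx_data[i:i + N]) for i in range(0, len(tx_data), N)]

def despread(vectors):
    """Apply the same matrix to each received vector (H @ H = I)."""
    out = []
    for v in vectors:
        out.extend(matvec(H, v))
    return out

tx = [1.0, -1.0, -1.0, 1.0, 1.0, 1.0, -1.0, -1.0]  # BPSK-like symbols
vectors = spread(tx)       # two spreading data vectors of N elements each
rx = despread(vectors)     # receive side regenerates the transmit data
assert all(abs(a - b) < 1e-9 for a, b in zip(tx, rx))
```

Because the Hadamard rows have equal magnitude entries, each transmit symbol contributes equally to every subcarrier, which is the even power distribution the claims mention.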
2,600
10,375
10,375
15,423,567
2,622
A method executed by a head mounted display moves a virtual object with respect to a real object in order to maintain the virtual object in a field of view of the head mounted display. The head mounted display displays the virtual object and the real object. The virtual object moves or changes when the head mounted display moves to maintain the virtual object in the field of view.
1.-20. (canceled) 21. A method that extends an amount of time that a virtual object appears with a real object in a field of view of a head mounted display, the method comprising: displaying, with the head mounted display, the virtual object on one side of the real object while the virtual object and the real object are simultaneously visible in the field of view of the head mounted display; and moving, with the head mounted display to extend the amount of time that the virtual object appears with the real object in the field of view of the head mounted display, the virtual object from the one side of the real object to an opposite side of the real object when the head mounted display moves and causes a space between the virtual object and a perimeter of the field of view to decrease such that there is no longer sufficient space for the virtual object on the one side of the real object. 22. The method of claim 21 further comprising: reducing a size of the virtual object while the virtual object is on the one side of the real object to compensate for a reduction in size of the space as the perimeter of the field of view moves closer to the virtual object. 23. The method of claim 21 further comprising: moving, with the head mounted display to extend the amount of time that the virtual object appears with the real object in the field of view of the head mounted display, the virtual object closer to the real object as the perimeter of the field of view moves closer to the virtual object. 24. The method of claim 21, wherein the virtual object moves from the one side of the real object to the opposite side of the real object when an available space in the field of view at the one side of the real object becomes too small to include a size and a shape of the virtual object. 25. 
The method of claim 21, wherein the virtual object automatically moves from being located above the real object on the one side to being located below the real object on the opposite side as the perimeter of the field of view moves toward the one side. 26. The method of claim 21, wherein the virtual object moves from the one side of the real object to the opposite side of the real object when the perimeter of the field of view will collide with the virtual object. 27. The method of claim 21, wherein the virtual object moves from the one side of the real object to the opposite side of the real object where the opposite side of the real object has a largest unoccupied area of free space in the field of view. 28. The method of claim 21 further comprising: contemporaneously decreasing a size of the virtual object as a size of the real object decreases in the field of view of the head mounted display. 29. A method executed by a head mounted display to move a virtual object with respect to a real object in order to maintain the virtual object in a field of view of the head mounted display, the method comprising: displaying, on a display of the head mounted display, the virtual object adjacent to one side of the real object in the field of view of the head mounted display; detecting, by the head mounted display, movement of the head mounted display in which the virtual object will no longer be within the field of view of the head mounted display; and moving, by the head mounted display and in response to detecting the virtual object will no longer be within the field of view, the virtual object to an opposite side of the real object such that the virtual object is within the field of view of the head mounted display on the opposite side of the real object. 30. 
The method of claim 29 further comprising: reducing a size of the virtual object while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object as a perimeter of the field of view moves toward the virtual object. 31. The method of claim 29 further comprising: changing a shape of the virtual object while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves and causes the reduction in size of the available space. 32. The method of claim 29 further comprising: rotating the virtual object about one of an x-axis, a y-axis, and a z-axis while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves and causes the reduction in size of the available space. 33. The method of claim 29 further comprising: changing the virtual object from being presented as a three-dimensional (3D) object to being presented as a two-dimensional (2D) object while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves and causes the reduction in size of the available space. 34. The method of claim 29 further comprising: changing the virtual object from being presented in a perspective view to being presented in a plan view while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves. 35. 
The method of claim 29 further comprising: decreasing a distance between the virtual object and the real object to compensate for movement of the head mounted display that would result in an edge of the field of view touching the virtual object if the distance were not decreased. 36. A method that changes a virtual object displayed with a real object in order to prevent the virtual object from moving outside a field of view of a head mounted display when the head mounted display moves, the method comprising: displaying, with the head mounted display, the virtual object in a space on one side of the real object while the virtual object and the real object are simultaneously visible in the field of view of the head mounted display; and preventing, as a perimeter of the field of view moves toward the virtual object, the virtual object from moving outside the field of view of the head mounted display by reducing a size of the virtual object while the virtual object remains in the space and on the one side of the real object. 37. The method of claim 36 further comprising: detecting when the virtual object begins to move outside the field of view of the head mounted display as the head mounted display moves; and moving, in response to detecting the virtual object moving outside the field of view, the virtual object from the space to a different location within the field of view in order to maintain the virtual object in the field of view. 38. The method of claim 36 further comprising: extending a length of time that the virtual object is visible in the field of view by moving the virtual object in the field of view in order to avoid colliding with the perimeter of the field of view. 39. The method of claim 36 further comprising: preventing the virtual object from colliding with the perimeter of the field of view by changing an orientation and a shape of the virtual object as the head mounted display and the field of view move. 40. 
The method of claim 36 further comprising: repositioning the virtual object in the field of view of the head mounted display to a location with an unobstructed view of the virtual object when another object obstructs a view of the virtual object in the field of view.
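The side-flipping behavior of claim 21 reduces to a placement test: keep the virtual object on its current side of the real object until the field-of-view perimeter leaves insufficient space, then move it to the opposite side. A minimal one-dimensional sketch is below; the coordinates, sizes, gap value, and the 1-D simplification are all illustrative assumptions, not from the patent.

```python
# Hypothetical 1-D sketch of claim 21's repositioning rule. All values
# and the function signature are assumptions for illustration.

def place_virtual(fov_left, fov_right, real_x, side, width, gap=1.0):
    """Return (x, side): the virtual object's left edge and which side of
    the real object it occupies, flipping sides when space runs out."""
    def fits(s):
        if s == "right":
            return real_x + gap + width <= fov_right
        return fov_left <= real_x - gap - width

    if not fits(side):
        # Perimeter moved too close: flip to the opposite side (claim 21).
        side = "left" if side == "right" else "right"
    x = real_x + gap if side == "right" else real_x - gap - width
    return x, side

# Virtual object (width 10) starts on the right of the real object at x=50.
x, side = place_virtual(0, 100, 50, "right", 10)
assert side == "right"

# Head movement shrinks the visible range to [0, 58]; the right side now
# needs x up to 50+1+10=61 > 58, so the object flips to the left side.
x, side = place_virtual(0, 58, 50, "right", 10)
assert side == "left" and x == 39.0
```

A fuller implementation would combine this flip with the other claimed fallbacks (shrinking, reshaping, or moving the object closer to the real object) before giving up a side.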
A method executed by a head mounted display moves a virtual object with respect to a real object in order to maintain the virtual object in a field of view of the head mounted display. The head mounted display displays the virtual object and the real object. The virtual object moves or changes when the head mounted display moves to maintain the virtual object in the field of view.1.-20. (canceled) 21. A method that extends an amount of time that a virtual object appears with a real object in a field of view of a head mounted display, the method comprising: displaying, with the head mounted display, the virtual object on one side of the real object while the virtual object and the real object are simultaneously visible in the field of view of the head mounted display; and moving, with the head mounted display to extend the amount of time that the virtual object appears with the real object in the field of view of the head mounted display, the virtual object from the one side of the real object to an opposite side of the real object when the head mounted display moves and causes a space between the virtual object and a perimeter of the field of view to decrease such that there is no longer sufficient space for the virtual object on the one side of the real object. 22. The method of claim 21 further comprising: reducing a size of the virtual object while the virtual object is on the one side of the real object to compensate for a reduction in size of the space as the perimeter of the field of view moves closer to the virtual object. 23. The method of claim 21 further comprising: moving, with the head mounted display to extend the amount of time that the virtual object appears with the real object in the field of view of the head mounted display, the virtual object closer to the real object as the perimeter of the field of view moves closer to the virtual object. 24. 
The method of claim 21, wherein the virtual object moves from the one side of the real object to the opposite side of the real object when an available space in the field of view at the one side of the real object becomes too small to include a size and a shape of the virtual object. 25. The method of claim 21, wherein the virtual object automatically moves from being located above the real object on the one side to being located below the real object on the opposite side as the perimeter of the field of view moves toward the one side. 26. The method of claim 21, wherein the virtual object moves from the one side of the real object to the opposite side of the real object when the perimeter of the field of view will collide with the virtual object. 27. The method of claim 21, wherein the virtual object moves from the one side of the real object to the opposite side of the real object where the opposite side of the real object has a largest unoccupied area of free space in the field of view. 28. The method of claim 21 further comprising: contemporaneously decreasing a size of the virtual object as a size of the real object decreases in the field of view of the head mounted display. 29. 
A method executed by a head mounted display to move a virtual object with respect to a real object in order to maintain the virtual object in a field of view of the head mounted display, the method comprising: displaying, on a display of the head mounted display, the virtual object adjacent to one side of the real object in the field of view of the head mounted display; detecting, by the head mounted display, movement of the head mounted display in which the virtual object will no longer be within the field of view of the head mounted display; and moving, by the head mounted display and in response to detecting the virtual object will no longer be within the field of view, the virtual object to an opposite side of the real object such that the virtual object is within the field of view of the head mounted display on the opposite side of the real object. 30. The method of claim 29 further comprising: reducing a size of the virtual object while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object as a perimeter of the field of view moves toward the virtual object. 31. The method of claim 29 further comprising: changing a shape of the virtual object while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves and causes the reduction in size of the available space. 32. The method of claim 29 further comprising: rotating the virtual object about one of an x-axis, a y-axis, and a z-axis while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves and causes the reduction in size of the available space. 33. 
The method of claim 29 further comprising: changing the virtual object from being presented as a three-dimensional (3D) object to being presented as a two-dimensional (2D) object while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves and causes the reduction in size of the available space. 34. The method of claim 29 further comprising: changing the virtual object from being presented in a perspective view to being presented in a plan view while the virtual object is on the one side of the real object to compensate for a reduction in size of available space for the virtual object on the one side of the real object while the head mounted display moves. 35. The method of claim 29 further comprising: decreasing a distance between the virtual object and the real object to compensate for movement of the head mounted display that would result in an edge of the field of view touching the virtual object if the distance were not decreased. 36. A method that changes a virtual object displayed with a real object in order to prevent the virtual object from moving outside a field of view of a head mounted display when the head mounted display moves, the method comprising: displaying, with the head mounted display, the virtual object in a space on one side of the real object while the virtual object and the real object are simultaneously visible in the field of view of the head mounted display; and preventing, as a perimeter of the field of view moves toward the virtual object, the virtual object from moving outside the field of view of the head mounted display by reducing a size of the virtual object while the virtual object remains in the space and on the one side of the real object. 37. 
The method of claim 36 further comprising: detecting when the virtual object begins to move outside the field of view of the head mounted display as the head mounted display moves; and moving, in response to detecting the virtual object moving outside the field of view, the virtual object from the space to a different location within the field of view in order to maintain the virtual object in the field of view. 38. The method of claim 36 further comprising: extending a length of time that the virtual object is visible in the field of view by moving the virtual object in the field of view in order to avoid colliding with the perimeter of the field of view. 39. The method of claim 36 further comprising: preventing the virtual object from colliding with the perimeter of the field of view by changing an orientation and a shape of the virtual object as the head mounted display and the field of view move. 40. The method of claim 36 further comprising: repositioning the virtual object in the field of view of the head mounted display to a location with an unobstructed view of the virtual object when another object obstructs a view of the virtual object in the field of view.
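The repositioning behavior recited in claims 29 and 36 above — flip the virtual object to the opposite side of the real object when head movement would push it out of the field of view, and shrink it to fit the remaining space — can be sketched in one horizontal dimension. Everything below (the function name, the 1-D geometry, the flip-then-shrink policy) is an illustrative assumption, not the claimed implementation.

```python
# Hypothetical sketch: keep a virtual label beside a real object, flipping
# it to the opposite side (and shrinking it) when the field of view moves
# and the label would otherwise be clipped at the perimeter.

def place_label(fov_left, fov_right, anchor_x, label_width, gap=1.0, side="right"):
    """Return (x, width, side) for a label placed next to an anchor point.

    fov_left/fov_right: horizontal extent of the field of view.
    anchor_x: position of the real object the label is attached to.
    """
    def span(which):
        # available interval for the label on the given side of the anchor
        if which == "right":
            return anchor_x + gap, min(fov_right, anchor_x + gap + label_width)
        return max(fov_left, anchor_x - gap - label_width), anchor_x - gap

    lo, hi = span(side)
    if hi - lo < label_width:                  # label would be clipped here
        other = "left" if side == "right" else "right"
        olo, ohi = span(other)
        if ohi - olo >= hi - lo:               # opposite side has more room
            side, lo, hi = other, olo, ohi
    width = min(label_width, max(hi - lo, 0.0))  # shrink to the space left
    x = lo if side == "right" else hi - width
    return x, width, side
```

With a field of view spanning 0..10, a label of width 2 anchored at 5 fits on the right; anchored at 8 it no longer fits on the right and flips to the left side, matching the claim-29 behavior.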
2,600
10,376
10,376
15,273,924
2,613
Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images.
1-13. (canceled) 14. A system, comprising: an access component configured to access a stitched image of a location and an offline image of the location; and an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image. 15. The system of claim 14, where the stitched image is a compound image of segment images and where the segment images are of a lower resolution than a resolution of the compound image and where the offline image is a map of an area taken aerially and where the segment images are images of the area taken from a machine-launched projectile. 16. The system of claim 14, comprising: a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 17-20. (canceled) 21. The system of claim 14, where the alignment component is configured to combine the stitched image and the offline image to form into an aligned image and where the aligned image is presented on a display. 22. 
The system of claim 24, where geo-location data is displayed concurrently with the stitched image and the offline image on the handheld device. 26. The system of claim 25, where the geo-location data is displayed concurrently with the stitched image in response to a request for the geo-location data after the stitched image and the offline image aligned together are concurrently displayed on the handheld device. 27. The system of claim 24, where the stitched image is superimposed over the offline image. 28. The system of claim 24, where the offline image is superimposed over the stitched image. 29. The system of claim 14, comprising: an analysis component configured to: perform an evaluation on the stitched image, search through a plurality of offline images, and locate the offline image from the plurality of offline images based, at least in part, on a result of the evaluation and a result of the search. 30. The system of claim 14, comprising: a comparison component configured to make a comparison between the stitched image against the offline image to produce a comparison result that indicates at least one difference between the stitched image and the offline image; and an update component configured to update the offline image such that the at least one difference is eliminated. 31. 
A system, comprising: an access component configured to access a stitched image of a location that is formed from segment images obtained by a launched projectile and an offline image of the location; an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image; a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 32. The system of claim 31, where the stitched image and the offline image aligned together are concurrently presented upon a display of a handheld device and where the projectile is launched from the handheld device. 33. The system of claim 32, where the offline image is preprocessed with the geo-location data and where geo-location data is displayed concurrently with the stitched image and the offline image on the handheld device. 34. 
A system, comprising: an access component configured to access a stitched image of a location that is formed from segment images obtained by a launched projectile and a non-stitched image of the location; an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image and where the stitched image and the offline image aligned together are concurrently presented upon a display; a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 35. The system of claim 34, where the display is a display of a handheld device and where geo-location data is displayed concurrently with the stitched image and the offline image on the handheld device. 36. The system of claim 34, where the non-stitched image of the location is a non-real-time image of the location retained in a memory of a handheld device that launches the projectile. 37. The system of claim 34, where the non-stitched image of the location is a real-time image of the location produced from a source other than the projectile.
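Claims 14-34 above describe aligning a real-time stitched image with a stored offline image by finding a common feature set. As a compact stand-in for the claimed feature-based alignment, the sketch below estimates the translation between the two images with phase correlation — a standard registration technique chosen here for brevity, not the patent's method; the function name and the pure-translation model are assumptions.

```python
import numpy as np

def estimate_shift(stitched, offline):
    """Estimate the (dy, dx) translation of `stitched` relative to
    `offline` via phase correlation: the peak of the inverse FFT of the
    normalized cross-power spectrum marks the circular shift."""
    F1 = np.fft.fft2(stitched)
    F2 = np.fft.fft2(offline)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # unwrap circular shifts larger than half the image size
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Rolling the stitched image back by the estimated shift would then superimpose it over the offline image, in the spirit of claims 27-28.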
Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images. 1-13. (canceled) 14. A system, comprising: an access component configured to access a stitched image of a location and an offline image of the location; and an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image. 15. The system of claim 14, where the stitched image is a compound image of segment images and where the segment images are of a lower resolution than a resolution of the compound image and where the offline image is a map of an area taken aerially and where the segment images are images of the area taken from a machine-launched projectile. 16. The system of claim 14, comprising: a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 17-20. (canceled) 21. The system of claim 14, where the alignment component is configured to combine the stitched image and the offline image to form into an aligned image and where the aligned image is presented on a display. 22. 
The system of claim 21, where the alignment component is configured to combine the stitched image and the offline image through use of feature extraction. 23. The system of claim 21, where the alignment component is configured to combine the stitched image and the offline image through use of feature mapping. 24. The system of claim 14, where the stitched image and the offline image aligned together are concurrently displayed on a handheld device. 25. The system of claim 24, where geo-location data is displayed concurrently with the stitched image and the offline image on the handheld device. 26. The system of claim 25, where the geo-location data is displayed concurrently with the stitched image in response to a request for the geo-location data after the stitched image and the offline image aligned together are concurrently displayed on the handheld device. 27. The system of claim 24, where the stitched image is superimposed over the offline image. 28. The system of claim 24, where the offline image is superimposed over the stitched image. 29. The system of claim 14, comprising: an analysis component configured to: perform an evaluation on the stitched image, search through a plurality of offline images, and locate the offline image from the plurality of offline images based, at least in part, on a result of the evaluation and a result of the search. 30. The system of claim 14, comprising: a comparison component configured to make a comparison between the stitched image against the offline image to produce a comparison result that indicates at least one difference between the stitched image and the offline image; and an update component configured to update the offline image such that the at least one difference is eliminated. 31. 
A system, comprising: an access component configured to access a stitched image of a location that is formed from segment images obtained by a launched projectile and an offline image of the location; an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image; a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 32. The system of claim 31, where the stitched image and the offline image aligned together are concurrently presented upon a display of a handheld device and where the projectile is launched from the handheld device. 33. The system of claim 32, where the offline image is preprocessed with the geo-location data and where geo-location data is displayed concurrently with the stitched image and the offline image on the handheld device. 34. 
A system, comprising: an access component configured to access a stitched image of a location that is formed from segment images obtained by a launched projectile and a non-stitched image of the location; an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image and where the stitched image and the offline image aligned together are concurrently presented upon a display; a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 35. The system of claim 34, where the display is a display of a handheld device and where geo-location data is displayed concurrently with the stitched image and the offline image on the handheld device. 36. The system of claim 34, where the non-stitched image of the location is a non-real-time image of the location retained in a memory of a handheld device that launches the projectile. 37. The system of claim 34, where the non-stitched image of the location is a real-time image of the location produced from a source other than the projectile.
2,600
10,377
10,377
16,409,389
2,659
In one example, a method includes: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting either the current computing device or a particular computing device from the identified one or more computing devices to satisfy a spoken utterance determined based on the audio data.
1. A method comprising: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting, based on the identified one or more computing devices and a spoken utterance determined from the audio data, either the current computing device or a particular computing device from the identified one or more computing devices to perform a task or service based on the spoken utterance. 2. The method of claim 1, further comprising: outputting, by the current computing device, an indication that speech reception has been activated at the current computing device. 3. The method of claim 2, wherein the current computing device is connected to a particular network, and wherein outputting the indication that speech reception has been activated at the current computing device comprises: causing, by the current computing device, one or more other computing devices connected to the particular network to emit respective audio signals, wherein the one or more other computing devices connected to the particular network include the identified one or more computing devices. 4. The method of claim 3, wherein the indication that speech reception has been activated at the current computing device is output to a server device, and wherein causing the one or more other computing devices connected to the particular network to emit the respective audio signals comprises: causing, by the current computing device, the server device to output a request to the one or more other computing devices connected to the particular network to emit respective audio signals. 5. 
The method of claim 1, wherein the current computing device is associated with a particular user account, and wherein outputting the indication that speech reception has been activated at the current computing device comprises: causing, by the current computing device, one or more other computing devices associated with the particular user account to emit respective audio signals, wherein the one or more other computing devices associated with the particular user account include the identified one or more computing devices. 6. The method of claim 5, wherein the indication that speech reception has been activated at the current computing device is output to a server device, and wherein causing the one or more other computing devices associated with the particular user account to emit the respective audio signals comprises: causing, by the current computing device, the server device to output a request to the one or more other computing devices associated with the particular user account to emit respective audio signals. 7. The method of claim 1, wherein the current computing device is connected to a particular network and is associated with a particular user account, and wherein outputting the indication that speech reception has been activated at the current computing device comprises: causing, by the current computing device, one or more other computing devices connected to the particular network that are associated with the particular user account to emit respective audio signals, wherein the one or more other computing devices connected to the particular network that are associated with the particular user account include the identified one or more computing devices. 8. 
The method of claim 1, further comprising: identifying, by a server device, one or more computing devices related to the current computing device; and in response to receiving an indication that speech reception has been activated at the current computing device, outputting, by the server device and to each computing device of the identified one or more computing devices related to the current computing device, a request to emit a respective audio signal. 9. The method of claim 8, wherein identifying the one or more computing devices related to the current computing device comprises: identifying, by the server device, one or more computing devices that are one or both of: connected to a same network as the current computing device; and associated with a same user account as the current computing device. 10. The method of claim 1, wherein identifying comprises: determining, based on the respective audio signals emitted by the one or more respective computing devices, a respective proximity of each respective computing device relative to the current computing device. 11. The method of claim 1, wherein each audio signal of the respective audio signals has one or more unique audio characteristics. 12. The method of claim 1, wherein the current computing device does not include a display, and wherein selecting comprises: responsive to determining that a display is needed to satisfy the spoken utterance, selecting the particular computing device from computing devices included in the identified one or more computing devices that include a display. 13. The method of claim 1, wherein the current computing device includes a display, and wherein selecting a computing device from the identified one or more computing devices comprises: selecting the particular computing device from computing devices included in the identified one or more computing devices that include a display that is larger than the display of the current computing device. 14. 
A device comprising: one or more microphones; and one or more processors configured to perform a method comprising: receiving audio data generated by the microphones; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting, based on the identified one or more computing devices and a spoken utterance determined from the audio data, either the device or a particular computing device from the identified one or more computing devices to perform a task or service based on the spoken utterance. 15. A non-transitory computer-readable storage medium storing instructions that, when executed, cause one or more processors of a computing device to perform a method comprising: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting, based on the identified one or more computing devices and a spoken utterance determined from the audio data, either the current computing device or a particular computing device from the identified one or more computing devices to perform a task or service based on the spoken utterance.
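Claims 12-13 above select the target device by display capability once nearby devices have identified themselves acoustically (with proximity inferable from their emitted signals, per claim 10). Below is a minimal sketch of one plausible selection policy; the function name, dictionary fields, and the nearest-device tie-break are assumptions, not the claimed method.

```python
# Hypothetical device arbitration: if the utterance needs a display, pick
# the closest detected device whose display is larger than the current
# device's; otherwise keep the current device.

def select_device(current, detected, needs_display):
    """current: dict with 'name' and 'display' (screen size, 0 = none).
    detected: list of dicts with 'name', 'display', 'proximity' (metres)."""
    if not needs_display:
        return current["name"]
    # candidates with a strictly larger display than the current device
    candidates = [d for d in detected if d["display"] > current["display"]]
    if not candidates:
        return current["name"]
    # prefer the closest qualifying device
    return min(candidates, key=lambda d: d["proximity"])["name"]
```

For a displayless speaker near a phone and a farther TV, a weather query stays on the speaker, while a "show me" query routes to the phone (closest device with a display).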
In one example, a method includes: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting either the current computing device or a particular computing device from the identified one or more computing devices to satisfy a spoken utterance determined based on the audio data. 1. A method comprising: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting, based on the identified one or more computing devices and a spoken utterance determined from the audio data, either the current computing device or a particular computing device from the identified one or more computing devices to perform a task or service based on the spoken utterance. 2. The method of claim 1, further comprising: outputting, by the current computing device, an indication that speech reception has been activated at the current computing device. 3. The method of claim 2, wherein the current computing device is connected to a particular network, and wherein outputting the indication that speech reception has been activated at the current computing device comprises: causing, by the current computing device, one or more other computing devices connected to the particular network to emit respective audio signals, wherein the one or more other computing devices connected to the particular network include the identified one or more computing devices. 4. 
The method of claim 3, wherein the indication that speech reception has been activated at the current computing device is output to a server device, and wherein causing the one or more other computing devices connected to the particular network to emit the respective audio signals comprises: causing, by the current computing device, the server device to output a request to the one or more other computing devices connected to the particular network to emit respective audio signals. 5. The method of claim 1, wherein the current computing device is associated with a particular user account, and wherein outputting the indication that speech reception has been activated at the current computing device comprises: causing, by the current computing device, one or more other computing devices associated with the particular user account to emit respective audio signals, wherein the one or more other computing devices associated with the particular user account include the identified one or more computing devices. 6. The method of claim 5, wherein the indication that speech reception has been activated at the current computing device is output to a server device, and wherein causing the one or more other computing devices associated with the particular user account to emit the respective audio signals comprises: causing, by the current computing device, the server device to output a request to the one or more other computing devices associated with the particular user account to emit respective audio signals. 7. 
The method of claim 1, wherein the current computing device is connected to a particular network and is associated with a particular user account, and wherein outputting the indication that speech reception has been activated at the current computing device comprises: causing, by the current computing device, one or more other computing devices connected to the particular network that are associated with the particular user account to emit respective audio signals, wherein the one or more other computing devices connected to the particular network that are associated with the particular user account include the identified one or more computing devices. 8. The method of claim 1, further comprising: identifying, by a server device, one or more computing devices related to the current computing device; and in response to receiving an indication that speech reception has been activated at the current computing device, outputting, by the server device and to each computing device of the identified one or more computing devices related to the current computing device, a request to emit a respective audio signal. 9. The method of claim 8, wherein identifying the one or more computing devices related to the current computing device comprises: identifying, by the server device, one or more computing devices that are one or both of: connected to a same network as the current computing device; and associated with a same user account as the current computing device. 10. The method of claim 1, wherein identifying comprises: determining, based on the respective audio signals emitted by the one or more respective computing devices, a respective proximity of each respective computing device relative to the current computing device. 11. The method of claim 1, wherein each audio signal of the respective audio signals has one or more unique audio characteristics. 12. 
The method of claim 1, wherein the current computing device does not include a display, and wherein selecting comprises: responsive to determining that a display is needed to satisfy the spoken utterance, selecting the particular computing device from computing devices included in the identified one or more computing devices that include a display. 13. The method of claim 1, wherein the current computing device includes a display, and wherein selecting a computing device from the identified one or more computing devices comprises: selecting the particular computing device from computing devices included in the identified one or more computing devices that include a display that is larger than the display of the current computing device. 14. A device comprising: one or more microphones; and one or more processors configured to perform a method comprising: receiving audio data generated by the microphones; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting, based on the identified one or more computing devices and a spoken utterance determined from the audio data, either the device or a particular computing device from the identified one or more computing devices to perform a task or service based on the spoken utterance. 15. 
A non-transitory computer-readable storage medium storing instructions that, when executed, cause one or more processors of a computing device to perform a method comprising: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting, based on the identified one or more computing devices and a spoken utterance determined from the audio data, either the current computing device or a particular computing device from the identified one or more computing devices to perform a task or service based on the spoken utterance.
2,600
10,378
10,378
15,262,470
2,656
A method and apparatus for channel estimation for a three-phase communication system. In one embodiment, the method comprises generating a first plurality of preamble patterns for use in a first data stream of two independent data streams; generating a second plurality of preamble patterns for use in a second data stream of the two independent data streams; transmitting the first and the second data streams via a communications channel comprising a three-wire three-phase system; receiving a version of the first data stream comprising the first plurality of preamble patterns and a version of the second data stream comprising the second plurality of preamble patterns; and generating, based on the received version of the first plurality of preamble patterns and the received version of the second plurality of preamble patterns, a channel estimation matrix for estimating the imbalance of the communications channel.
1. A method for channel estimation for a three-phase communication system, comprising: generating a first plurality of preamble patterns for use in a first data stream of two independent data streams; generating a second plurality of preamble patterns for use in a second data stream of the two independent data streams; transmitting the first and the second data streams via a communications channel comprising a three-wire three-phase system; receiving a version of the first data stream comprising the first plurality of preamble patterns and a version of the second data stream comprising the second plurality of preamble patterns; and generating, based on the received version of the first plurality of preamble patterns and the received version of the second plurality of preamble patterns, a channel estimation matrix for estimating the imbalance of the communications channel. 2. The method of claim 1, further comprising using the channel estimation matrix for compensating the received versions of the first and the second data streams to recover the first and the second data streams. 3. The method of claim 1, wherein transmitting the first and the second data streams comprises modulating the first and the second data streams onto the three-wire, three-phase system. 4. The method of claim 3, wherein the first and the second data streams are coupled to the three-wire, three-phase system by a first Scott-T transformer, and the received versions of the first and the second data streams are coupled to the three-wire, three-phase system by a second Scott-T transformer. 5. The method of claim 1, wherein the imbalance of the communications channel comprises an amplitude imbalance and a phase imbalance. 6. The method of claim 1, further comprising performing precoding on the first and the second data streams prior to transmitting the first and the second data streams. 7. The method of claim 1, where the first and the second data streams are transmitted without any precoding. 8. 
The method of claim 1, wherein the three-wire three-phase system is a power line communications (PLC) system. 9. An apparatus for channel estimation for a three-phase communication system, comprising: a transmitter for (i) generating a first plurality of preamble patterns for use in a first data stream of two independent data streams; (ii) generating a second plurality of preamble patterns for use in a second data stream of the two independent data streams; and (iii) transmitting the first and the second data streams via a communications channel comprising a three-wire three-phase system; and a receiver for (iv) receiving a version of the first data stream comprising the first plurality of preamble patterns and a version of the second data stream comprising the second plurality of preamble patterns; and (v) generating, based on the received version of the first plurality of preamble patterns and the received version of the second plurality of preamble patterns, a channel estimation matrix for estimating the imbalance of the communications channel. 10. The apparatus of claim 9, where the receiver further uses the channel estimation matrix for compensating the received versions of the first and the second data streams to recover the first and the second data streams. 11. The apparatus of claim 9, wherein transmitting the first and the second data streams comprises modulating the first and the second data streams onto the three-wire, three-phase system. 12. The apparatus of claim 11, further comprising a first Scott-T transformer for coupling the first and the second data streams to the three-wire three-phase system, and a second Scott-T transformer for coupling the received versions of the first and the second data streams to the three-wire three phase system. 13. The apparatus of claim 9, wherein the imbalance of the communications channel comprises an amplitude imbalance and a phase imbalance. 14. 
The apparatus of claim 9, wherein the transmitter performs precoding on the first and the second data streams prior to transmitting the first and the second data streams. 15. The apparatus of claim 9, wherein the transmitter does not perform precoding on the first and the second data streams. 16. The apparatus of claim 9, wherein the three-wire three-phase system is a power line communications (PLC) system. 17. The apparatus of claim 12, wherein the transmitter and the first Scott-T transformer are components of a first power line communications transceiver (PLCT). 18. The apparatus of claim 17, wherein the first PLCT is a component of a power conditioning unit (PCU). 19. The apparatus of claim 18, wherein the receiver and the second Scott-T transformer are components of a second power line communications transceiver (PLCT). 20. The apparatus of claim 19, wherein the second PLCT is a component of a controller in a distributed generator (DG).
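The preamble-based estimation in claim 1 can be illustrated with a minimal numerical sketch. This is not the patent's implementation: it assumes the channel imbalance acts as an unknown 2x2 mixing matrix on the two independent streams and recovers it by least squares over known preamble symbols; all names and values are hypothetical.

```python
import numpy as np

# Hypothetical model: two independent streams carry known preamble
# patterns; the three-wire channel's amplitude/phase imbalance is
# modeled as an unknown 2x2 complex mixing matrix H.
rng = np.random.default_rng(0)

# Known QPSK preamble symbols on each stream (complex baseband).
P = rng.choice(np.array([1 + 0j, -1 + 0j, 0 + 1j, 0 - 1j]), size=(2, 64))

# Unknown channel imbalance: per-path gain and phase differ slightly.
H_true = np.array([[1.0, 0.05 * np.exp(1j * 0.1)],
                   [0.08 * np.exp(-1j * 0.2), 0.9 * np.exp(1j * 0.3)]])

# Received preamble portion, with a little additive noise.
noise = 0.01 * (rng.standard_normal((2, 64)) + 1j * rng.standard_normal((2, 64)))
R = H_true @ P + noise

# Least-squares channel estimation matrix: H_est = R P^H (P P^H)^-1.
H_est = R @ P.conj().T @ np.linalg.inv(P @ P.conj().T)

# Compensation (claim 2): apply the inverse estimate to recover the streams.
X = np.linalg.inv(H_est) @ R
```

With low noise, `H_est` tracks `H_true` closely and `X` recovers the transmitted preambles, which is the role the channel estimation matrix plays in claims 1 and 2.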
A method and apparatus for channel estimation for a three-phase communication system. In one embodiment, the method comprises generating a first plurality of preamble patterns for use in a first data stream of two independent data streams; generating a second plurality of preamble patterns for use in a second data stream of the two independent data streams; transmitting the first and the second data streams via a communications channel comprising a three-wire three-phase system; receiving a version of the first data stream comprising the first plurality of preamble patterns and a version of the second data stream comprising the second plurality of preamble patterns; and generating, based on the received version of the first plurality of preamble patterns and the received version of the second plurality of preamble patterns, a channel estimation matrix for estimating the imbalance of the communications channel.
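Claims 4 and 12 couple the two independent streams onto the three-wire system via Scott-T transformers. As a hedged illustration only, the ideal decoupling behavior can be modeled as an orthogonal two-phase to three-phase mapping (a Clarke-like matrix); real Scott-T winding ratios differ, but the round trip below shows why two independent streams survive the 2-to-3-to-2 conversion.

```python
import numpy as np

# Assumption: model ideal Scott-T coupling as an orthogonal
# two-phase <-> three-phase mapping (Clarke-like matrix).
T = np.array([[1.0, 0.0],
              [-0.5,  np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])

two = np.array([0.7, -0.3])          # samples of the two independent streams
three = T @ two                      # coupled onto the three-wire system
recovered = (2 / 3) * T.T @ three    # decoupled back into two streams
print(np.allclose(recovered, two))   # True for the ideal, balanced model
```

Because the two columns of `T` are orthogonal, the two streams stay independent through the three-wire channel in the ideal case; the channel estimation matrix of claim 1 then accounts for the residual imbalance a real channel introduces.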
2,600
10,379
10,379
14,708,663
2,684
The invention relates to a method for signaling the danger of a crane tipping, comprising at least three alarm systems for signaling tipping warnings using a respective safety programme. At least one measurement value is obtained from the crane, at least two alarm systems for signaling the tipping warning are selected in accordance with the at least one measurement value, and if a warning of tipping is signaled by at least two of the at least two selected alarm systems, a tipping warning is signalled.
1. A method of signalling the danger of a crane tipping, which has at least three alarm systems for signalling tipping warnings using a respective safety program, wherein at least one measurement value is obtained on the crane, wherein at least two alarm systems for signalling tipping warnings are selected in dependence on the at least one measurement value, and when there are signallings of tipping warnings from at least two of the at least two selected alarm systems the tipping danger is signalled. 2. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has at least one crane jib, wherein the position of the crane jib is measured as one of the measurement values. 3. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has at least one support leg, wherein at least one position of at least one support extension of the at least one support leg is measured as one of the measurement values. 4. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has at least one support leg, wherein the supporting force in the at least one support leg is measured as one of the measurement values. 5. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has a crane base, wherein an inclination of the crane base is measured as one of the measurement values. 6. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has at least one crane jib having at least one stroke cylinder, wherein in relation to at least one of the alarm systems for signalling tipping warnings in dependence on the position of the at least one movable crane jib a calculation of a tipping moment and a stand moment of the crane is performed and a maximum permissible limit value is determined therefrom for a stroke cylinder force in the stroke cylinder and a tipping warning is signalled when said limit value is exceeded. 7. 
A method as set forth in claim 6, wherein an elasticity of the carrier vehicle of the crane, which has been previously measured or previously determined in some other way is incorporated into the calculation of the limit value for the stroke cylinder force. 8. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has a crane base, wherein in relation to one of the alarm systems for signalling tipping warnings the inclination of the crane base is measured and a tipping warning is signalled when an inclination limit value is exceeded by the inclination. 9. A method as set forth in claim 8 for signalling the danger of a crane tipping, which has at least one crane jib and at least one support leg with a support extension, wherein the inclination limit value is selected in dependence on the position of the crane jib and/or the position of support extensions of the support legs of the crane. 10. A method as set forth in claim 1 for signalling the danger of a crane tipping, which has at least one support leg and which is mounted on a vehicle with wheels, wherein in relation to one of the alarm systems for signalling tipping warnings at least one supporting force in the at least one support leg and/or wheel forces in the wheels are measured. 11. A method as set forth in claim 1, wherein the tipping danger is signalled when there are signallings of tipping warnings by all selected alarm systems. 12. A method as set forth in claim 1, wherein movements of the crane which increase the tipping moment are blocked when there is a signalling of the danger of tipping. 13. 
A safety device for a crane having a storage means in which at least three safety programs for signalling tipping warnings can be stored, and at least one measuring device for measuring at least one measurement value, wherein at least two safety programs for signalling tipping warnings can be selected by the safety device in dependence on the at least one measurement value and that a signalling of the danger of tipping can be sent by the safety device in the existence of at least two signallings of tipping warnings by the at least two selected safety programs. 14. A crane having a safety device as set forth in claim 13.
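The selection-and-voting scheme of claims 1 and 11 can be sketched in a few lines. This is a toy illustration, not the patented safety program: the selection rule (pick alarm systems whose declared operating range covers the measurement value) and all thresholds are assumptions.

```python
def tipping_danger(measurement, systems):
    """Signal tipping danger per the two-of-N voting in claim 1.

    systems: list of dicts with a 'low'/'high' operating range
    (hypothetical selection criterion) and a 'check' callable that
    returns True when that safety program signals a tipping warning.
    """
    # Select at least two alarm systems based on the measurement value.
    selected = [s for s in systems if s["low"] <= measurement <= s["high"]]
    if len(selected) < 2:
        selected = systems  # fallback assumption: vote over all systems
    # Signal the danger only when at least two selected systems warn.
    warnings = sum(1 for s in selected if s["check"](measurement))
    return warnings >= 2

# Example: three alarm programs with toy thresholds.
systems = [
    {"low": 0,  "high": 50,  "check": lambda m: m > 40},
    {"low": 0,  "high": 100, "check": lambda m: m > 45},
    {"low": 30, "high": 100, "check": lambda m: m > 60},
]
print(tipping_danger(48, systems))  # two selected systems warn -> True
print(tipping_danger(10, systems))  # no system warns -> False
```

Requiring two concurring warnings from independently computed safety programs (stroke cylinder force, base inclination, supporting forces) reduces false alarms from any single sensor while still blocking tipping-moment-increasing movements per claim 12 when real danger exists.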
2,600
10,380
10,380
15,443,260
2,632
An envelope tracking system includes an instantaneous amplitude circuitry, an instantaneous frequency circuitry, and a two-dimensional (2D) bias voltage selection circuitry. The instantaneous amplitude circuitry is configured to determine an instantaneous amplitude of a transmit signal. The instantaneous frequency circuitry is configured to determine an instantaneous frequency of the transmit signal. The two-dimensional (2D) bias voltage selection circuitry is configured to determine a bias voltage based on both the instantaneous amplitude and the instantaneous frequency of the transmit signal, and control power supply circuitry to supply the determined bias voltage to a power amplifier that is configured to amplify the transmit signal.
1. An envelope tracking system, comprising: an instantaneous frequency circuitry configured to evaluate a transmit signal to determine an instantaneous frequency of the transmit signal; a two-dimensional (2D) bias voltage selection circuitry configured to: select a bias voltage to be supplied to a power amplifier based on both an instantaneous amplitude of the transmit signal and the instantaneous frequency of the transmit signal, and control power supply circuitry to provide the selected bias voltage to the power amplifier for amplification of the transmit signal. 2. The envelope tracking system of claim 1, further comprising instantaneous amplitude circuitry that includes a Coordinate Rotation Digital Computer (CORDIC) configured to compute an amplitude of the transmit signal and to generate an amplitude signal that communicates the computed amplitude. 3. The envelope tracking system of claim 1, wherein the instantaneous frequency circuitry comprises: a Coordinate Rotation Digital Computer (CORDIC) configured to compute a phase of the transmit signal and to generate a phase signal that communicates the computed phase; and a differentiator circuitry configured to generate an instantaneous frequency signal based on a rate of change of the phase signal, wherein the instantaneous frequency signal communicates the instantaneous frequency. 4. The envelope tracking system of claim 1, wherein the 2D bias voltage selection circuitry further comprises a memory configured to store a lookup table (LUT) that maps instantaneous amplitude and instantaneous frequency value pairs to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the instantaneous amplitude and the instantaneous frequency. 5. 
The envelope tracking system of claim 1, wherein the 2D bias voltage selection circuitry further comprises: a combination circuitry configured to combine a present value of the instantaneous amplitude and a present value of the instantaneous frequency to generate a combination value; a memory configured to store a LUT that maps combination values to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the combination value. 6. The envelope tracking system of claim 1, further comprising: memory configured to store a LUT; and calibration circuitry configured to populate the LUT by: supplying a test signal that varies in amplitude and frequency; for each of at least two combinations of amplitude values and frequency values: controlling the power supply circuitry to adjust the bias voltage to the power amplifier to obtain a desired gain; and recording, in the LUT, the amplitude value and the frequency value mapped to the value of the bias voltage that obtains the desired gain. 7. The envelope tracking system of claim 6, wherein the calibration circuitry is configured to: interpolate, based at least on the amplitude values and the frequency values, an interpolated bias voltage value that is associated with an interpolated amplitude value and an interpolated frequency value; and record, in the LUT, the interpolated amplitude value and the interpolated frequency value mapped to the interpolated bias voltage value. 8.
A method configured to control a power amplifier bias voltage based on an envelope of a transmit signal, comprising: receiving a transmit signal; determining an instantaneous amplitude of the transmit signal; evaluating the transmit signal to determine an instantaneous frequency of the transmit signal; selecting a bias voltage to be supplied to the power amplifier based on both the instantaneous amplitude and the instantaneous frequency of the transmit signal; and controlling power supply circuitry to provide the selected bias voltage to the power amplifier for amplification of the transmit signal. 9. The method of claim 8, wherein determining the instantaneous amplitude comprises: computing, with a Coordinate Rotation Digital Computer (CORDIC), an amplitude of the transmit signal; and generating an amplitude signal that communicates the computed amplitude. 10. The method of claim 8, wherein determining the instantaneous frequency comprises: computing, with a Coordinate Rotation Digital Computer (CORDIC), a phase of the transmit signal; generating a phase signal that communicates the computed phase; and generating an instantaneous frequency signal based on a rate of change of the phase signal, wherein the instantaneous frequency signal communicates the instantaneous frequency. 11. The method of claim 8, further comprising: reading a stored lookup table (LUT) that maps instantaneous amplitude and instantaneous frequency value pairs to respective instantaneous bias voltages; and selecting a bias voltage that is mapped to a present value of the instantaneous amplitude and the instantaneous frequency. 12.
The method of claim 8, further comprising: combining a present value of the instantaneous amplitude and a present value of the instantaneous frequency to generate a combination value; reading a stored LUT that maps combination values to respective instantaneous bias voltages; and selecting a bias voltage that is mapped to a present value of the combination value. 13. The method of claim 8, further comprising: populating a LUT by: supplying a test signal that varies in amplitude and frequency; for each of at least two combinations of amplitude values and frequency values: controlling the power supply circuitry to adjust the bias voltage to the power amplifier to obtain a desired gain; and recording, in the LUT, the amplitude value and the frequency value mapped to the value of the bias voltage that obtains the desired gain; and storing the populated LUT. 14. The method of claim 13, further comprising: interpolating, based at least on the amplitude values and the frequency values, an interpolated bias voltage value that is associated with an interpolated amplitude value and an interpolated frequency value; and recording, in the LUT, the interpolated amplitude value and the interpolated frequency value mapped to the interpolated bias voltage value. 15.
A transmitter, comprising: a transmit chain configured to process a transmit baseband signal to generate a transmit radio frequency (RF) signal, wherein the transmit chain includes a power amplifier that amplifies the transmit RF signal to generate an uplink signal; an instantaneous frequency circuitry configured to evaluate a transmit signal to determine an instantaneous frequency of the transmit signal; and a two-dimensional (2D) bias voltage selection circuitry configured to: select a bias voltage to be supplied to a power amplifier based on both an instantaneous amplitude and an instantaneous frequency of the transmit baseband signal, and control power supply circuitry to provide the selected bias voltage to the power amplifier. 16. The transmitter of claim 15, further comprising an instantaneous amplitude circuitry configured to determine the instantaneous amplitude of the transmit signal. 17. The transmitter of claim 16, wherein the instantaneous amplitude circuitry comprises a Coordinate Rotation Digital Computer (CORDIC) configured to compute an amplitude of the transmit signal and to generate an amplitude signal that communicates the computed amplitude. 18. The transmitter of claim 15, further comprising an instantaneous frequency circuitry configured to determine the instantaneous frequency of the transmit signal. 19. The transmitter of claim 18, wherein the instantaneous frequency circuitry comprises: a Coordinate Rotation Digital Computer (CORDIC) configured to compute a phase of the transmit signal and to generate a phase signal that communicates the computed phase; and a differentiator circuitry configured to generate an instantaneous frequency signal based on a rate of change of the phase signal, wherein the instantaneous frequency signal communicates the instantaneous frequency. 20. 
The transmitter of claim 15, wherein the 2D bias voltage selection circuitry further comprises a memory configured to store a lookup table (LUT) that maps instantaneous amplitude and instantaneous frequency value pairs to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the instantaneous amplitude and the instantaneous frequency. 21. The transmitter of claim 15, wherein the 2D bias voltage selection circuitry further comprises: a combination circuitry configured to combine a present value of the instantaneous amplitude and a present value of the instantaneous frequency to generate a combination value; and a memory configured to store a LUT that maps combination values to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the combination value. 22. The transmitter of claim 15, further comprising: memory configured to store a LUT; and calibration circuitry configured to populate the LUT by: supplying a test signal that varies in amplitude and frequency; for each of at least two combinations of amplitude values and frequency values: controlling the power supply circuitry to adjust the bias voltage to the power amplifier to obtain a desired gain; and recording, in the LUT, the amplitude value and the frequency value mapped to the value of the bias voltage that obtains the desired gain. 23. The transmitter of claim 22, wherein the calibration circuitry is configured to: interpolate, based at least on the amplitude values and the frequency values, an interpolated bias voltage value that is associated with an interpolated amplitude value and an interpolated frequency value; and record, in the LUT, the interpolated amplitude value and the interpolated frequency value mapped to the interpolated bias voltage value.
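The amplitude/phase/frequency signal path of claims 1 through 4 can be sketched numerically. This is a hedged software model, not the patented circuitry: `np.angle` stands in for the hardware CORDIC, `np.diff` for the differentiator, and the 4x4 table, binning rule, and test chirp are all assumptions.

```python
import numpy as np

# Toy complex-baseband transmit signal: amplitude-modulated chirp so
# that both instantaneous amplitude and frequency vary.
fs = 1e6                                          # sample rate (assumption)
t = np.arange(1024) / fs
x = (0.5 + 0.4 * np.cos(2 * np.pi * 1e3 * t)) \
    * np.exp(1j * 2 * np.pi * (50e3 * t + 1e8 * t**2))

amp = np.abs(x)                                   # instantaneous amplitude
phase = np.unwrap(np.angle(x))                    # a CORDIC computes this in HW
freq = np.diff(phase) * fs / (2 * np.pi)          # differentiator -> inst. freq (Hz)

# Hypothetical 2D LUT: 4 amplitude bins x 4 frequency bins -> bias voltage (V).
lut = np.linspace(1.0, 4.0, 16).reshape(4, 4)
a_bin = np.clip((amp[:-1] / amp.max() * 4).astype(int), 0, 3)
f_bin = np.clip(((freq - freq.min()) / (np.ptp(freq) + 1e-12) * 4).astype(int), 0, 3)
vbias = lut[a_bin, f_bin]                         # per-sample bias selection
```

Indexing the table by both dimensions is the point of the claims: a bias chosen from amplitude alone cannot compensate gain variation that depends on instantaneous frequency.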
An envelope tracking system includes an instantaneous amplitude circuitry, an instantaneous frequency circuitry, and a two-dimensional (2D) bias voltage selection circuitry. The instantaneous amplitude circuitry is configured to determine an instantaneous amplitude of a transmit signal. The instantaneous frequency circuitry is configured to determine an instantaneous frequency of the transmit signal. The two-dimensional (2D) bias voltage selection circuitry is configured to determine a bias voltage based on both the instantaneous amplitude and the instantaneous frequency of the transmit signal, and control power supply circuitry to supply the determined bias voltage to a power amplifier that is configured to amplify the transmit signal.1. An envelope tracking system, comprising: an instantaneous frequency circuitry configured to evaluate a transmit signal to determine an instantaneous frequency of the transmit signal; a two-dimensional (2D) bias voltage selection circuitry configured to: select a bias voltage to be supplied to a power amplifier based on both an instantaneous amplitude of the transmit signal and the instantaneous frequency of the transmit signal, and control power supply circuitry to provide the selected bias voltage to the power amplifier for amplification of the transmit signal. 2. The envelope tracking system of claim 1, further comprising instantaneous amplitude circuitry that includes a Coordinate Rotation Digital Computer (CORDIC) configured to compute an amplitude of the transmit signal and to generate an amplitude signal that communicates the computed amplitude. 3. 
The envelope tracking system of claim 1, wherein the instantaneous frequency circuitry comprises: a Coordinate Rotation Digital Computer (CORDIC) configured to compute a phase of the transmit signal and to generate a phase signal that communicates the computed phase; and a differentiator circuitry configured to generate an instantaneous frequency signal based on a rate of change of the phase signal, wherein the instantaneous frequency signal communicates the instantaneous frequency. 4. The envelope tracking system of claim 1, wherein the 2D bias voltage selection circuitry further comprises a memory configured to store a lookup table (LUT) that maps instantaneous amplitude and instantaneous frequency value pairs to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the instantaneous amplitude and the instantaneous frequency. 5. The envelope tracking system of claim 1, wherein the 2D bias voltage selection circuitry further comprises: a combination circuitry configured to combine a present value of the instantaneous amplitude and a present value of the instantaneous frequency to generate a combination value; a memory configured to store a LUT that maps combination values to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the combination value. 6. 
The envelope tracking system of claim 1, further comprising: memory configured to store a LUT; and calibration circuitry configured to populate the LUT by: supplying a test signal that varies in amplitude and frequency; for each of at least two combinations of amplitude values and frequency values: controlling the power supply circuitry to adjust the bias voltage to the power amplifier to obtain a desired gain; and recording, in the LUT, the amplitude value and the frequency value mapped to the value of the bias voltage that obtains the desired gain. 7. The envelope tracking system of claim 6, wherein the calibration circuitry is configured to: interpolate, based at least on the amplitude values and the frequency values, an interpolated bias voltage value that is associated with an interpolated amplitude value and an interpolated frequency value; and record, in the LUT, the interpolated amplitude value and the interpolated frequency value mapped to the interpolated bias voltage value. 8. A method configured to control a power amplifier bias voltage based on an envelope of a transmit signal, comprising: receiving a transmit signal; determining an instantaneous amplitude of the transmit signal; evaluating the transmit signal to determine an instantaneous frequency of the transmit signal; selecting a bias voltage to be supplied to the power amplifier based on both the instantaneous amplitude and the instantaneous frequency of the transmit signal, and controlling power supply circuitry to provide the selected bias voltage to the power amplifier for amplification of the transmit signal. 9. The method of claim 8, wherein determining the instantaneous amplitude comprises: computing, with a Coordinate Rotation Digital Computer (CORDIC), an amplitude of the transmit signal; and generating an amplitude signal that communicates the computed amplitude. 10. 
The method of claim 8, wherein determining the instantaneous frequency comprises: computing, with a Coordinate Rotation Digital Computer (CORDIC), a phase of the transmit signal; generating a phase signal that communicates the computed phase; and generating an instantaneous frequency signal based on a rate of change of the phase signal, wherein the instantaneous frequency signal communicates the instantaneous frequency. 11. The method of claim 8, further comprising: reading a stored lookup table (LUT) that maps instantaneous amplitude and instantaneous frequency value pairs to respective instantaneous bias voltages; and selecting a bias voltage that is mapped to a present value of the instantaneous amplitude and the instantaneous frequency. 12. The method of claim 8, further comprising: combining a present value of the instantaneous amplitude and a present value of the instantaneous frequency to generate a combination value; reading a stored LUT that maps combination values to respective instantaneous bias voltages; and selecting a bias voltage that is mapped to a present value of the combination value. 13. The method of claim 8, further comprising: populating a LUT by: supplying a test signal that varies in amplitude and frequency; for each of at least two combinations of amplitude values and frequency values: controlling the power supply circuitry to adjust the bias voltage to the power amplifier to obtain a desired gain; and recording, in the LUT, the amplitude value and the frequency value mapped to the value of the bias voltage that obtains the desired gain; and storing the populated LUT. 14. 
The method of claim 13, further comprising: interpolating, based at least on the amplitude values and the frequency values, an interpolated bias voltage value that is associated with an interpolated amplitude value and an interpolated frequency value; and recording, in the LUT, the interpolated amplitude value and the interpolated frequency value mapped to the interpolated bias voltage value. 15. A transmitter, comprising: a transmit chain configured to process a transmit baseband signal to generate a transmit radio frequency (RF) signal, wherein the transmit chain includes a power amplifier that amplifies the transmit RF signal to generate an uplink signal; an instantaneous frequency circuitry configured to evaluate a transmit signal to determine an instantaneous frequency of the transmit signal; and a two-dimensional (2D) bias voltage selection circuitry configured to: select a bias voltage to be supplied to a power amplifier based on both an instantaneous amplitude and an instantaneous frequency of the transmit baseband signal, and control power supply circuitry to provide the selected bias voltage to the power amplifier. 16. The transmitter of claim 15, further comprising an instantaneous amplitude circuitry configured to determine the instantaneous amplitude of the transmit signal. 17. The transmitter of claim 16, wherein the instantaneous amplitude circuitry comprises a Coordinate Rotation Digital Computer (CORDIC) configured to compute an amplitude of the transmit signal and to generate an amplitude signal that communicates the computed amplitude. 18. The transmitter of claim 15, further comprising an instantaneous frequency circuitry configured to determine the instantaneous frequency of the transmit signal. 19. 
The transmitter of claim 18, wherein the instantaneous frequency circuitry comprises: a Coordinate Rotation Digital Computer (CORDIC) configured to compute a phase of the transmit signal and to generate a phase signal that communicates the computed phase; and a differentiator circuitry configured to generate an instantaneous frequency signal based on a rate of change of the phase signal, wherein the instantaneous frequency signal communicates the instantaneous frequency. 20. The transmitter of claim 15, wherein the 2D bias voltage selection circuitry further comprises a memory configured to store a lookup table (LUT) that maps instantaneous amplitude and instantaneous frequency value pairs to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the instantaneous amplitude and the instantaneous frequency. 21. The transmitter of claim 15, wherein the 2D bias voltage selection circuitry further comprises: a combination circuitry configured to combine a present value of the instantaneous amplitude and a present value of the instantaneous frequency to generate a combination value; and a memory configured to store a LUT that maps combination values to respective instantaneous bias voltages, wherein the 2D bias voltage selection circuitry is further configured to select a bias voltage that is mapped to a present value of the combination value. 22. 
The transmitter of claim 15, further comprising: memory configured to store a LUT; and calibration circuitry configured to populate the LUT by: supplying a test signal that varies in amplitude and frequency; for each of at least two combinations of amplitude values and frequency values: controlling the power supply circuitry to adjust the bias voltage to the power amplifier to obtain a desired gain; and recording, in the LUT, the amplitude value and the frequency value mapped to the value of the bias voltage that obtains the desired gain. 23. The transmitter of claim 22, wherein the calibration circuitry is configured to: interpolate, based at least on the amplitude values and the frequency values, an interpolated bias voltage value that is associated with an interpolated amplitude value and an interpolated frequency value; and record, in the LUT, the interpolated amplitude value and the interpolated frequency value mapped to the interpolated bias voltage value.
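The signal chain described in these claims (CORDIC amplitude, CORDIC phase plus differentiator for instantaneous frequency, then a 2D LUT indexed by the amplitude/frequency pair) can be sketched in software. This is a minimal NumPy sketch under my own assumptions: the function and parameter names are hypothetical, `np.abs`/`np.angle`/`np.diff` stand in for the hardware CORDIC and differentiator blocks, and the LUT is quantized to nearest bins, which the claims do not specify.

```python
import numpy as np

def select_bias_voltages(iq, lut, amp_step, freq_step, fs=1.0):
    """Per-sample 2D bias selection: quantize the instantaneous amplitude
    and |frequency| of the complex baseband signal `iq` and index a LUT
    mapping (amplitude bin, frequency bin) pairs to bias voltages."""
    amp = np.abs(iq)                     # stand-in for the CORDIC amplitude
    phase = np.unwrap(np.angle(iq))      # stand-in for the CORDIC phase
    # Differentiator: instantaneous frequency is the rate of change of phase.
    freq = np.diff(phase, prepend=phase[0]) * fs / (2.0 * np.pi)
    # Quantize the (amplitude, frequency) pair to LUT bins (nearest bin).
    ai = np.clip(np.rint(amp / amp_step).astype(int), 0, lut.shape[0] - 1)
    fi = np.clip(np.rint(np.abs(freq) / freq_step).astype(int), 0, lut.shape[1] - 1)
    return lut[ai, fi]                   # per-sample bias voltage
```

For a constant-envelope tone, every sample after the first (whose frequency estimate is zero because of the `prepend`) maps to a single LUT cell, so the selected bias voltage is constant, as expected for a signal with constant amplitude and frequency.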
2,600
10,381
10,381
14,699,176
2,624
A method is provided for determining an input position on a touch-sensitive display by a user. The method includes the following steps: detecting the position of the eyes of the user, in particular with the aid of a camera; detecting the position of a contact with the touch-sensitive display; and determining the input position based on the detected position of the eyes relative to at least a portion of the touch-sensitive display, in particular in relation to a graphical element shown on the display, and the detected position of the contact.
1. A method for determining an input position on a touch-sensitive display by a user, the method comprising the acts of: detecting a position of eyes of the user; detecting a position of a contact by the user with the touch-sensitive display; and determining the input position based on the detected position of the eyes relative to at least a portion of the touch-sensitive display and the detected position of the contact. 2. The method according to claim 1, wherein the act of detecting the position of the eyes of the user is carried out via a camera. 3. The method according to claim 1, wherein the portion of the touch-sensitive display is a graphical element shown on the touch-sensitive display. 4. The method according to claim 2, wherein the portion of the touch-sensitive display is a graphical element shown on the touch-sensitive display. 5. The method according to claim 1, wherein the input position is determined proceeding from the position of the contact, which is adapted based on an angle that is associated with the position of the eyes relative to the at least one portion of the display such that the smaller the angle the greater the adaptation, wherein the angle is measured proceeding from the display. 6. The method according to claim 5, wherein the angle is determined based on an angle between the viewing direction, as determined by the position of the eyes and the at least one portion of the display, and the at least one portion of the display, wherein a normal of the at least one portion of the display is considered in determining the angle. 7. The method according to claim 6, wherein the angle is determined based on the angle between a projection of the viewing direction onto the display according to the direction of the normal and the viewing direction. 8. 
The method according to claim 5, wherein the angle is determined based on the angle between a viewing direction, as determined by the position of the eyes and the at least one portion of the display, and a predetermined direction associated with the display, wherein the predetermined direction is perpendicular to a normal of the at least one portion of the display. 9. The method according to claim 8, wherein the angle is determined based on the angle between a projection of the viewing direction onto the display according to the direction of the normal and the predetermined direction associated with the display. 10. The method according to claim 6, wherein a further angle is determined based on an angle between a viewing direction, as determined by the position of the eyes and the at least one portion of the display, and a predetermined direction associated with the display, wherein the predetermined direction is perpendicular to a normal of the at least one portion of the display. 11. The method according to claim 8, wherein the further angle is determined based on an angle between a projection of the viewing direction onto the display according to the direction of the normal and the predetermined direction associated with the display. 12. A device for determining an input position on an apparatus by a user, the device comprising: a touch-sensitive display; a camera configured to detect the position of eyes of a user; and an electronic processing unit; wherein the electronic processing unit executes a program to: detect a position of eyes of the user via the camera; detect a position of a contact by the user with the touch-sensitive display; and determine the input position based on the detected position of the eyes relative to at least a portion of the touch-sensitive display and the detected position of the contact.
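The adaptation in claim 5 (the smaller the viewing angle relative to the display, the greater the correction) can be illustrated with a simple parallax model. This is a sketch under assumptions of my own, not the patented method: the pixel plane is taken as z = 0 with normal +z, the touch surface sits a small `glass_thickness` above it, and the detected contact is projected along the gaze ray from the eye position down to the pixel plane.

```python
import numpy as np

def corrected_input_position(contact_xy, eye_pos, glass_thickness=1.5):
    """Shift a detected contact point along the user's gaze ray to the
    pixel plane. An oblique gaze (small angle to the display) yields a
    large shift; a perpendicular gaze yields no shift."""
    # Contact lies on the touch surface, slightly above the pixel plane.
    contact = np.array([contact_xy[0], contact_xy[1], glass_thickness])
    view = contact - np.asarray(eye_pos, dtype=float)   # gaze direction
    # Extend the gaze ray from the touch surface down to z = 0.
    t = glass_thickness / -view[2]       # view[2] < 0 when eye is above display
    target = contact + t * view
    return target[0], target[1]
```

With the eye directly above the contact the correction vanishes; moving the eye off to the side makes the gaze oblique and pushes the corrected position away from the eye, reproducing the "smaller angle, greater adaptation" behavior the claim describes.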
2,600
10,382
10,382
15,618,898
2,616
In one embodiment, a method includes a system accessing an image, which may comprise covered and uncovered portions, and an overlay image comprising opaque pixels. The covered portion may be configured to be covered by the opaque pixels of the overlay image. The system may generate a data structure comprising data elements associated with pixels of the image. Each of the data elements associated with a covered pixel in the covered portion of the image may be configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel. Each covered pixel in the covered portion of the image may be modified by accessing the data element associated with the covered pixel, determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element, and modifying a color of the covered pixel based on the distance.
1. A method, comprising: by a computing system, accessing an image and an overlay image, wherein the overlay image comprises opaque pixels, wherein the image comprises a covered portion and an uncovered portion, wherein the covered portion is configured to be covered by the opaque pixels of the overlay image; by the computing system, generating a data structure comprising data elements associated with pixels of the image, wherein each of the data elements associated with a covered pixel in the covered portion of the image is configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel; by the computing system, modifying each covered pixel in the covered portion of the image by: accessing the data element associated with the covered pixel; determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element; and modifying a color of the covered pixel based on the distance. 2. The method of claim 1, further comprising: by the computing system, compressing the modified image; and by the computing system, transmitting the compressed modified image and the overlay image to a user device; wherein the overlay image is configured to be displayed on top of the compressed modified image. 3. The method of claim 1, wherein each of the opaque pixels in the overlay image is associated with one of the data elements in the data structure. 4. The method of claim 1, wherein the overlay image comprises one or more rows of pixels and one or more columns of pixels; wherein the generating of the data structure comprises: for each row of the pixels in the overlay image, sequentially processing the pixels in the row from a first direction and from a second direction; and for each column of the pixels in the overlay image, sequentially processing the pixels in the column from a third direction and from a fourth direction. 5. 
The method of claim 1, wherein the overlay image comprises one or more rows of pixels and one or more columns of pixels; wherein each of the opaque pixels in the overlay image is associated with one of the rows of pixels and one of the columns of the pixels; wherein the generating of the data structure comprises: for at least one of the opaque pixels in the overlay image: determining a first distance between the opaque pixel and a first non-opaque pixel in the associated row of pixels, the first non-opaque pixel being a closest non-opaque pixel that is located left of the opaque pixel; determining a second distance between the opaque pixel and a second non-opaque pixel in the associated row of pixels, the second non-opaque pixel being a closest non-opaque pixel that is located right of the opaque pixel; determining a third distance between the opaque pixel and a third non-opaque pixel in the associated column of pixels, the third non-opaque pixel being a closest non-opaque pixel that is located above the opaque pixel; determining a fourth distance between the opaque pixel and a fourth non-opaque pixel in the associated column of pixels, the fourth non-opaque pixel being a closest non-opaque pixel that is located below the opaque pixel; selecting a closest non-opaque pixel based on the first distance, the second distance, the third distance, and the fourth distance; and generating one of the data elements based on the selected closest non-opaque pixel. 6. The method of claim 5, wherein the generated data element comprises a direction value and a distance value associated with the selected closest non-opaque pixel. 7. The method of claim 1, wherein the overlay image is one of a plurality of frames in an animated overlay effect; wherein each of the plurality of frames comprises opaque pixels; and wherein locations of the opaque pixels in the overlay image correspond to locations of the opaque pixels in each of the plurality of frames. 8. 
The method of claim 7, wherein the image is one of a plurality of frames in a video, the method further comprising: modifying each of the other frames in the video using the data structure. 9. The method of claim 1, wherein the overlay image is one of a plurality of frames in an animated overlay effect; wherein the image is one of a plurality of frames in a video; wherein each of the plurality of frames in the video is configured to be covered by an associated frame of the plurality of frames in the animated overlay effect; and wherein each of the plurality of frames in the video comprises a covered portion that is covered by opaque pixels of the associated frame in the animated overlay effect. 10. The method of claim 9, further comprising: for each of the plurality of frames in the video: generating a corresponding data structure using the associated frame in the animated overlay effect; and modifying the frame in the video using the corresponding data structure. 11. The method of claim 9, further comprising: determining that the location of the covered portion of the image corresponds to the locations of the covered portions of the other frames in the video. 12. The method of claim 1, further comprising: determining that a region defined by the opaque pixels of the overlay image satisfies predetermined conditions associated with size and shape. 13. The method of claim 1, wherein the modifying of the color of the covered pixel comprises: modifying the color of the covered pixel to a predetermined masking color when the distance exceeds a threshold; and modifying the color of the covered pixel to a color of the closest uncovered pixel when the distance does not exceed the threshold. 14. 
One or more computer-readable non-transitory storage media embodying software that is operable when executed to: access an image and an overlay image, wherein the overlay image comprises opaque pixels, wherein the image comprises a covered portion and an uncovered portion, wherein the covered portion is configured to be covered by the opaque pixels of the overlay image; generate a data structure comprising data elements associated with pixels of the image, wherein each of the data elements associated with a covered pixel in the covered portion of the image is configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel; modify each covered pixel in the covered portion of the image by: accessing the data element associated with the covered pixel; determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element; and modifying a color of the covered pixel based on the distance. 15. The media of claim 14, wherein the software is further operable when executed to: compress the modified image; and transmit the compressed modified image and the overlay image to a user device; wherein the overlay image is configured to be displayed on top of the compressed modified image. 16. The media of claim 14, wherein the overlay image comprises one or more rows of pixels and one or more columns of pixels; wherein the software, when generating the data structure, is further operable to: for each row of the pixels in the overlay image, sequentially process the pixels in the row from a first direction and from a second direction; and for each column of the pixels in the overlay image, sequentially process the pixels in the column from a third direction and from a fourth direction. 17. 
The media of claim 14, wherein the software, when modifying the color of the covered pixel, is further operable to: modify the color of the covered pixel to a predetermined masking color when the distance exceeds a threshold; and modify the color of the covered pixel to a color of the closest uncovered pixel when the distance does not exceed the threshold. 18. A system comprising: one or more processors; and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to: access an image and an overlay image, wherein the overlay image comprises opaque pixels, wherein the image comprises a covered portion and an uncovered portion, wherein the covered portion is configured to be covered by the opaque pixels of the overlay image; generate a data structure comprising data elements associated with pixels of the image, wherein each of the data elements associated with a covered pixel in the covered portion of the image is configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel; modify each covered pixel in the covered portion of the image by: accessing the data element associated with the covered pixel; determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element; and modifying a color of the covered pixel based on the distance. 19. The system of claim 18, wherein the overlay image comprises one or more rows of pixels and one or more columns of pixels; wherein the processors, when executing the instructions to generate the data structure, are operable to: for each row of the pixels in the overlay image, sequentially process the pixels in the row from a first direction and from a second direction; and for each column of the pixels in the overlay image, sequentially process the pixels in the column from a third direction and from a fourth direction. 20. 
The system of claim 18, wherein the processors, when executing the instructions to modify the color of the covered pixel, are operable to: modify the color of the covered pixel to a predetermined masking color when the distance exceeds a threshold; and modify the color of the covered pixel to a color of the closest uncovered pixel when the distance does not exceed the threshold.
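The four sequential scans of claims 4-5 (each row from two directions, each column from two directions, then the minimum of the four candidates) amount to per-row and per-column 1D distance passes. This is a minimal NumPy sketch under assumptions of my own (function name and boolean-mask encoding are mine, and it returns only the distance, not the direction value a real data element would also carry); note it selects the nearest uncovered pixel along the same row or column, as in claim 5, not a full 2D distance transform.

```python
import numpy as np

def nearest_uncovered_distance(covered):
    """For each covered pixel (True in `covered`), the distance to the
    closest uncovered pixel found by scanning its row from the left and
    right and its column from above and below, keeping the minimum."""
    h, w = covered.shape
    row_d = np.where(covered, np.inf, 0.0)  # 0 at uncovered pixels
    col_d = row_d.copy()
    for y in range(h):
        for x in range(1, w):               # nearest uncovered to the left
            row_d[y, x] = min(row_d[y, x], row_d[y, x - 1] + 1)
        for x in range(w - 2, -1, -1):      # nearest uncovered to the right
            row_d[y, x] = min(row_d[y, x], row_d[y, x + 1] + 1)
    for x in range(w):
        for y in range(1, h):               # nearest uncovered above
            col_d[y, x] = min(col_d[y, x], col_d[y - 1, x] + 1)
        for y in range(h - 2, -1, -1):      # nearest uncovered below
            col_d[y, x] = min(col_d[y, x], col_d[y + 1, x] + 1)
    return np.minimum(row_d, col_d)
```

The masking step of claim 13 then follows directly: where the returned distance exceeds a threshold, write a fixed masking color; otherwise copy the color of the identified closest uncovered pixel.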
A system comprising: one or more processors; and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to: access an image and an overlay image, wherein the overlay image comprises opaque pixels, wherein the image comprises a covered portion and an uncovered portion, wherein the covered portion is configured to be covered by the opaque pixels of the overlay image; generate a data structure comprising data elements associated with pixels of the image, wherein each of the data elements associated with a covered pixel in the covered portion of the image is configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel; modify each covered pixel in the covered portion of the image by: accessing the data element associated with the covered pixel; determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element; and modifying a color of the covered pixel based on the distance. 19. The system of claim 18, wherein the overlay image comprises one or more rows of pixels and one or more columns of pixels; wherein the processors, when executing the instructions to generate the data structure, are operable to: for each row of the pixels in the overlay image, sequentially process the pixels in the row from a first direction and from a second direction; and for each column of the pixels in the overlay image, sequentially process the pixels in the column from a third direction and from a fourth direction. 20. 
The system of claim 18, wherein the processors, when executing the instructions to modify the color of the covered pixel, are operable to: modify the color of the covered pixel to a predetermined masking color when the distance exceeds a threshold; and modify the color of the covered pixel to a color of the closest uncovered pixel when the distance does not exceed the threshold.
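The masking scheme recited in the claims above (find each covered pixel's closest uncovered pixel, then either copy that pixel's color or apply a masking color depending on a distance threshold, per claims 5 and 13) can be illustrated with a small sketch. This is a hypothetical Python rendering under assumed names, not the patent's implementation; it follows claim 5's four-direction distance comparison rather than the two-pass sweep of claim 4.

```python
# Illustrative sketch of the nearest-uncovered-pixel masking described in the
# claims. All function and variable names are assumptions for illustration.

def build_distance_map(opaque):
    """For each opaque (covered) pixel, find the closest non-opaque pixel
    left/right in its row and above/below in its column (claim 5), keeping
    the overall closest as (distance, (row, col))."""
    h, w = len(opaque), len(opaque[0])
    INF = float("inf")
    result = {}
    for y in range(h):
        for x in range(w):
            if not opaque[y][x]:
                continue
            best = (INF, None)
            for dx in (-1, 1):            # scan left, then right, in the row
                nx = x + dx
                while 0 <= nx < w and opaque[y][nx]:
                    nx += dx
                if 0 <= nx < w:
                    best = min(best, (abs(nx - x), (y, nx)))
            for dy in (-1, 1):            # scan up, then down, in the column
                ny = y + dy
                while 0 <= ny < h and opaque[ny][x]:
                    ny += dy
                if 0 <= ny < h:
                    best = min(best, (abs(ny - y), (ny, x)))
            result[(y, x)] = best
    return result

def mask_image(pixels, opaque, threshold, mask_color=0):
    """Modify each covered pixel (claim 13): masking color when the distance
    exceeds the threshold, else the color of the closest uncovered pixel."""
    dist_map = build_distance_map(opaque)
    out = [row[:] for row in pixels]
    for (y, x), (dist, nearest) in dist_map.items():
        if nearest is None or dist > threshold:
            out[y][x] = mask_color
        else:
            ny, nx = nearest
            out[y][x] = pixels[ny][nx]
    return out
```

The point of the pre-built data structure is that the per-pixel scans run once; re-masking every frame of a video against the same overlay (claim 8) then costs only a dictionary lookup per covered pixel.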
2,600
10,383
10,383
15,154,313
2,625
The present application provides a transmitter including a first sensor and a processing module electrically coupled to the first sensor. The processing module is configured to transmit a first electric signal during a first time period. The first electric signal represents at least one part of a digital sensing value of the first sensor.
1. A transmitter, comprising: a tip section; a first sensor; and a processing module, electrically coupled to the first sensor and the tip section, configured to transmit a first electric signal during a first time period, wherein the first electric signal represents a first digital of a digital sensing value of the first sensor in number system based on N, where N is bigger than or equal to 2. 2. The transmitter of claim 1, wherein the digital sensing value is a pressure digital value that the first sensor senses when the tip section of the transmitter is pressed. 3. The transmitter of claim 1, wherein the first electric signal includes at least N−1 state values. 4. The transmitter of claim 1, wherein the first electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 5. The transmitter of claim 1, wherein a transmitting time length of the first electric signal relates to the first digital's value and a unit time length. 6. The transmitter of claim 1, wherein a pulse-repeating frequency of the first electric signal relates to the first digital's value. 7. The transmitter of claim 1, wherein the processing module is used to transmit a second electric signal during a second time period, the second electric signal is used to represent a second digital of the digital sensing value in number system based on M, where M is bigger than or equal to 2. 8. The transmitter of claim 7, wherein M is not equal to N. 9. The transmitter of claim 7, wherein the second electric signal includes at least M−1 state values. 10. The transmitter of claim 7, wherein the second electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 11. 
The transmitter of claim 7, wherein a transmitting time length of the second electric signal relates to the second digital's value and a unit time length, the electric signal is not transmitted in an interval between the first time period and the second time period. 12. The transmitter of claim 7, wherein a pulse-repeating frequency of the second electric signal relates to the second digital's value. 13. The transmitter of claim 12, wherein the length of the first time period is equal to the length of the second time period. 14. The transmitter of claim 1, wherein the transmitter further includes a second sensor, the processing module is used to transmit a third electric signal during a third time period, the third electric signal is used to represent a digital sensing value of the second sensor. 15. The transmitter of claim 14, wherein the transmitter further includes a third sensor, and the third electric signal is further used to represent a digital sensing value of the third sensor. 16. The transmitter of claim 14, wherein the third time period is after the transmitter receives a beacon signal, the first time period is after the third time period. 17. The transmitter of claim 14, wherein the transmitter further includes a ring electrode surrounding the tip section, the third electric signal is transmitted by the tip section or the ring electrode or the combination of them. 18. A method for controlling a transmitter including a first sensor and a tip section, the method comprising: transmitting a first electric signal through the tip section during a first time period, wherein the first electric signal represents a first digital of a digital sensing value of the first sensor in number system based on N, where N is bigger than or equal to 2. 19. The method of claim 18, further comprising: using the first sensor to sense a pressure digital value when the tip section of the transmitter is pressed, so as to get a digital sensing value of the first sensor. 20. 
The method of claim 18, wherein the first electric signal includes at least N−1 state values. 21. The method of claim 18, wherein the first electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 22. The method of claim 18, wherein a transmitting time length of the first electric signal relates to the first digital's value and a unit time length. 23. The method of claim 18, wherein a pulse-repeating frequency of the first electric signal relates to the first digital's value. 24. The method of claim 18, further comprising: transmitting a second electric signal during a second time period, the second electric signal is used to represent a second digital of the digital sensing value in number system based on M, where M is bigger than or equal to 2. 25. The method of claim 24, wherein M is not equal to N. 26. The method of claim 24, wherein the second electric signal includes at least M−1 state values. 27. The method of claim 24, wherein the second electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 28. The method of claim 24, wherein a transmitting time length of the second electric signal relates to the second digital's value and a unit time length, the electric signal is not transmitted in an interval between the first time period and the second time period. 29. The method of claim 24, wherein a pulse-repeating frequency of the second electric signal relates to the second digital's value. 30. The method of claim 29, wherein the length of the first time period is equal to the length of the second time period. 31. The method of claim 18, wherein the transmitter further includes a second sensor, the method further includes transmitting a third electric signal during a third time period, the third electric signal is used to represent a digital sensing value of the second sensor. 32. 
The method of claim 31, wherein the transmitter further includes a third sensor, and the third electric signal is further used to represent a digital sensing value of the third sensor. 33. The method of claim 31, wherein the third time period is after the transmitter receives a beacon signal, the first time period is after the third time period. 34. The method of claim 31, wherein the transmitter further includes a ring electrode surrounding the tip section, and the third electric signal is transmitted by the tip section or the ring electrode or the combination of them.
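The digit-by-digit signalling in claims 1, 5, 18, and 22 (each digit of the sensing value in a base-N number system is transmitted in its own time period, with a transmitting time length related to the digit's value and a unit time length) can be sketched as follows. This is an illustrative assumption of one way such a mapping could work, not the filed implementation; the names are invented.

```python
# Hedged sketch: encode a digital sensing value (e.g. tip pressure) as
# per-digit transmit durations, per the base-N scheme in the claims.

def to_digits(value, base, ndigits):
    """Most-significant-first digits of `value` in the given base.
    Digits above `ndigits` are truncated in this simplified sketch."""
    digits = []
    for _ in range(ndigits):
        digits.append(value % base)
        value //= base
    return digits[::-1]

def digit_durations(value, base, ndigits, unit_time):
    """One transmit-time length per digit: duration relates to the digit's
    value and a unit time length (claims 5 and 22). A second group of
    digits could use a different base M (claims 7 and 24)."""
    return [d * unit_time for d in to_digits(value, base, ndigits)]
```

Using different bases N and M for different digit groups (claim 8) simply means calling `digit_durations` with a different `base` for each time period.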
The present application provides a transmitter including a first sensor and a processing module electrically coupled to the first sensor. The processing module is configured to transmit a first electric signal during a first time period. The first electric signal represents at least one part of a digital sensing value of the first sensor.1. A transmitter, comprising: a tip section; a first sensor; and a processing module, electrically coupled to the first sensor and the tip section, configured to transmit a first electric signal during a first time period, wherein the first electric signal represents a first digital of a digital sensing value of the first sensor in number system based on N, where N is bigger than or equal to 2. 2. The transmitter of claim 1, wherein the digital sensing value is a pressure digital value that the first sensor senses when the tip section of the transmitter is pressed. 3. The transmitter of claim 1, wherein the first electric signal includes at least N−1 state values. 4. The transmitter of claim 1, wherein the first electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 5. The transmitter of claim 1, wherein a transmitting time length of the first electric signal relates to the first digital's value and a unit time length. 6. The transmitter of claim 1, wherein a pulse-repeating frequency of the first electric signal relates to the first digital's value. 7. The transmitter of claim 1, wherein the processing module is used to transmit a second electric signal during a second time period, the second electric signal is used to represent a second digital of the digital sensing value in number system based on M, where M is bigger than or equal to 2. 8. The transmitter of claim 7, wherein M is not equal to N. 9. The transmitter of claim 7, wherein the second electric signal includes at least M−1 state values. 10. 
The transmitter of claim 7, wherein the second electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 11. The transmitter of claim 7, wherein a transmitting time length of the second electric signal relates to the second digital's value and a unit time length, the electric signal is not transmitted in an interval between the first time period and the second time period. 12. The transmitter of claim 7, wherein a pulse-repeating frequency of the second electric signal relates to the second digital's value. 13. The transmitter of claim 12, wherein the length of the first time period is equal to the length of the second time period. 14. The transmitter of claim 1, wherein the transmitter further includes a second sensor, the processing module is used to transmit a third electric signal during a third time period, the third electric signal is used to represent a digital sensing value of the second sensor. 15. The transmitter of claim 14, wherein the transmitter further includes a third sensor, and the third electric signal is further used to represent a digital sensing value of the third sensor. 16. The transmitter of claim 14, wherein the third time period is after the transmitter receives a beacon signal, the first time period is after the third time period. 17. The transmitter of claim 14, wherein the transmitter further includes a ring electrode surrounding the tip section, the third electric signal is transmitted by the tip section or the ring electrode or the combination of them. 18. A method for controlling a transmitter including a first sensor and a tip section, the method comprising: transmitting a first electric signal through the tip section during a first time period, wherein the first electric signal represents a first digital of a digital sensing value of the first sensor in number system based on N, where N is bigger than or equal to 2. 19. 
The method of claim 18, further comprising: using the first sensor to sense a pressure digital value when the tip section of the transmitter is pressed, so as to get a digital sensing value of the first sensor. 20. The method of claim 18, wherein the first electric signal includes at least N−1 state values. 21. The method of claim 18, wherein the first electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 22. The method of claim 18, wherein a transmitting time length of the first electric signal relates to the first digital's value and a unit time length. 23. The method of claim 18, wherein a pulse-repeating frequency of the first electric signal relates to the first digital's value. 24. The method of claim 18, further comprising: transmitting a second electric signal during a second time period, the second electric signal is used to represent a second digital of the digital sensing value in number system based on M, where M is bigger than or equal to 2. 25. The method of claim 24, wherein M is not equal to N. 26. The method of claim 24, wherein the second electric signal includes at least M−1 state values. 27. The method of claim 24, wherein the second electric signal further includes any or any combination of redundancy code, error correcting code, and check code of the state value thereof. 28. The method of claim 24, wherein a transmitting time length of the second electric signal relates to the second digital's value and a unit time length, the electric signal is not transmitted in an interval between the first time period and the second time period. 29. The method of claim 24, wherein a pulse-repeating frequency of the second electric signal relates to the second digital's value. 30. The method of claim 29, wherein the length of the first time period is equal to the length of the second time period. 31. 
The method of claim 18, wherein the transmitter further includes a second sensor, the method further includes transmitting a third electric signal during a third time period, the third electric signal is used to represent a digital sensing value of the second sensor. 32. The method of claim 31, wherein the transmitter further includes a third sensor, and the third electric signal is further used to represent a digital sensing value of the third sensor. 33. The method of claim 31, wherein the third time period is after the transmitter receives a beacon signal, the first time period is after the third time period. 34. The method of claim 31, wherein the transmitter further includes a ring electrode surrounding the tip section, and the third electric signal is transmitted by the tip section or the ring electrode or the combination of them.
2,600
10,384
10,384
15,869,333
2,646
A system includes a processor configured to determine that a boundary parameter associated with an automatic action has been met. The processor is also configured to determine that one or more secondary vehicle-related predefined conditions pre-associated with the action have been met, responsive to determining that the boundary parameter has been met. The processor is additionally configured to instruct the automatic action responsive to the secondary vehicle-related predefined conditions being met.
1. A system comprising: a processor configured to: determine that a boundary parameter pre-associated with a device activation command has been met; responsive to determining that the boundary parameter has been met, determine that a vehicle destination, identified by a navigation unit and pre-associated with the device activation command, has been met; and instruct the device activation command responsive to the destination being met. 2. The system of claim 1, wherein the boundary parameter includes crossing a geographic boundary. 3-10. (canceled) 11. The system of claim 1, wherein the boundary parameter includes passing a temporal boundary. 12. The system of claim 1, wherein the boundary parameter includes passing a weather-related boundary. 13. A computer-implemented method comprising: responsive to vehicle-data variables indicating meeting both a boundary that is pre-associated with a consumer device, and a vehicle heading that is predefinedly associated with the boundary parameter, as a variable to be considered upon reaching the boundary parameter, and which is considered responsive to meeting the boundary parameter, the results of which indicate a direction of crossing the boundary, engaging in a predefined automatic interaction with a consumer device remote from the vehicle when the direction of crossing the boundary meets a predefined direction of crossing the boundary pre-associated with the predefined automatic interaction. 14. The method of claim 13, wherein the boundary includes a geographic boundary. 15-16. (canceled) 17. The method of claim 13, wherein the automatic interaction includes instructing activation of the consumer device. 18. The method of claim 13, wherein the automatic interaction includes transmitting vehicle information to the consumer device. 19. The method of claim 18, wherein the vehicle information includes arrival time. 20. 
A computer-implemented method comprising: responsive to crossing a predefined geographic boundary having a predefined relationship with both a consumer device and a predefined specific vehicle-detectable environmental condition defined by a user, both the boundary and the condition also predefinedly associated with the consumer device, determining when the environmental condition is met; and responsive to determining that the environmental condition is met, executing a predefined interaction with the consumer device. 21. The system of claim 1, wherein the determination that a vehicle destination, identified by a navigation unit and pre-associated with the device activation command, has been met is further responsive to determining that the boundary parameter has been met while the navigation unit indicates a current heading indicating meeting the boundary parameter while moving towards the destination.
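Claim 13's core condition, triggering an interaction with a remote consumer device only when the vehicle crosses a boundary in a predefined direction, can be sketched compactly. This is a hypothetical illustration under simplified assumptions (a straight north-south boundary line and 2-D positions); the names are invented, not from the patent.

```python
# Hedged sketch of direction-sensitive boundary crossing (claim 13):
# the automatic interaction fires only when the boundary is crossed
# in the predefined direction.

def crossed_boundary(prev_pos, curr_pos, boundary_x):
    """Detect crossing of a vertical boundary line at x = boundary_x between
    two position samples, returning the direction of crossing or None."""
    if prev_pos[0] < boundary_x <= curr_pos[0]:
        return "eastbound"
    if curr_pos[0] < boundary_x <= prev_pos[0]:
        return "westbound"
    return None

def maybe_trigger(prev_pos, curr_pos, boundary_x, required_direction, action):
    """Engage the predefined automatic interaction (`action`) only when the
    direction of crossing matches the pre-associated direction."""
    direction = crossed_boundary(prev_pos, curr_pos, boundary_x)
    if direction == required_direction:
        return action()
    return None
```

The direction check is what keeps, say, a garage-door command from firing both when leaving home and when returning; only one direction of crossing is pre-associated with the interaction.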
A system includes a processor configured to determine that a boundary parameter associated with an automatic action has been met. The processor is also configured to determine that one or more secondary vehicle-related predefined conditions pre-associated with the action have been met, responsive to determining that the boundary parameter has been met. The processor is additionally configured to instruct the automatic action responsive to the secondary vehicle-related predefined conditions being met.1. A system comprising: a processor configured to: determine that a boundary parameter pre-associated with a device activation command has been met; responsive to determining that the boundary parameter has been met, determine that a vehicle destination, identified by a navigation unit and pre-associated with the device activation command, has been met; and instruct the device activation command responsive to the destination being met. 2. The system of claim 1, wherein the boundary parameter includes crossing a geographic boundary. 3-10. (canceled) 11. The system of claim 1, wherein the boundary parameter includes passing a temporal boundary. 12. The system of claim 1, wherein the boundary parameter includes passing a weather-related boundary. 13. A computer-implemented method comprising: responsive to vehicle-data variables indicating meeting both a boundary that is pre-associated with a consumer device, and a vehicle heading that is predefinedly associated with the boundary parameter, as a variable to be considered upon reaching the boundary parameter, and which is considered responsive to meeting the boundary parameter, the results of which indicate a direction of crossing the boundary, engaging in a predefined automatic interaction with a consumer device remote from the vehicle when the direction of crossing the boundary meets a predefined direction of crossing the boundary pre-associated with the predefined automatic interaction. 14. 
The method of claim 13, wherein the boundary includes a geographic boundary. 15-16. (canceled) 17. The method of claim 13, wherein the automatic interaction includes instructing activation of the consumer device. 18. The method of claim 13, wherein the automatic interaction includes transmitting vehicle information to the consumer device. 19. The method of claim 18, wherein the vehicle information includes arrival time. 20. A computer-implemented method comprising: responsive to crossing a predefined geographic boundary having a predefined relationship with both a consumer device and a predefined specific vehicle-detectable environmental condition defined by a user, both the boundary and the condition also predefinedly associated with the consumer device, determining when the environmental condition is met; and responsive to determining that the environmental condition is met, executing a predefined interaction with the consumer device. 21. The system of claim 1, wherein the determination that a vehicle destination, identified by a navigation unit and pre-associated with the device activation command, has been met is further responsive to determining that the boundary parameter has been met while the navigation unit indicates a current heading indicating meeting the boundary parameter while moving towards the destination.
2,600
10,385
10,385
15,373,929
2,622
In described examples, an apparatus includes a metal plate having a plurality of defined areas forming touch sensors on a first planar surface, and having an opposing planar surface. The metal plate is arranged to be deformable in the plurality of defined areas by a human touch. A circuit board has a plurality of conductive sensors on a first surface arranged with the plurality of conductive sensors facing and spaced from the opposing planar surface of the metal plate, the conductive sensors placed in correspondence with the defined areas on the metal plate so that deflection sensors are formed in the defined areas by the conductive sensors and the opposing planar surface of the metal plate. Methods are described.
1. An apparatus, comprising: a metal plate having a plurality of defined areas forming touch sensors on a first planar surface, and having an opposing planar surface, the metal plate configured to be deformable in the plurality of defined areas by a human touch, and the metal plate having non-touch areas in areas other than the defined areas; and a circuit board having a plurality of conductive sensors on a first surface arranged with the plurality of conductive sensors facing and spaced from the opposing planar surface of the metal plate, the conductive sensors placed in correspondence with the defined areas on the metal plate so that deflection sensors are formed in the defined areas by the conductive sensors and the opposing planar surface of the metal plate. 2. The apparatus of claim 1, in which the metal plate has a first thickness and includes a plurality of blind holes extending into the metal plate at the opposing planar surface to provide a second thickness of the metal plate less than the first thickness in the plurality of defined areas. 3. The apparatus of claim 2, comprising: a plurality of pillars on the circuit board extending into the plurality of blind holes and having at least one of the plurality of conductive sensors at a top surface of the pillars facing and spaced from the opposing planar surface of the metal plate, a deflection sensor being formed between the at least one of the defined areas of the metal plate and at least one of the plurality of conductive sensors at the top surface of the pillar. 4. 
The apparatus of claim 2, comprising: a plurality of spring pillars on the circuit board extending into the plurality of blind holes in the metal plate and having at least one of the plurality of conductive sensors at a top portion of the spring pillars facing and spaced from the opposing planar surface of the metal plate, at least one deflection sensor being formed between the opposing planar surface of the metal plate in the defined areas and the at least one of the plurality of conductive sensors at the top portion of the spring pillars. 5. The apparatus of claim 1, in which the metal plate comprises a plurality of posts formed on the opposing planar surface of the metal plate and extending away from the opposing planar surface a predetermined distance, and having blind openings extending into a top surface of the plurality of posts for receiving a fastener. 6. The apparatus of claim 5, in which the plurality of posts are placed around the defined areas and are configured to prevent the metal plate from deforming in non-touch areas other than the defined areas. 7. The apparatus of claim 6, and further including fasteners inserted in the blind openings in the plurality of posts to join a backing component covering a second planar surface of the circuit board to the metal plate. 8. The apparatus of claim 7, in which the fasteners are ones selected from a group consisting essentially of screws, rivets, brads and pins. 9. The apparatus of claim 1, in which the metal plate comprises a metal selected from a group consisting essentially of stainless steel and aluminum. 10. The apparatus of claim 1, in which the conductive sensors are one selected from a group consisting essentially of capacitive sensors and inductive sensors. 11. 
An apparatus, comprising: a metal plate having at least one defined area forming a touch sensor on a first planar surface, and having an opposing planar surface, the metal plate being deformable in the defined area by a human touch on the first planar surface; a recessed portion on the opposing planar surface of the metal plate having a recess depth, the recess depth defining a spacing distance; flange portions surrounding the recessed portion on the opposing planar surface of the metal plate and not having the recess depth; a circuit board having a plurality of sensors on an upper surface, the sensors arranged in rows and columns, the plurality of sensors placed facing and in correspondence with the recessed portion of the opposing planar surface of the metal plate; and the flange portions on the opposing planar surface of the metal plate contacting the upper surface of the circuit board, and the sensors being spaced from the opposing planar surface of the metal plate by the spacing distance. 12. The apparatus of claim 11, in which the touch sensor of the metal plate forms a gesture sensor area. 13. The apparatus of claim 11, in which the touch sensor of the metal plate forms a sliding sensor area. 14. The apparatus of claim 11, in which the touch sensor of the metal plate forms a wheel sensor area. 15. The apparatus of claim 11, in which the plurality of sensors comprise capacitive sensors that change capacitance when an area of the metal plate is deflected by a human touch. 16. The apparatus of claim 11, in which the plurality of sensors comprise inductive sensors that form an electric field that changes when an area of the metal plate is deflected by a human touch. 17. The apparatus of claim 11, in which the defined area further comprises a plurality of defined button areas forming touch sensor buttons, spaced apart by areas on the metal plate forming non-touch areas. 18. 
The apparatus of claim 17, and further comprising a processor coupled to the plurality of sensors, configured to detect a change in capacitance in the plurality of sensors indicating a touch deflecting the metal plate, and configured to determine whether the touch is within a defined button area. 19. A method for detecting a human touch at a metal touch sensor, comprising: defining a touch area on a first planar surface of a metal plate, the metal plate having a second planar surface opposing the first planar surface, the metal plate having a thickness in the touch area such that the metal plate can be deflected in the touch area by a human touch; placing a plurality of sensors on a circuit board disposed facing and spaced from the second planar surface of the metal plate; coupling the plurality of sensors to a processor configured to detect a signal from the sensors corresponding to deflection of the metal plate in the touch area due to a human touch; scanning the plurality of sensors to detect a deflection in the metal plate caused by a human touch; and operating the processor to determine where in the touch area the touch occurred. 20. The method of claim 19, and further comprising: defining touch button areas within the touch area on the first planar surface of the metal plate, and further defining non-touch areas; and operating the processor to determine whether a deflection in the metal plate detected by the plurality of sensors corresponds to a touch in a defined touch button area.
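The scan loop of claim 19 (scan the sensor array, detect a deflection of the metal plate as a capacitance change, then decide whether the touch lies in a defined button area per claim 20) can be sketched as follows. This is a minimal assumed illustration, not the patent's firmware; sensor readings are modeled as plain numbers and all names are invented.

```python
# Hedged sketch of claims 19-20: scan a grid of capacitive sensors under the
# metal plate and map a detected deflection to a defined touch button area.

def detect_touch(readings, baseline, delta_threshold):
    """Return the (row, col) of the sensor whose capacitance change from its
    baseline is largest and above the threshold, or None when the plate is
    not deflected anywhere."""
    best = None
    best_delta = delta_threshold
    for r, row in enumerate(readings):
        for c, value in enumerate(row):
            delta = abs(value - baseline[r][c])
            if delta > best_delta:
                best, best_delta = (r, c), delta
    return best

def touch_in_button(touch, button_areas):
    """Decide whether a detected touch falls in a defined button area
    (claim 20); touches in non-touch areas map to None."""
    if touch is None:
        return None
    for name, cells in button_areas.items():
        if touch in cells:
            return name
    return None
```

Keeping the baseline per-sensor lets the same loop serve buttons, sliders, or wheel areas (claims 12 to 14): only the `button_areas` mapping from sensor coordinates to named regions changes.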
In described examples, an apparatus includes a metal plate having a plurality of defined areas forming touch sensors on a first planar surface, and having an opposing planar surface. The metal plate is arranged to be deformable in the plurality of defined areas by a human touch. A circuit board has a plurality of conductive sensors on a first surface arranged with the plurality of conductive sensors facing and spaced from the opposing planar surface of the metal plate, the conductive sensors placed in correspondence with the defined areas on the metal plate so that deflection sensors are formed in the defined areas by the conductive sensors and the opposing planar surface of the metal plate. Methods are described.1. An apparatus, comprising: a metal plate having a plurality of defined areas forming touch sensors on a first planar surface, and having an opposing planar surface, the metal plate configured to be deformable in the plurality of defined areas by a human touch, and the metal plate having non-touch areas in areas other than the defined areas; and a circuit board having a plurality of conductive sensors on a first surface arranged with the plurality of conductive sensors facing and spaced from the opposing planar surface of the metal plate, the conductive sensors placed in correspondence with the defined areas on the metal plate so that deflection sensors are formed in the defined areas by the conductive sensors and the opposing planar surface of the metal plate. 2. The apparatus of claim 1, in which the metal plate has a first thickness and includes a plurality of blind holes extending into the metal plate at the opposing planar surface to provide a second thickness of the metal plate less than the first thickness in the plurality of defined areas. 3. 
The apparatus of claim 2, comprising: a plurality of pillars on the circuit board extending into the plurality of blind holes and having at least one of the plurality of conductive sensors at a top surface of the pillars facing and spaced from the opposing planar surface of the metal plate, a deflection sensor being formed between the at least one of the defined areas of the metal plate and at least one of the plurality of conductive sensors at the top surface of the pillar. 4. The apparatus of claim 2, comprising: a plurality of spring pillars on the circuit board extending into the plurality of blind holes in the metal plate and having at least one of the plurality of conductive sensors at a top portion of the spring pillars facing and spaced from the opposing planar surface of the metal plate, at least one deflection sensor being formed between the opposing planar surface of the metal plate in the defined areas and the at least one of the plurality of conductive sensors at the top portion of the spring pillars. 5. The apparatus of claim 1, in which the metal plate comprises a plurality of posts formed on the opposing planar surface of the metal plate and extending away from the opposing planar surface a predetermined distance, and having blind openings extending into a top surface of the plurality of posts for receiving a fastener. 6. The apparatus of claim 5, in which the plurality of posts are placed around the defined areas and are configured to prevent the metal plate from deforming in non-touch areas other than the defined areas. 7. The apparatus of claim 6, and further including fasteners inserted in the blind openings in the plurality of posts to join a backing component covering a second planar surface of the circuit board to the metal plate. 8. The apparatus of claim 7, in which the fasteners are ones selected from a group consisting essentially of screws, rivets, brads and pins. 9. 
The apparatus of claim 1, in which the metal plate comprises a metal selected from a group consisting essentially of stainless steel and aluminum. 10. The apparatus of claim 1, in which the conductive sensors are one selected from a group consisting essentially of capacitive sensors and inductive sensors. 11. An apparatus, comprising: a metal plate having at least one defined area forming a touch sensor on a first planar surface, and having an opposing planar surface, the metal plate being deformable in the defined area by a human touch on the first planar surface; a recessed portion on the opposing planar surface of the metal plate having a recess depth, the recess depth defining a spacing distance; flange portions surrounding the recessed portion on the opposing planar surface of the metal plate and not having the recess depth; a circuit board having a plurality of sensors on an upper surface, the sensors arranged in rows and columns, the plurality of sensors placed facing and in correspondence with the recessed portion of the opposing planar surface of the metal plate; and the flange portions on the opposing planar surface of the metal plate contacting the upper surface of the circuit board, and the sensors being spaced from the opposing planar surface of the metal plate by the spacing distance. 12. The apparatus of claim 11, in which the touch sensor of the metal plate forms a gesture sensor area. 13. The apparatus of claim 11, in which the touch sensor of the metal plate forms a sliding sensor area. 14. The apparatus of claim 11, in which the touch sensor of the metal plate forms a wheel sensor area. 15. The apparatus of claim 11, in which the plurality of sensors comprise capacitive sensors that change capacitance when an area of the metal plate is deflected by a human touch. 16. 
The apparatus of claim 11, in which the plurality of sensors comprise inductive sensors that form an electric field that changes when an area of the metal plate is deflected by a human touch. 17. The apparatus of claim 11, in which the defined area further comprises a plurality of defined button areas forming touch sensor buttons, spaced apart by areas on the metal plate forming non-touch areas. 18. The apparatus of claim 17, and further comprising a processor coupled to the plurality of sensors, configured to detect a change in capacitance in the plurality of sensors indicating a touch deflecting the metal plate, and configured to determine whether the touch is within a defined button area. 19. A method for detecting a human touch at a metal touch sensor, comprising: defining a touch area on a first planar surface of a metal plate, the metal plate having a second planar surface opposing the first planar surface, the metal plate having a thickness in the touch area such that the metal plate can be deflected in the touch area by a human touch; placing a plurality of sensors on a circuit board disposed facing and spaced from the second planar surface of the metal plate; coupling the plurality of sensors to a processor configured to detect a signal from the sensors corresponding to deflection of the metal plate in the touch area due to a human touch; scanning the plurality of sensors to detect a deflection in the metal plate caused by a human touch; and operating the processor to determine where in the touch area the touch occurred. 20. The method of claim 19, and further comprising: defining touch button areas within the touch area on the first planar surface of the metal plate, and further defining non-touch areas; and operating the processor to determine whether a deflection in the metal plate detected by the plurality of sensors corresponds to a touch in a defined touch button area.
2,600
10,386
10,386
15,488,939
2,612
The present disclosure provides at least an apparatus for depth enhanced image editing. In an example, the apparatus includes a processor and a storage, comprising instructions that when executed with the processor cause the apparatus to store a received depth enhanced image. In an example, the depth enhanced image comprises image data, depth data, and calibration data. The apparatus may apply an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit. The apparatus may further return an edited depth enhanced image.
1. An image editing apparatus for a depth enhanced image comprising: a processor; a storage comprising instructions that when executed by the processor cause the apparatus to: store a received depth enhanced image comprising image data, depth data, and calibration data; apply an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit; and return an edited depth enhanced image. 2. The apparatus of claim 1, wherein the edit is a linear transformation of an intrinsic parameter through matrix multiplication. 3. The apparatus of claim 1, wherein the edit is a linear transformation of an extrinsic parameter through matrix multiplication. 4. The apparatus of claim 1, wherein the depth data indicates a distance from a fixed point, where the fixed point is used as a principal point in the calibration data. 5. The apparatus of claim 1, wherein the edit is at least one or more of rotation, cropping, flipping, skewing, or shearing. 6. The apparatus of claim 1, comprising storage to store the calibration data as a floating point number. 7. The apparatus of claim 1, wherein the image data is stored as integer values. 8. The apparatus of claim 1, wherein the edited depth enhanced image comprises an original set of calibration data with linear transformation operations made on the depth data, the image data, and calibration data for the reversion to an original depth enhanced image. 9. The apparatus of claim 1, wherein the edited depth enhanced image is returned in a container comprising an indicator that the edited calibration data is read prior to the depth data and edited image data. 10. The apparatus of claim 1, wherein the edited depth enhanced image is returned in a container formatted to have an extensible device metadata format. 11. 
A method for editing depth enhanced images comprising: storing a received depth enhanced image comprising image data, depth data, and calibration data; applying an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit; and returning an edited depth enhanced image. 12. The method of claim 11, wherein the edit is a linear transformation of intrinsic parameter through matrix multiplication. 13. The method of claim 11, wherein the edit is a linear transformation of extrinsic parameter through matrix multiplication. 14. The method of claim 11, wherein the depth data indicates a distance from a fixed point, where the fixed point is used as a principal point in the calibration data. 15. The method of claim 11, wherein the edit is at least one or more of rotation, cropping, flipping, skewing, or shearing. 16. The method of claim 11, comprising storing the calibration data in a storage as a floating point number. 17. The method of claim 11, wherein the image data is stored as integer values. 18. The method of claim 11, wherein the edited depth enhanced image comprises an original set of calibration data with linear transformation operations made on the depth data, the image data, and calibration data for the reversion to an original depth enhanced image. 19. The method of claim 11, wherein the edited depth enhanced image is returned in a container comprising an indicator that the edited calibration data is read prior to the depth data and edited image data. 20. The method of claim 11, wherein the edited depth enhanced image is returned in a container formatted to have an extensible device metadata format. 21. 
A tangible, non-transitory, computer-readable medium comprising instructions that, when executed by a processor, direct the processor to: store a received depth enhanced image comprising image data, depth data, and calibration data; apply an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit; and return an edited depth enhanced image. 22. The computer-readable medium of claim 21, wherein the edit is a linear transformation of intrinsic parameter through matrix multiplication. 23. The computer-readable medium of claim 21, wherein the edit is a linear transformation of extrinsic parameter through matrix multiplication. 24. The computer-readable medium of claim 21, wherein the depth data indicates a distance from a fixed point, where the fixed point is used as a principal point in the calibration data. 25. The computer-readable medium of claim 21, wherein the edit is at least one or more of rotation, cropping, flipping, skewing, or shearing.
The present disclosure provides at least an apparatus for depth enhanced image editing. In an example, the apparatus includes a processor and a storage, comprising instructions that when executed with the processor cause the apparatus to store a received depth enhanced image. In an example, the depth enhanced image comprises image data, depth data, and calibration data. The apparatus may apply an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit. The apparatus may further return an edited depth enhanced image.1. An image editing apparatus for a depth enhanced image comprising: a processor; a storage comprising instructions that when executed by the processor cause the apparatus to: store a received depth enhanced image comprising image data, depth data, and calibration data; apply an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit; and return an edited depth enhanced image. 2. The apparatus of claim 1, wherein the edit is a linear transformation of an intrinsic parameter through matrix multiplication. 3. The apparatus of claim 1, wherein the edit is a linear transformation of an extrinsic parameter through matrix multiplication. 4. The apparatus of claim 1, wherein the depth data indicates a distance from a fixed point, where the fixed point is used as a principal point in the calibration data. 5. The apparatus of claim 1, wherein the edit is at least one or more of rotation, cropping, flipping, skewing, or shearing. 6. The apparatus of claim 1, comprising storage to store the calibration data as a floating point number. 7. The apparatus of claim 1, wherein the image data is stored as integer values. 8. 
The apparatus of claim 1, wherein the edited depth enhanced image comprises an original set of calibration data with linear transformation operations made on the depth data, the image data, and calibration data for the reversion to an original depth enhanced image. 9. The apparatus of claim 1, wherein the edited depth enhanced image is returned in a container comprising an indicator that the edited calibration data is read prior to the depth data and edited image data. 10. The apparatus of claim 1, wherein the edited depth enhanced image is returned in a container formatted to have an extensible device metadata format. 11. A method for editing depth enhanced images comprising: storing a received depth enhanced image comprising image data, depth data, and calibration data; applying an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit; and returning an edited depth enhanced image. 12. The method of claim 11, wherein the edit is a linear transformation of intrinsic parameter through matrix multiplication. 13. The method of claim 11, wherein the edit is a linear transformation of extrinsic parameter through matrix multiplication. 14. The method of claim 11, wherein the depth data indicates a distance from a fixed point, where the fixed point is used as a principal point in the calibration data. 15. The method of claim 11, wherein the edit is at least one or more of rotation, cropping, flipping, skewing, or shearing. 16. The method of claim 11, comprising storing the calibration data in a storage as a floating point number. 17. The method of claim 11, wherein the image data is stored as integer values. 18. The method of claim 11, wherein the edited depth enhanced image comprises an original set of calibration data with linear transformation operations made on the depth data, the image data, and calibration data for the reversion to an original depth enhanced image. 19. 
The method of claim 11, wherein the edited depth enhanced image is returned in a container comprising an indicator that the edited calibration data is read prior to the depth data and edited image data. 20. The method of claim 11, wherein the edited depth enhanced image is returned in a container formatted to have an extensible device metadata format. 21. A tangible, non-transitory, computer-readable medium comprising instructions that, when executed by a processor, direct the processor to: store a received depth enhanced image comprising image data, depth data, and calibration data; apply an edit to the image data and the calibration data without editing the depth data in response to a request for an image edit; and return an edited depth enhanced image. 22. The computer-readable medium of claim 21, wherein the edit is a linear transformation of intrinsic parameter through matrix multiplication. 23. The computer-readable medium of claim 21, wherein the edit is a linear transformation of extrinsic parameter through matrix multiplication. 24. The computer-readable medium of claim 21, wherein the depth data indicates a distance from a fixed point, where the fixed point is used as a principal point in the calibration data. 25. The computer-readable medium of claim 21, wherein the edit is at least one or more of rotation, cropping, flipping, skewing, or shearing.
2,600
10,387
10,387
15,900,317
2,628
A data entry device including a housing, data entry circuitry located within the housing, a keypad mounted in the housing and having a plurality of movable key elements which, when depressed, are displaced to at least a predetermined extent from a first location within the housing to a second location within the housing and Optical Finger Navigation (OFN) circuitry mounted inside the housing, being operative for sensing at least some of the plurality of movable key elements when depressed and displaced to at least the predetermined extent from the first location within the housing to the second location within the housing and providing a key displacement output indicating key displacement to the data entry circuitry.
1-30. (canceled) 31. A device, comprising: a housing defining an enclosed space; an optical sensor disposed in the housing and operative to sense a first level of light received from a location within the housing at a first time and a second level of light received from the location within the housing at a second time, and anti-tampering detection circuitry operative to indicate a tampering event based on a difference between the first level of light and the second level of light.
A data entry device including a housing, data entry circuitry located within the housing, a keypad mounted in the housing and having a plurality of movable key elements which, when depressed, are displaced to at least a predetermined extent from a first location within the housing to a second location within the housing and Optical Finger Navigation (OFN) circuitry mounted inside the housing, being operative for sensing at least some of the plurality of movable key elements when depressed and displaced to at least the predetermined extent from the first location within the housing to the second location within the housing and providing a key displacement output indicating key displacement to the data entry circuitry.1-30. (canceled) 31. A device, comprising: a housing defining an enclosed space; an optical sensor disposed in the housing and operative to sense a first level of light received from a location within the housing at a first time and a second level of light received from the location within the housing at a second time, and anti-tampering detection circuitry operative to indicate a tampering event based on a difference between the first level of light and the second level of light.
2,600
10,388
10,388
15,963,385
2,684
A signaling system provides at least one emergency services signal for fastening to a roof of a motor vehicle. The signaling system includes a housing that is directed in the travel direction and has a low aerodynamic resistance. The signaling system furthermore has a signaling unit that is disposed within the housing and is configured for providing emergency services signals. The housing has at least two housing sides, wherein one housing side faces the roof, and the other housing side faces away from the roof. The side that faces the roof is adapted to the shape of the roof. The housing side that faces away from the roof corresponds to the suction side of an airfoil profile.
1. A signaling system for providing at least one emergency services signal and for fastening to a roof of a motor vehicle, comprising: a housing and a signaling unit that is disposed within the housing and is configured for providing said at least one emergency services signal, wherein the housing has a first housing side that faces the motor vehicle and is adapted to the roof, and a second housing side that faces away from the roof, wherein said first housing side corresponds to a suction side of an airfoil profile. 2. The signaling system as claimed in claim 1, wherein an air stream that during travel of the motor vehicle is created above the roof of the motor vehicle generates a low-pressure region, wherein the signaling system is disposed on the roof within the low-pressure region. 3. The signaling system as claimed in claim 2, wherein said second housing side corresponds to the suction side of a NACA profile. 4. The signaling system as claimed in claim 3, wherein said second housing side has a transparent rough layer that is configured for reducing lift. 5. The signaling system as claimed in claim 4, wherein said transparent rough layer has a Reynolds number of greater than 300,000. 6. The signaling system as claimed in claim 5, wherein said first housing side has an elastic layer that is configured for matching a shape of the roof. 7. The signaling system as claimed in claim 6, wherein said first housing side has a non-woven material layer that is configured for protecting paintwork of the roof. 8. The signaling system as claimed in claim 7, wherein a spacing of 0.45 m to 0.60 m is maintained between the housing and a windshield of said motor vehicle. 9. The signaling system as claimed in claim 8, wherein the spacing between the housing and the windshield is dependent on a vehicle type and on an angle between the windshield and a mean roof profile. 10. 
The signaling system as claimed in claim 9, wherein the signaling unit has an emergency-vehicle light unit and a tone-sequence horn. 11. The signaling system of claim 10, wherein the housing has end sides with a fastening device that is configured for fastening the signaling system to the roof of the motor vehicle. 12. A motor vehicle having the signaling system as claimed in claim 1. 13. A motor vehicle having the signaling system as claimed in claim 3. 14. A motor vehicle having the signaling system as claimed in claim 5. 15. A motor vehicle having the signaling system as claimed in claim 7. 16. A motor vehicle having the signaling system as claimed in claim 9. 17. A motor vehicle having the signaling system as claimed in claim 11.
A signaling system provides at least one emergency services signal for fastening to a roof of a motor vehicle. The signaling system includes a housing that is directed in the travel direction and has a low aerodynamic resistance. The signaling system furthermore has a signaling unit that is disposed within the housing and is configured for providing emergency services signals. The housing has at least two housing sides, wherein one housing side faces the roof, and the other housing side faces away from the roof. The side that faces the roof is adapted to the shape of the roof. The housing side that faces away from the roof corresponds to the suction side of an airfoil profile.1. A signaling system for providing at least one emergency services signal and for fastening to a roof of a motor vehicle, comprising: a housing and a signaling unit that is disposed within the housing and is configured for providing said at least one emergency services signal, wherein the housing has a first housing side that faces the motor vehicle and is adapted to the roof, and a second housing side that faces away from the roof, wherein said first housing side corresponds to a suction side of an airfoil profile. 2. The signaling system as claimed in claim 1, wherein an air stream that during travel of the motor vehicle is created above the roof of the motor vehicle generates a low-pressure region, wherein the signaling system is disposed on the roof within the low-pressure region. 3. The signaling system as claimed in claim 2, wherein said second housing side corresponds to the suction side of a NACA profile. 4. The signaling system as claimed in claim 3, wherein said second housing side has a transparent rough layer that is configured for reducing lift. 5. The signaling system as claimed in claim 4, wherein said transparent rough layer has a Reynolds number of greater than 300,000. 6. 
The signaling system as claimed in claim 5, wherein said first housing side has an elastic layer that is configured for matching a shape of the roof. 7. The signaling system as claimed in claim 6, wherein said first housing side has a non-woven material layer that is configured for protecting paintwork of the roof. 8. The signaling system as claimed in claim 7, wherein a spacing of 0.45 m to 0.60 m is maintained between the housing and a windshield of said motor vehicle. 9. The signaling system as claimed in claim 8, wherein the spacing between the housing and the windshield is dependent on a vehicle type and on an angle between the windshield and a mean roof profile. 10. The signaling system as claimed in claim 9, wherein the signaling unit has an emergency-vehicle light unit and a tone-sequence horn. 11. The signaling system of claim 10, wherein the housing has end sides with a fastening device that is configured for fastening the signaling system to the roof of the motor vehicle. 12. A motor vehicle having the signaling system as claimed in claim 1. 13. A motor vehicle having the signaling system as claimed in claim 3. 14. A motor vehicle having the signaling system as claimed in claim 5. 15. A motor vehicle having the signaling system as claimed in claim 7. 16. A motor vehicle having the signaling system as claimed in claim 9. 17. A motor vehicle having the signaling system as claimed in claim 11.
2,600
10,389
10,389
15,607,508
2,616
A companion object to a media player, such as a video player, is responsive to an event associated with the video player. The event may be associated with the data stream displayed by the video player. The event may be associated with an object displayed by the video player. The companion object is displayed outside the display layout of the video player. The companion object and the video player may be displayed by a web browser in a web page. The companion object and the video player may be executed in a securely separated manner.
1-20. (canceled) 21. A method comprising: displaying in a first display region a video, wherein the video is displayed using a media player, wherein the media player is executed by a virtual machine, wherein said displaying comprises displaying in the media player the video and an animated interactive object as an overlay over the video that is visible to a viewer of the video without prior interaction therewith; in response to an interaction by a user with the interactive object, displaying in a second display region, a graphical animation, wherein the graphical animation is configured to be executed separately and independently from the media player in a separate sandbox, while maintaining synchronization between the graphical animation and the video displayed by the video media player; and after said displaying the graphical animation of the movement, displaying an advertisement in the second display region. 22. The method of claim 21 further comprises: obtaining the video from a video storage; and obtaining the animated interactive object from an object storage. 23. The method of claim 22, wherein said obtaining the interactive object comprises selecting the animated interactive object from a plurality of objects. 24. The method of claim 21 further comprises displaying coordinated sub-animations in the first display region and in the second display region. 25. The method of claim 21, wherein the second display region is used to display graphical content prior to said displaying the graphical animation. 26. The method of claim 21, wherein the graphical animation is synchronized, at least in part, with a tempo of a soundtrack of the video. 27. The method of claim 21, wherein the animated interactive object is a representation of a physical object shown in the video and located in a location corresponding to the physical object. 28. 
A method comprising: provisioning of an animated interactive object to be displayed as an overlay over a video content that is being displayed to a user, wherein the video content is being displayed in a first display region by a media player that is executed by a virtual machine, wherein the animated interactive object is visible to the user without prior interaction therewith; wherein the animated interactive object is configured to respond to an interaction by the user, wherein in response to a detected interaction, a graphical animation is displayed in a second display region, wherein the graphical animation is configured for being executed separately and independently from the media player in a separate sandbox, while maintaining synchronization between the graphical animation and the video content being displayed by the media player; and wherein following the display of the graphical animation, an advertisement is displayed in the second display region. 29. The method of claim 28, wherein the animated interactive object is a representation of a physical object shown in the video content and located in a location corresponding to the physical object. 30. The method of claim 28 further comprising: matching the advertisement to the video content and the user, wherein the animated interactive object is selected from a repository in accordance with the advertisement. 31. The method of claim 28 further comprising: matching the advertisement to the video content and the user, wherein the animated interactive object is determined based upon the advertisement.
A companion object to a media player, such as a video player, is responsive to an event associated with the video player. The event may be associated with the data stream displayed by the video player. The event may be associated with an object displayed by the video player. The companion object is displayed outside the display layout of the video player. The companion object and the video player may be displayed by a web browser in a web page. The companion object and the video player may be executed in a securely separated manner.1-20. (canceled) 21. A method comprising: displaying in a first display region a video, wherein the video is displayed using a media player, wherein the media player is executed by a virtual machine, wherein said displaying comprises displaying in the media player the video and an animated interactive object as an overlay over the video that is visible to a viewer of the video without prior interaction therewith; in response to an interaction by a user with the interactive object, displaying in a second display region, a graphical animation, wherein the graphical animation is configured to be executed separately and independently from the media player in a separate sandbox, while maintaining synchronization between the graphical animation and the video displayed by the video media player; and after said displaying the graphical animation of the movement, displaying an advertisement in the second display region. 22. The method of claim 21 further comprises: obtaining the video from a video storage; and obtaining the animated interactive object from an object storage. 23. The method of claim 22, wherein said obtaining the interactive object comprises selecting the animated interactive object from a plurality of objects. 24. The method of claim 21 further comprises displaying coordinated sub-animations in the first display region and in the second display region. 25. 
The method of claim 21, wherein the second display region is used to display graphical content prior to said displaying the graphical animation. 26. The method of claim 21, wherein the graphical animation is synchronized, at least in part, with a tempo of a soundtrack of the video. 27. The method of claim 21, wherein the animated interactive object is a representation of a physical object shown in the video and located in a location corresponding to the physical object. 28. A method comprising: provisioning of an animated interactive object to be displayed as an overlay over a video content that is being displayed to a user, wherein the video content is being displayed in a first display region by a media player that is executed by a virtual machine, wherein the animated interactive object is visible to the user without prior interaction therewith; wherein the animated interactive object is configured to respond to an interaction by the user, wherein in response to a detected interaction, a graphical animation is displayed in a second display region, wherein the graphical animation is configured for being executed separately and independently from the media player in a separate sandbox, while maintaining synchronization between the graphical animation and the video content being displayed by the media player; and wherein following the display of the graphical animation, an advertisement is displayed in the second display region. 29. The method of claim 28, wherein the animated interactive object is a representation of a physical object shown in the video content and located in a location corresponding to the physical object. 30. The method of claim 28 further comprising: matching the advertisement to the video content and the user, wherein the animated interactive object is selected from a repository in accordance with the advertisement. 31. 
The method of claim 28 further comprising: matching the advertisement to the video content and the user, wherein the animated interactive object is determined based upon the advertisement.
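The claims above hinge on a graphical animation that runs separately and independently from the media player (in its own sandbox) while staying synchronized with the video being played. A minimal sketch of one way that synchronization could work, assuming the sandboxed animation can poll the player's reported playback position; all class and method names here are invented for illustration, not taken from the patent:

```python
class VideoPlayer:
    """Stands in for the media player executed by its own virtual machine."""

    def __init__(self):
        self.position_ms = 0

    def advance(self, delta_ms):
        self.position_ms += delta_ms


class SandboxedAnimation:
    """Animation executed independently of the player; it only reads the
    player's reported position to stay synchronized with the video."""

    def __init__(self, start_ms, duration_ms):
        self.start_ms = start_ms
        self.duration_ms = duration_ms

    def progress(self, player_position_ms):
        # Progress through the animation window, clamped to [0.0, 1.0].
        elapsed = player_position_ms - self.start_ms
        return max(0.0, min(1.0, elapsed / self.duration_ms))


player = VideoPlayer()
animation = SandboxedAnimation(start_ms=1000, duration_ms=2000)

player.advance(1500)
print(animation.progress(player.position_ms))  # 0.25
player.advance(3000)
print(animation.progress(player.position_ms))  # 1.0 (finished; the ad could now be shown)
```

Because the animation only consumes the player's clock, neither component calls into the other, which is one plausible reading of the "securely separated" execution the claims describe.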
2,600
10,390
10,390
15,694,733
2,657
A speech translation system and methods for cross-lingual communication that enable users to easily improve and customize the content and usage of the system. The methods include, in response to receiving an utterance including a first term associated with a field, translating the utterance into a second language. In response to receiving an indication to add the first term associated with the field to a first recognition lexicon, adding the first term associated with the field and the determined translation to a first machine translation module and to a shared database for a community associated with the field of the first term associated with the field, wherein the first term associated with the field added to the shared database is accessible by the community.
1. A method comprising: receiving, from a user, an utterance in a first language, the utterance including a first term; determining word class information for the first term; determining an intra-class probability for the first term, the intra-class probability reflecting a relevance of the first term to the user relative to other terms in a lexicon of the user sharing the word class information; and adding the first term, the word class information, and the intra-class probability to the lexicon of the user. 2. The method of claim 1, wherein the intra-class probability is determined based on when the utterance was received from the user. 3. The method of claim 1, wherein the intra-class probability is determined based on reference information about the first term. 4. The method of claim 3, wherein the first term is a name of a city and the reference information is a population of the city. 5. The method of claim 3, wherein the reference information is one or more references to the first term in media. 6. The method of claim 1, wherein the intra-class probability is determined based on one or more activities of the user. 7. The method of claim 6, wherein the one or more activities includes a current location of the user. 8. The method of claim 7, wherein the first term is a place and the intra-class probability is determined based on a distance between the place and the current location of the user. 9. The method of claim 7, wherein the intra-class probability is determined based on a correlation between the first term and the current location of the user. 10. The method of claim 6, wherein the one or more activities includes past use of the first term by the user. 11. The method of claim 1, wherein the intra-class probability decays over time. 12. The method of claim 1, further comprising: updating the intra-class probability responsive to new information. 13. The method of claim 12, wherein the new information is an updated location of the user. 14. 
The method of claim 1, further comprising: translating the first term from the first language to a second language, wherein the translated first term is also added to the lexicon of the user. 15. A non-transitory computer-readable medium comprising instructions that when executed by a processor cause the processor to perform steps comprising: receiving, from a user, an utterance in a first language, the utterance including a first term; determining word class information for the first term; determining an intra-class probability for the first term, the intra-class probability reflecting a relevance of the first term to the user relative to other terms in a lexicon of the user sharing the word class information; and adding the first term, the word class information, and the intra-class probability to the lexicon of the user. 16. The non-transitory computer-readable medium of claim 15, wherein the intra-class probability is determined based on when the utterance was received from the user. 17. The non-transitory computer-readable medium of claim 15, wherein the first term is a name of a city and the intra-class probability is determined based on a population of the city. 18. The non-transitory computer-readable medium of claim 15, wherein the intra-class probability is determined based on a current location of the user. 19. The non-transitory computer-readable medium of claim 18, wherein the first term is a place and the intra-class probability is determined based on a distance between the place and the current location of the user. 20. The non-transitory computer-readable medium of claim 15, further comprising: translating the first term from the first language to a second language, wherein the translated first term is also added to the lexicon of the user.
A speech translation system and methods for cross-lingual communication that enable users to easily improve and customize the content and usage of the system. The methods include, in response to receiving an utterance including a first term associated with a field, translating the utterance into a second language. In response to receiving an indication to add the first term associated with the field to a first recognition lexicon, adding the first term associated with the field and the determined translation to a first machine translation module and to a shared database for a community associated with the field of the first term associated with the field, wherein the first term associated with the field added to the shared database is accessible by the community. 1. A method comprising: receiving, from a user, an utterance in a first language, the utterance including a first term; determining word class information for the first term; determining an intra-class probability for the first term, the intra-class probability reflecting a relevance of the first term to the user relative to other terms in a lexicon of the user sharing the word class information; and adding the first term, the word class information, and the intra-class probability to the lexicon of the user. 2. The method of claim 1, wherein the intra-class probability is determined based on when the utterance was received from the user. 3. The method of claim 1, wherein the intra-class probability is determined based on reference information about the first term. 4. The method of claim 3, wherein the first term is a name of a city and the reference information is a population of the city. 5. The method of claim 3, wherein the reference information is one or more references to the first term in media. 6. The method of claim 1, wherein the intra-class probability is determined based on one or more activities of the user. 7. 
The method of claim 6, wherein the one or more activities includes a current location of the user. 8. The method of claim 7, wherein the first term is a place and the intra-class probability is determined based on a distance between the place and the current location of the user. 9. The method of claim 7, wherein the intra-class probability is determined based on a correlation between the first term and the current location of the user. 10. The method of claim 6, wherein the one or more activities includes past use of the first term by the user. 11. The method of claim 1, wherein the intra-class probability decays over time. 12. The method of claim 1, further comprising: updating the intra-class probability responsive to new information. 13. The method of claim 12, wherein the new information is an updated location of the user. 14. The method of claim 1, further comprising: translating the first term from the first language to a second language, wherein the translated first term is also added to the lexicon of the user. 15. A non-transitory computer-readable medium comprising instructions that when executed by a processor cause the processor to perform steps comprising: receiving, from a user, an utterance in a first language, the utterance including a first term; determining word class information for the first term; determining an intra-class probability for the first term, the intra-class probability reflecting a relevance of the first term to the user relative to other terms in a lexicon of the user sharing the word class information; and adding the first term, the word class information, and the intra-class probability to the lexicon of the user. 16. The non-transitory computer-readable medium of claim 15, wherein the intra-class probability is determined based on when the utterance was received from the user. 17. 
The non-transitory computer-readable medium of claim 15, wherein the first term is a name of a city and the intra-class probability is determined based on a population of the city. 18. The non-transitory computer-readable medium of claim 15, wherein the intra-class probability is determined based on a current location of the user. 19. The non-transitory computer-readable medium of claim 18, wherein the first term is a place and the intra-class probability is determined based on a distance between the place and the current location of the user. 20. The non-transitory computer-readable medium of claim 15, further comprising: translating the first term from the first language to a second language, wherein the translated first term is also added to the lexicon of the user.
2,600
10,391
10,391
15,136,232
2,689
A home security and automation system includes multi-sensor sensor devices that communicate with a base unit. Each sensor device is a self-identifying multi-function device that includes a housing, multiple sensors disposed in the housing, and a communication device disposed in the housing that permits communication between the sensor device and the base unit. Each sensor device is configured to be operated in multiple operating modes, and an active operating mode of each sensor device, corresponding to one operating mode selected from the several possible operating modes, is determined by the orientation in space of the sensor device housing. The base unit is configured to receive from each sensor device information identifying the active operating mode.
1. A sensor device comprising a housing, an attribute sensor associated with the housing, the attribute sensor configured to detect an orientation of the housing, functional sensors associated with the housing, the functional sensors configured to provide multiple possible sensor device operating modes, and a communication device associated with the housing, the communication device configured to communicate output from at least one of the attribute sensor and the functional sensors to a remote controller, wherein the sensor device is configured to be operated in an active operating mode corresponding to one operating mode selected from the multiple possible sensor device operating modes, and the active operating mode is determined by the orientation of the housing. 2. The sensor device of claim 1, wherein the housing is multi-sided and a unique functional sensor is associated with each side of the housing. 3. The sensor device of claim 1, wherein the functional sensors are configurable for use individually and in combination to provide the multiple possible sensor device operating modes. 4. The sensor device of claim 1, wherein a first sensor function is associated with a first orientation of the housing, and a second sensor function is associated with a second orientation of the housing, where the second orientation is different than the first orientation. 5. The sensor device of claim 4, wherein a combination of functional sensors is associated with the first orientation of the housing. 6. The sensor device of claim 1, wherein the housing has an outer surface that includes a first region and a second region that is spaced apart from the first region, and the first region and the second region are configured to permit the housing to be supported in a predetermined orientation. 7. 
The sensor device of claim 6, wherein the housing is a polyhedron and the first region corresponds to a first side of the polyhedron, and the second region corresponds to a second side of the polyhedron. 8. The sensor device of claim 1, wherein the attribute sensor is an accelerometer. 9. The sensor device of claim 1, wherein the attribute sensor is configured to detect an orientation of the housing with respect to one of space and a support surface. 10. The sensor device of claim 1, wherein the functional sensors are selected from the group comprising a proximity sensor, a visible light sensor, a temperature sensor, a pressure sensor, a motion sensor, and a contact sensor. 11. The sensor device of claim 1, wherein the sensor device includes a microcontroller that is configured to determine at least one of an orientation of the housing and a sensor function of the sensor device. 12. The sensor device of claim 1, wherein at least one of an orientation of the housing and a sensor function of the sensor device is determined by a remote system controller based on a signal emitted by the sensor device. 13. A security system comprising a base unit including a controller and a transceiver, and a sensor device, the sensor device having a housing, multiple sensors associated with the housing, and a communication device associated with the housing that permits communication with the base unit, wherein the sensor device is operable in multiple operating modes, and an active operating mode of the sensor device, corresponding to one operating mode selected from the multiple operating modes, is determined by the orientation in space of the sensor device housing, and the base unit is configured to receive from the sensor device information corresponding to the active operating mode. 14. The security system of claim 13, wherein information corresponding to the active operating mode includes at least one of sensor device orientation, sensor device function, and sensor output. 15. 
The security system of claim 13, wherein the base unit is configured to configure the security system to operate based on the active operating mode of the sensor device. 16. The security system of claim 13, wherein the sensor device includes a unique identification number, and is configured to communicate the active operating mode and the unique identification number to the base unit. 17. The security system of claim 13, wherein the controller is configured to receive an output signal from the sensor device and perform a notification function corresponding to the output signal. 18. The security system of claim 13, wherein the sensor device comprises a first sensor device having a first plurality of sensors, and a second sensor device having a second plurality of sensors that are different from the first plurality of sensors. 19. A security system comprising a base unit including a controller and a first communication device, a sensor device, the sensor device including a housing, multiple sensors associated with the housing including a first sensor that is configured to determine an orientation of the housing, and a second communication device associated with the housing that permits communication with the first communication device, wherein a location of the sensor device within a region being secured by the security system is determined by the orientation in space of the sensor device housing as determined by the first sensor, the sensor device is configured to be operated in a predetermined operating mode based on the location of the sensor device within a region being secured by the security system, and the base unit is configured to receive from the sensor device information identifying the location of the sensor device within a region being secured by the security system, and to configure the security system to operate in the predetermined operating mode. 20. 
The security system of claim 19, wherein the base unit is configured to receive from the sensor device information identifying the location of the sensor device within a region being secured by the security system, and to configure the security system to operate in the predetermined operating mode based on the information and on a number of finger taps detected by the sensor device.
A home security and automation system includes multi-sensor sensor devices that communicate with a base unit. Each sensor device is a self-identifying multi-function device that includes a housing, multiple sensors disposed in the housing, and a communication device disposed in the housing that permits communication between the sensor device and the base unit. Each sensor device is configured to be operated in multiple operating modes, and an active operating mode of each sensor device, corresponding to one operating mode selected from the several possible operating modes, is determined by the orientation in space of the sensor device housing. The base unit is configured to receive from each sensor device information identifying the active operating mode. 1. A sensor device comprising a housing, an attribute sensor associated with the housing, the attribute sensor configured to detect an orientation of the housing, functional sensors associated with the housing, the functional sensors configured to provide multiple possible sensor device operating modes, and a communication device associated with the housing, the communication device configured to communicate output from at least one of the attribute sensor and the functional sensors to a remote controller, wherein the sensor device is configured to be operated in an active operating mode corresponding to one operating mode selected from the multiple possible sensor device operating modes, and the active operating mode is determined by the orientation of the housing. 2. The sensor device of claim 1, wherein the housing is multi-sided and a unique functional sensor is associated with each side of the housing. 3. The sensor device of claim 1, wherein the functional sensors are configurable for use individually and in combination to provide the multiple possible sensor device operating modes. 4. 
The sensor device of claim 1, wherein a first sensor function is associated with a first orientation of the housing, and a second sensor function is associated with a second orientation of the housing, where the second orientation is different than the first orientation. 5. The sensor device of claim 4, wherein a combination of functional sensors is associated with the first orientation of the housing. 6. The sensor device of claim 1, wherein the housing has an outer surface that includes a first region and a second region that is spaced apart from the first region, and the first region and the second region are configured to permit the housing to be supported in a predetermined orientation. 7. The sensor device of claim 6, wherein the housing is a polyhedron and the first region corresponds to a first side of the polyhedron, and the second region corresponds to a second side of the polyhedron. 8. The sensor device of claim 1, wherein the attribute sensor is an accelerometer. 9. The sensor device of claim 1, wherein the attribute sensor is configured to detect an orientation of the housing with respect to one of space and a support surface. 10. The sensor device of claim 1, wherein the functional sensors are selected from the group comprising a proximity sensor, a visible light sensor, a temperature sensor, a pressure sensor, a motion sensor, and a contact sensor. 11. The sensor device of claim 1, wherein the sensor device includes a microcontroller that is configured to determine at least one of an orientation of the housing and a sensor function of the sensor device. 12. The sensor device of claim 1, wherein at least one of an orientation of the housing and a sensor function of the sensor device is determined by a remote system controller based on a signal emitted by the sensor device. 13. 
A security system comprising a base unit including a controller and a transceiver, and a sensor device, the sensor device having a housing, multiple sensors associated with the housing, and a communication device associated with the housing that permits communication with the base unit, wherein the sensor device is operable in multiple operating modes, and an active operating mode of the sensor device, corresponding to one operating mode selected from the multiple operating modes, is determined by the orientation in space of the sensor device housing, and the base unit is configured to receive from the sensor device information corresponding to the active operating mode. 14. The security system of claim 13, wherein information corresponding to the active operating mode includes at least one of sensor device orientation, sensor device function, and sensor output. 15. The security system of claim 13, wherein the base unit is configured to configure the security system to operate based on the active operating mode of the sensor device. 16. The security system of claim 13, wherein the sensor device includes a unique identification number, and is configured to communicate the active operating mode and the unique identification number to the base unit. 17. The security system of claim 13, wherein the controller is configured to receive an output signal from the sensor device and perform a notification function corresponding to the output signal. 18. The security system of claim 13, wherein the sensor device comprises a first sensor device having a first plurality of sensors, and a second sensor device having a second plurality of sensors that are different from the first plurality of sensors. 19. 
A security system comprising a base unit including a controller and a first communication device, a sensor device, the sensor device including a housing, multiple sensors associated with the housing including a first sensor that is configured to determine an orientation of the housing, and a second communication device associated with the housing that permits communication with the first communication device, wherein a location of the sensor device within a region being secured by the security system is determined by the orientation in space of the sensor device housing as determined by the first sensor, the sensor device is configured to be operated in a predetermined operating mode based on the location of the sensor device within a region being secured by the security system, and the base unit is configured to receive from the sensor device information identifying the location of the sensor device within a region being secured by the security system, and to configure the security system to operate in the predetermined operating mode. 20. The security system of claim 19, wherein the base unit is configured to receive from the sensor device information identifying the location of the sensor device within a region being secured by the security system, and to configure the security system to operate in the predetermined operating mode based on the information and on a number of finger taps detected by the sensor device.
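The core mechanism in these claims is selecting the active operating mode from the orientation of the housing, as reported by an attribute sensor such as an accelerometer. A minimal sketch of that selection for a cube-shaped housing, assuming the gravity vector identifies which face points up; the mode names are invented for illustration:

```python
# Hypothetical face-to-mode table for a six-sided housing (claim 2 pairs a
# unique function with each side); these mode names are assumptions.
MODES_BY_FACE = {
    '+z': 'motion_sensing',
    '-z': 'standby',
    '+x': 'temperature_logging',
    '-x': 'contact_monitoring',
    '+y': 'light_sensing',
    '-y': 'proximity_sensing',
}

def dominant_face(accel):
    """Return which face of the housing points up, given an (x, y, z)
    gravity vector from the attribute sensor (accelerometer)."""
    axis = max(range(3), key=lambda i: abs(accel[i]))
    sign = '+' if accel[axis] > 0 else '-'
    return sign + 'xyz'[axis]

def active_mode(accel):
    """The active operating mode is determined by the housing orientation."""
    return MODES_BY_FACE[dominant_face(accel)]

print(active_mode((0.02, -0.05, 0.98)))   # resting flat: motion_sensing
print(active_mode((-0.99, 0.03, 0.10)))   # on its side: contact_monitoring
```

The device would then report the selected mode (plus its unique identification number, per claim 16) to the base unit over its communication device.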
2,600
10,392
10,392
15,219,432
2,672
Disclosed are various embodiments for capturing an image within the context of a document. The computing device identifies an image field within the document displayed in a user interface (UI). The image field is operable to display live input received from a camera. In response to receiving a first input from a user selecting the image field of the document, the UI of the computing device displays the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field. Controls are provided by the computing device with which the user can adjust various characteristics of the live input. In response to receiving a second input from the user, the computing device captures the image from the live input displayed within the image field of the document.
1. A method for capturing an image within a context of a document, the method comprising: identifying an image field within the document displayed in a graphical user interface (GUI) of a computing device, the image field operable to display live input received from a camera accessible by the computing device; in response to receiving a first input from a user selecting the image field of the document, displaying, within the GUI of the computing device, the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field; displaying, within the GUI of the computing device, controls with which the user can adjust scaling of the live input from the camera displayed within the document; and in response to receiving a second input from the user, capturing by the camera and storing, within a memory of the computing device, the image from the live input displayed within the image field of the document, wherein the image is stored as a component part of the document. 2. The method of claim 1, further comprising displaying, in the GUI of the computing device, the document including displaying the captured image in the image field of the document, the document stored as a file that includes the image. 3. The method of claim 1, wherein the controls further allow the user to perform at least one of the following: adjust lighting used, specify whether video or at least one still image should be captured, select a different camera, or adjust a size of the image field of the document. 4. The method of claim 1, wherein the document further comprises text strings, displaying the image field of the document in the context of other portions of the document comprises displaying the image field of the document among the text strings. 5. 
The method of claim 1, further comprising, in response to receiving additional input from the user deselecting the image field of the document, ceasing display of the live input from the camera within the image field of the document. 6. The method of claim 1, wherein the document comprises a plurality of image fields that are selectable, the computing device receiving additional input from the user specifying ones of the plurality of image fields of the document for which the image should be inserted. 7. The method of claim 1, wherein said identifying the image field of the document comprises detecting a quadrilateral frame within the document that meets a size threshold. 8. The method of claim 1, wherein the image that is captured comprises a sequence of images, the computing device iterating through displaying the sequence of images in the image field of the document when the document is displayed. 9. The method of claim 1, wherein the computing device is a desktop computer attached to the camera, and the first and second inputs are received via a keyboard or mouse. 10. 
A non-transitory computer-readable medium embodying a program for capturing an image within a context of a document, the program executable in a computing device, comprising: code that identifies an image field within the document displayed in a user interface of a computing device, the image field identified based on detecting a quadrilateral frame that meets a size threshold, the image field operable to display live input received from a camera accessible by the computing device; code that inserts one or more additional image fields within the document in response to a first input received from a user via the user interface of the computing device; code that in response to receiving a second input from the user selecting the image field of the document, displays, within the user interface of the computing device, the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field; code that provides within the user interface of the computing device, controls with which the user can adjust settings for the live input from the camera displayed within the document; and code that in response to receiving a third input from the user, captures by the camera and storing, within a memory of the computing device, the image from the live input displayed within the image field of the document thereby avoiding editing of the image post-capture and adjusting the image in the image field post-capture, wherein the image is stored as a component part of the document. 11. The non-transitory computer-readable medium of claim 10, wherein the controls further allow the user to perform at least one of the following: adjust lighting used, specify whether video or at least one still image should be captured, select a different camera, or adjust a size of the image field of the document. 12. 
The non-transitory computer-readable medium of claim 10, wherein the code that displays the document includes displaying the captured image in the image field of the document, the document stored as a file that includes the image. 13. The non-transitory computer-readable medium of claim 10, wherein the image comprises at least one still image or a video. 14. The non-transitory computer-readable medium of claim 10, wherein the program further comprises code that in response to receiving additional input from the user deselecting the image field of the document, ceases display of the live input from the camera within the image field of the document. 15. A computing device, comprising: a network interface for communicating via a network accessible to the computing device; a camera accessible to the computing device via an I/O interface; a memory for storing an application, wherein the application comprises computer-implemented instructions for capturing an image within a context of a document; and a processor for executing the computer-implemented instructions of the application and thereby causing the computing device to: identify an image field within the document displayed in a user interface of the computing device, the image field operable to display live input received from the camera accessible to the computing device; in response to receiving a first input from a user selecting the image field of the document, display, within the user interface of the computing device, the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field; display, within the user interface of the computing device, controls with which the user can adjust settings for the live input from the camera displayed within the document; and in response to receiving a second input from the user, capture by the camera and storing, within the memory, the image from the live input displayed within the 
image field of the document, wherein the image is stored as a component part of the document. 16. The computing device of claim 15, wherein the document comprises a plurality of image fields that are selectable, the computing device receiving additional input from the user specifying ones of the plurality of image fields of the document for which the image should be inserted. 17. The computing device of claim 15, wherein the application further causes the computing device to display the document, including displaying the captured image in the image field of the document, the document stored as a file that includes the image. 18. The computing device of claim 15, wherein the image that is captured comprises a sequence of images, the computing device iterating through displaying the sequence of images in the image field of the document when the document is displayed. 19. The computing device of claim 15, wherein said identifying the image field of the document comprises detecting a quadrilateral frame within the document that meets a size threshold. 20. The computing device of claim 15, wherein the computing device is a mobile device, and the first and second inputs are received via a touch-sensitive display.
Disclosed are various embodiments for capturing an image within the context of a document. The computing device identifies an image field within the document displayed in a user interface (UI). The image field is operable to display live input received from a camera. In response to receiving a first input from a user selecting the image field of the document, the UI of the computing device displays the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field. Controls are provided by the computing device with which the user can adjust various characteristics of the live input. In response to receiving a second input from the user, the computing device captures the image from the live input displayed within the image field of the document.1. A method for capturing an image within a context of a document, the method comprising: identifying an image field within the document displayed in a graphical user interface (GUI) of a computing device, the image field operable to display live input received from a camera accessible by the computing device; in response to receiving a first input from a user selecting the image field of the document, displaying, within the GUI of the computing device, the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field; displaying, within the GUI of the computing device, controls with which the user can adjust scaling of the live input from the camera displayed within the document; and in response to receiving a second input from the user, capturing by the camera and storing, within a memory of the computing device, the image from the live input displayed within the image field of the document, wherein the image is stored as a component part of the document. 2. 
The method of claim 1, further comprising displaying, in the GUI of the computing device, the document including displaying the captured image in the image field of the document, the document stored as a file that includes the image. 3. The method of claim 1, wherein the controls further allow the user to perform at least one of the following: adjust lighting used, specify whether video or at least one still image should be captured, select a different camera, or adjust a size of the image field of the document. 4. The method of claim 1, wherein the document further comprises text strings, displaying the image field of the document in the context of other portions of the document comprises displaying the image field of the document among the text strings. 5. The method of claim 1, further comprising, in response to receiving additional input from the user deselecting the image field of the document, ceasing display of the live input from the camera within the image field of the document. 6. The method of claim 1, wherein the document comprises a plurality of image fields that are selectable, the computing device receiving additional input from the user specifying ones of the plurality of image fields of the document for which the image should be inserted. 7. The method of claim 1, wherein said identifying the image field of the document comprises detecting a quadrilateral frame within the document that meets a size threshold. 8. The method of claim 1, wherein the image that is captured comprises a sequence of images, the computing device iterating through displaying the sequence of images in the image field of the document when the document is displayed. 9. The method of claim 1, wherein the computing device is a desktop computer attached to the camera, and the first and second inputs are received via a keyboard or mouse. 10. 
A non-transitory computer-readable medium embodying a program for capturing an image within a context of a document, the program executable in a computing device, comprising: code that identifies an image field within the document displayed in a user interface of a computing device, the image field identified based on detecting a quadrilateral frame that meets a size threshold, the image field operable to display live input received from a camera accessible by the computing device; code that inserts one or more additional image fields within the document in response to a first input received from a user via the user interface of the computing device; code that in response to receiving a second input from the user selecting the image field of the document, displays, within the user interface of the computing device, the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field; code that provides within the user interface of the computing device, controls with which the user can adjust settings for the live input from the camera displayed within the document; and code that in response to receiving a third input from the user, captures by the camera and storing, within a memory of the computing device, the image from the live input displayed within the image field of the document thereby avoiding editing of the image post-capture and adjusting the image in the image field post-capture, wherein the image is stored as a component part of the document. 11. The non-transitory computer-readable medium of claim 10, wherein the controls further allow the user to perform at least one of the following: adjust lighting used, specify whether video or at least one still image should be captured, select a different camera, or adjust a size of the image field of the document. 12. 
The non-transitory computer-readable medium of claim 10, wherein the code that displays the document includes displaying the captured image in the image field of the document, the document stored as a file that includes the image. 13. The non-transitory computer-readable medium of claim 10, wherein the image comprises at least one still image or a video. 14. The non-transitory computer-readable medium of claim 10, wherein the program further comprises code that in response to receiving additional input from the user deselecting the image field of the document, ceases display of the live input from the camera within the image field of the document. 15. A computing device, comprising: a network interface for communicating via a network accessible to the computing device; a camera accessible to the computing device via an I/O interface; a memory for storing an application, wherein the application comprises computer-implemented instructions for capturing an image within a context of a document; and a processor for executing the computer-implemented instructions of the application and thereby causing the computing device to: identify an image field within the document displayed in a user interface of the computing device, the image field operable to display live input received from the camera accessible to the computing device; in response to receiving a first input from a user selecting the image field of the document, display, within the user interface of the computing device, the live input from the camera within the image field of the document in the context of other portions of the document displayed outside of the image field; display, within the user interface of the computing device, controls with which the user can adjust settings for the live input from the camera displayed within the document; and in response to receiving a second input from the user, capture by the camera and storing, within the memory, the image from the live input displayed within the 
image field of the document, wherein the image is stored as a component part of the document. 16. The computing device of claim 15, wherein the document comprises a plurality of image fields that are selectable, the computing device receiving additional input from the user specifying ones of the plurality of image fields of the document for which the image should be inserted. 17. The computing device of claim 15, wherein the application further causes the computing device to display the document, including displaying the captured image in the image field of the document, the document stored as a file that includes the image. 18. The computing device of claim 15, wherein the image that is captured comprises a sequence of images, the computing device iterating through displaying the sequence of images in the image field of the document when the document is displayed. 19. The computing device of claim 15, wherein said identifying the image field of the document comprises detecting a quadrilateral frame within the document that meets a size threshold. 20. The computing device of claim 15, wherein the computing device is a mobile device, and the first and second inputs are received via a touch-sensitive display.
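The record above (claims 7 and 19 in particular) identifies an image field by detecting a quadrilateral frame within the document that meets a size threshold. Below is a minimal, hypothetical Python sketch of that selection step; the `Frame` type, the `identify_image_fields` helper, and the threshold value are illustrative assumptions, not part of the patent record.

```python
# Hypothetical sketch of the image-field identification step in the claims:
# a frame qualifies as an image field when it is a quadrilateral and its
# area meets a minimum size threshold. All names and values are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    x: int          # top-left corner, document coordinates
    y: int
    width: int
    height: int
    sides: int      # number of sides of the detected shape

def identify_image_fields(frames: List[Frame], min_area: int = 10_000) -> List[Frame]:
    """Return the frames that qualify as selectable image fields:
    quadrilaterals whose area meets the size threshold (assumed value)."""
    return [
        f for f in frames
        if f.sides == 4 and f.width * f.height >= min_area
    ]

# Example: only the large quadrilateral qualifies.
candidates = [
    Frame(10, 10, 200, 100, sides=4),   # 20,000 px^2 -> image field
    Frame(50, 50, 50, 50, sides=4),     # 2,500 px^2  -> too small
    Frame(30, 30, 300, 300, sides=3),   # triangle    -> not a quadrilateral
]
fields = identify_image_fields(candidates)
```

In a real implementation the candidate frames would come from a shape detector over the rendered document (e.g. contour extraction); the claims only require the quadrilateral test and the size threshold shown here.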
2,600
10,393
10,393
14,991,759
2,663
A method and apparatus to automatically control the timing of an image acquisition by an imaging system in developing a correlation model of movement of a target within a patient.
1. An apparatus, comprising: a data storage device to store a plurality of displacement points of an external marker indicative of the motion of the external marker during a periodic cycle; and a processing device coupled to the data storage device, the processing device to determine a specified time in the periodic cycle, which corresponds to a first phase of the periodic cycle, in which to acquire an image of a target based on the plurality of displacement points, and to transmit a signal to automatically trigger an imaging system to acquire the image of the target at the first phase of the periodic cycle. 2. The apparatus of claim 1, wherein the processing device is configured to select the first phase of the periodic cycle, to automatically determine whether the target is in the first phase of the periodic cycle, and to send the signal to automatically trigger the imaging system to acquire the image of the target when the target is in the first phase of the periodic cycle. 3. The apparatus of claim 1, wherein the processing device is configured to allow a user to select the first phase of the periodic cycle, to automatically determine whether the target is in the first phase of the periodic cycle, and to send the signal to automatically trigger the imaging system to acquire the image of the target when the target is in the first phase of the periodic cycle. 4. The apparatus of claim 1, further comprising the imaging system coupled to the processing device, wherein the imaging system is configured to receive the signal from the processing device, and to acquire the image of the target when the imaging system receives the signal. 5. 
The apparatus of claim 1, wherein the processing device is configured to determine a second specified time in the periodic cycle at which to acquire a second image of the target that corresponds to a second phase of the periodic cycle, and to trigger the imaging system to acquire the second image of the target at the second phase of the periodic cycle. 6. The apparatus of claim 5, wherein the data storage device is configured to store the image of the target and the second image of the target. 7. The apparatus of claim 5, wherein the processing device is configured to generate a correlation model based on the image, the second image, and the plurality of displacement points. 8. The apparatus of claim 7, wherein the processing device is configured to identify a path of movement of the target to generate the correlation model. 9. The apparatus of claim 1, wherein the processing device comprises: a first interface to send a command to the imaging system to automatically trigger the imaging system to acquire the image of the target at the specified time during the periodic cycle; a second interface to receive the plurality of displacement points of the external marker measured by a motion tracking system. 10. An apparatus, comprising: means for receiving a plurality of data points representative of a corresponding plurality of positions over time of an external marker associated with a patient, wherein the plurality of positions of the external marker defines an external path of movement of the external marker, the external path of movement defining a periodic cycle of the patient; and means for automatically triggering an imaging system to acquire an image of a target within the patient at a specific time in the periodic cycle based on the plurality of data points. 11. 
The apparatus of claim 10, further comprising: means for generating evenly-distributed model points during the periodic cycle using the plurality of data points; and means for generating a correlation model using the evenly-distributed model points. 12. The apparatus of claim 11, wherein the means for automatically triggering reduces a number of images acquired to generate the correlation model. 13. The apparatus of claim 11, wherein the means for automatically triggering reduces an amount of time to generate the correlation model. 14. A system, comprising: a motion tracking system to track motion of an external marker associated with a patient, the motion of the external marker is indicative of a periodic cycle of the patient; an imaging system to acquire an image of a target within the patient; and a target locating system coupled to the motion tracking system and the imaging system, the target locating system to automatically trigger the imaging system to trigger the acquisition of the image at a specified time. 15. The system of claim 14, wherein the target locating system is configured to determine the periodic cycle based on the motion of the external marker, to automatically select a phase of the periodic cycle, to determine when the target is in the selected phase of the periodic cycle, and to automatically trigger image acquisition by the imaging system when the target is in the selected phase of the periodic cycle. 16. The system of claim 14, wherein the target locating system is configured to determine the periodic cycle based on the motion of the external marker, to allow a user to select a phase of the periodic cycle, to determine when the target is in the selected phase of the periodic cycle, and to automatically trigger image acquisition by the imaging system when the target is in the selected phase of the periodic cycle.
A method and apparatus to automatically control the timing of an image acquisition by an imaging system in developing a correlation model of movement of a target within a patient.1. An apparatus, comprising: a data storage device to store a plurality of displacement points of an external marker indicative of the motion of the external marker during a periodic cycle; and a processing device coupled to the data storage device, the processing device to determine a specified time in the periodic cycle, which corresponds to a first phase of the periodic cycle, in which to acquire an image of a target based on the plurality of displacement points, and to transmit a signal to automatically trigger an imaging system to acquire the image of the target at the first phase of the periodic cycle. 2. The apparatus of claim 1, wherein the processing device is configured to select the first phase of the periodic cycle, to automatically determine whether the target is in the first phase of the periodic cycle, and to send the signal to automatically trigger the imaging system to acquire the image of the target when the target is in the first phase of the periodic cycle. 3. The apparatus of claim 1, wherein the processing device is configured to allow a user to select the first phase of the periodic cycle, to automatically determine whether the target is in the first phase of the periodic cycle, and to send the signal to automatically trigger the imaging system to acquire the image of the target when the target is in the first phase of the periodic cycle. 4. The apparatus of claim 1, further comprising the imaging system coupled to the processing device, wherein the imaging system is configured to receive the signal from the processing device, and to acquire the image of the target when the imaging system receives the signal. 5. 
The apparatus of claim 1, wherein the processing device is configured to determine a second specified time in the periodic cycle at which to acquire a second image of the target that corresponds to a second phase of the periodic cycle, and to trigger the imaging system to acquire the second image of the target at the second phase of the periodic cycle. 6. The apparatus of claim 5, wherein the data storage device is configured to store the image of the target and the second image of the target. 7. The apparatus of claim 5, wherein the processing device is configured to generate a correlation model based on the image, the second image, and the plurality of displacement points. 8. The apparatus of claim 7, wherein the processing device is configured to identify a path of movement of the target to generate the correlation model. 9. The apparatus of claim 1, wherein the processing device comprises: a first interface to send a command to the imaging system to automatically trigger the imaging system to acquire the image of the target at the specified time during the periodic cycle; a second interface to receive the plurality of displacement points of the external marker measured by a motion tracking system. 10. An apparatus, comprising: means for receiving a plurality of data points representative of a corresponding plurality of positions over time of an external marker associated with a patient, wherein the plurality of positions of the external marker defines an external path of movement of the external marker, the external path of movement defining a periodic cycle of the patient; and means for automatically triggering an imaging system to acquire an image of a target within the patient at a specific time in the periodic cycle based on the plurality of data points. 11. 
The apparatus of claim 10, further comprising: means for generating evenly-distributed model points during the periodic cycle using the plurality of data points; and means for generating a correlation model using the evenly-distributed model points. 12. The apparatus of claim 11, wherein the means for automatically triggering reduces a number of images acquired to generate the correlation model. 13. The apparatus of claim 11, wherein the means for automatically triggering reduces an amount of time to generate the correlation model. 14. A system, comprising: a motion tracking system to track motion of an external marker associated with a patient, the motion of the external marker is indicative of a periodic cycle of the patient; an imaging system to acquire an image of a target within the patient; and a target locating system coupled to the motion tracking system and the imaging system, the target locating system to automatically trigger the imaging system to trigger the acquisition of the image at a specified time. 15. The system of claim 14, wherein the target locating system is configured to determine the periodic cycle based on the motion of the external marker, to automatically select a phase of the periodic cycle, to determine when the target is in the selected phase of the periodic cycle, and to automatically trigger image acquisition by the imaging system when the target is in the selected phase of the periodic cycle. 16. The system of claim 14, wherein the target locating system is configured to determine the periodic cycle based on the motion of the external marker, to allow a user to select a phase of the periodic cycle, to determine when the target is in the selected phase of the periodic cycle, and to automatically trigger image acquisition by the imaging system when the target is in the selected phase of the periodic cycle.
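The claims in this record track the displacement of an external marker, treat that motion as defining a periodic cycle (e.g. respiration), and automatically trigger image acquisition when the target reaches a selected phase of the cycle. The sketch below is a simplified, assumed implementation of that trigger logic: phase 0 is anchored at local maxima of the marker displacement, and one acquisition is fired per cycle at the selected phase. The function name, the peak-based phase estimate, and the tolerance are all illustrative.

```python
# Assumed sketch of phase-based trigger selection: estimate the periodic
# cycle from marker displacement samples, then pick one sample index per
# cycle at which the estimated phase matches the selected phase.

import math
from typing import List

def trigger_indices(displacement: List[float], target_phase: float,
                    tol: float = 0.02) -> List[int]:
    """Return sample indices at which to trigger image acquisition,
    one per detected cycle, at the selected phase in [0, 1)."""
    # Phase 0 is anchored at local maxima of the marker displacement.
    peaks = [i for i in range(1, len(displacement) - 1)
             if displacement[i - 1] < displacement[i] >= displacement[i + 1]]
    triggers = []
    for a, b in zip(peaks, peaks[1:]):
        period = b - a
        for i in range(a, b):
            phase = (i - a) / period
            if abs(phase - target_phase) < tol:
                triggers.append(i)
                break  # one acquisition per cycle at the selected phase
    return triggers

# Simulated breathing-like marker motion: period of 50 samples.
signal = [math.sin(2 * math.pi * i / 50 + 0.1) for i in range(200)]
fire = trigger_indices(signal, target_phase=0.5)   # -> [37, 87, 137]
```

A production system would estimate phase continuously and in real time rather than retrospectively, but the claim's essential behavior, triggering only when the target is in the selected phase of the periodic cycle, is the same selection shown here.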
2,600
10,394
10,394
15,412,920
2,657
A method is described that processes an audio signal. A discontinuity between a filtered past frame and a filtered current frame of the audio signal is removed using linear predictive filtering.
1. A method for processing an audio signal, the method comprising: using linear predictive filtering for removing a discontinuity between a filtered past frame and a filtered current frame of the audio signal, wherein the method comprises filtering the current frame of the audio signal and removing the discontinuity by modifying a beginning portion of the filtered current frame by a signal acquired by linear predictive filtering a predefined signal with initial states of the linear predictive filter defined on the basis of a last part of the unfiltered past frame filtered using the set of filter parameters for filtering the current frame. 2. The method of claim 1, further comprising estimating the linear predictive filter on the filtered or non-filtered audio signal. 3. The method of claim 2, wherein estimating the linear predictive filter comprises estimating the filter based on the past and/or current frame of the audio signal or based on the past filtered frame of the audio signal using the Levinson-Durbin algorithm. 4. The method of claim 1, wherein the linear predictive filter comprises a linear predictive filter of an audio codec. 5. The method of claim 1, wherein removing the discontinuity comprises processing the beginning portion of the filtered current frame, wherein the beginning portion of the current frame comprises a predefined number of samples being less than or equal to the total number of samples in the current frame, and wherein processing the beginning portion of the current frame comprises subtracting a beginning portion of a zero-input-response (ZIR) from the beginning portion of the filtered current frame. 6. The method of claim 5, comprising filtering the current frame of the audio signal using a non-recursive filter, like a FIR filter, for producing the filtered current frame. 7. 
The method of claim 5, comprising processing the unfiltered current frame of the audio signal on a sample-by-sample basis using a recursive filter, like an IIR filter, and wherein processing a sample of the beginning portion of the current frame comprises: filtering the sample with the recursive filter using the filter parameters of the current frame for producing a filtered sample, and subtracting a corresponding ZIR sample from the filtered sample for producing the corresponding sample of the filtered current frame. 8. The method of claim 7, wherein filtering and subtracting are repeated until the last sample in the beginning portion of the current frame is processed, and wherein the method further comprises filtering the remaining samples in the current frame with the recursive filter using the filter parameters of the current frame. 9. The method of claim 5, comprising generating the ZIR, wherein generating the ZIR comprises: filtering the M last samples of the unfiltered past frame with the filter and the filter parameters used for filtering the current frame for producing a first portion of filtered signal, wherein M is the order of the linear predictive filter, subtracting from the first portion of filtered signal the M last samples of the filtered past frame, filtered using the filter parameters of the past frame, for generating a second portion of filtered signal, and generating a ZIR of a linear predictive filter by filtering a frame of zero samples with the linear predictive filter and initial states equal to the second portion of filtered signal. 10. The method of claim 9, comprising windowing the ZIR such that its amplitude decreases faster to zero. 11. 
A non-transitory digital storage medium having stored thereon a computer program for performing a method for processing an audio signal, the method comprising: using linear predictive filtering for removing a discontinuity between a filtered past frame and a filtered current frame of the audio signal, wherein the method comprises filtering the current frame of the audio signal and removing the discontinuity by modifying a beginning portion of the filtered current frame by a signal acquired by linear predictive filtering a predefined signal with initial states of the linear predictive filter defined on the basis of a last part of the unfiltered past frame filtered using the set of filter parameters for filtering the current frame, when said computer program is run by a computer. 12. An apparatus for processing an audio signal, the apparatus comprising: a processor configured to use linear predictive filtering for removing a discontinuity between a filtered past frame and a filtered current frame of the audio signal, wherein the processor is configured to filter the current frame of the audio signal and remove the discontinuity by modifying a beginning portion of the filtered current frame by a signal acquired by linear predictive filtering a predefined signal with initial states of the linear predictive filter defined on the basis of a last part of the unfiltered past frame filtered using the set of filter parameters for filtering the current frame. 13. An audio decoder, comprising an apparatus of claim 12. 14. An audio encoder, comprising an apparatus of claim 12.
A method is described that processes an audio signal. A discontinuity between a filtered past frame and a filtered current frame of the audio signal is removed using linear predictive filtering.1. A method for processing an audio signal, the method comprising: using linear predictive filtering for removing a discontinuity between a filtered past frame and a filtered current frame of the audio signal, wherein the method comprises filtering the current frame of the audio signal and removing the discontinuity by modifying a beginning portion of the filtered current frame by a signal acquired by linear predictive filtering a predefined signal with initial states of the linear predictive filter defined on the basis of a last part of the unfiltered past frame filtered using the set of filter parameters for filtering the current frame. 2. The method of claim 1, further comprising estimating the linear predictive filter on the filtered or non-filtered audio signal. 3. The method of claim 2, wherein estimating the linear predictive filter comprises estimating the filter based on the past and/or current frame of the audio signal or based on the past filtered frame of the audio signal using the Levinson-Durbin algorithm. 4. The method of claim 1, wherein the linear predictive filter comprises a linear predictive filter of an audio codec. 5. The method of claim 1, wherein removing the discontinuity comprises processing the beginning portion of the filtered current frame, wherein the beginning portion of the current frame comprises a predefined number of samples being less than or equal to the total number of samples in the current frame, and wherein processing the beginning portion of the current frame comprises subtracting a beginning portion of a zero-input-response (ZIR) from the beginning portion of the filtered current frame. 6. 
The method of claim 5, comprising filtering the current frame of the audio signal using a non-recursive filter, like a FIR filter, for producing the filtered current frame. 7. The method of claim 5, comprising processing the unfiltered current frame of the audio signal on a sample-by-sample basis using a recursive filter, like an IIR filter, and wherein processing a sample of the beginning portion of the current frame comprises: filtering the sample with the recursive filter using the filter parameters of the current frame for producing a filtered sample, and subtracting a corresponding ZIR sample from the filtered sample for producing the corresponding sample of the filtered current frame. 8. The method of claim 7, wherein filtering and subtracting are repeated until the last sample in the beginning portion of the current frame is processed, and wherein the method further comprises filtering the remaining samples in the current frame with the recursive filter using the filter parameters of the current frame. 9. The method of claim 5, comprising generating the ZIR, wherein generating the ZIR comprises: filtering the M last samples of the unfiltered past frame with the filter and the filter parameters used for filtering the current frame for producing a first portion of filtered signal, wherein M is the order of the linear predictive filter, subtracting from the first portion of filtered signal the M last samples of the filtered past frame, filtered using the filter parameters of the past frame, for generating a second portion of filtered signal, and generating a ZIR of a linear predictive filter by filtering a frame of zero samples with the linear predictive filter and initial states equal to the second portion of filtered signal. 10. The method of claim 9, comprising windowing the ZIR such that its amplitude decreases faster to zero. 11. 
A non-transitory digital storage medium having stored thereon a computer program for performing a method for processing an audio signal, the method comprising: using linear predictive filtering for removing a discontinuity between a filtered past frame and a filtered current frame of the audio signal, wherein the method comprises filtering the current frame of the audio signal and removing the discontinuity by modifying a beginning portion of the filtered current frame by a signal acquired by linear predictive filtering a predefined signal with initial states of the linear predictive filter defined on the basis of a last part of the unfiltered past frame filtered using the set of filter parameters for filtering the current frame, when said computer program is run by a computer. 12. An apparatus for processing an audio signal, the apparatus comprising: a processor configured to use linear predictive filtering for removing a discontinuity between a filtered past frame and a filtered current frame of the audio signal, wherein the processor is configured to filter the current frame of the audio signal and remove the discontinuity by modifying a beginning portion of the filtered current frame by a signal acquired by linear predictive filtering a predefined signal with initial states of the linear predictive filter defined on the basis of a last part of the unfiltered past frame filtered using the set of filter parameters for filtering the current frame. 13. An audio decoder, comprising an apparatus of claim 12. 14. An audio encoder, comprising an apparatus of claim 12.
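Claims 5 through 10 of this record rest on the zero-input response (ZIR) of the linear predictive filter: the recursive filter is run with zero input from initial states taken at the frame boundary, and the decaying output is (optionally windowed and) subtracted from the beginning of the filtered current frame. The sketch below computes the ZIR of an all-pole LPC synthesis filter 1/A(z); the function name and the one-pole example are illustrative assumptions, not the patent's reference implementation.

```python
# Assumed sketch: zero-input response of an all-pole LPC synthesis filter
# 1/A(z), with A(z) = 1 + a[0] z^-1 + ... + a[M-1] z^-M, started from the
# M most recent output samples and driven by zero input. This is the signal
# the claims subtract from the beginning of the filtered current frame.

from typing import List

def zero_input_response(a: List[float], init_states: List[float], n: int) -> List[float]:
    """Return n samples of the ZIR of 1/A(z), where init_states holds the
    M most recent filter outputs, oldest first."""
    M = len(a)
    history = list(init_states)          # last M outputs, oldest first
    zir = []
    for _ in range(n):
        # With zero input: y[n] = -sum_k a[k] * y[n-1-k]
        y = -sum(a[k] * history[-1 - k] for k in range(M))
        zir.append(y)
        history.append(y)
    return zir

# A stable one-pole example: A(z) = 1 - 0.5 z^-1, so the ZIR decays as 0.5^n.
zir = zero_input_response([-0.5], [1.0], 4)   # -> [0.5, 0.25, 0.125, 0.0625]
```

Because a stable filter's ZIR decays toward zero on its own, the correction is confined to the beginning portion of the frame; claim 10 additionally windows the ZIR so that its amplitude decreases to zero even faster.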
2,600
10,395
10,395
15,458,832
2,621
A display comprises a matrix comprising a plurality of N rows divided into a plurality of M columns of cells, each cell including a light emitting device; a scan driver providing a plurality of N scan line signals to respective rows of said matrix, each for selecting a respective row of the matrix to be programmed with pixel values; and a data driver providing a plurality of M variable level data signals to respective columns of the matrix, each for programming a respective pixel within a selected row of the matrix with a pixel value. A pulse driver provides a plurality of N driving signals to respective rows of the matrix, each driving signal comprising successive sequences of pulses enabling the cells to emit light according to their programmed pixel values during respective sub-frames of successive frames to be displayed. The data driver is arranged to provide variable level data signals to respective pixels within a selected row of the matrix during a limited number of sub-frames of a frame, the variable data levels corresponding to a programmed value of a plurality of bits of a pixel value for a frame. The data driver is further arranged to provide data signals to respective pixels within a selected row of the matrix during a remaining number of sub-frames of a frame, the data signals each corresponding to a programmed value of a single bit of a pixel value for a frame.
1. A display comprising: a matrix comprising a plurality of N rows divided into a plurality of M columns of cells, each cell including a light emitting device; a scan driver providing a plurality of N scan line signals to respective rows of said matrix, each for selecting a respective row of said matrix to be programmed with pixel values; a data driver providing a plurality of M variable level data signals to respective columns of said matrix, each for programming a respective pixel within a selected row of said matrix with a pixel value; and a pulse driver providing a plurality of N driving signals to respective rows of said matrix, each driving signal comprising successive sequences of pulses enabling the cells to emit light according to their programmed pixel values during respective subframes of successive frames to be displayed; wherein said data driver is arranged to provide variable level data signals to respective pixels within a selected row of said matrix during a limited number of sub-frames of a frame, the variable data levels corresponding to a programmed value of a plurality of bits of a pixel value for a frame and wherein said data driver is arranged to provide data signals to respective pixels within a selected row of said matrix during a remaining number of subframes of a frame, the data signals each corresponding to a programmed value of a single bit of a pixel value for a frame. 2. A display according to claim 1 wherein each pixel is programmed according to a grayscale value and wherein the number of sub-frames is less than the number of gray-scale bits. 3. A display according to claim 1 wherein the limited number of sub-frames comprises a single sub-frame. 4. A display according to claim 1 wherein the limited number of sub-frames correspond with the least significant bits (LSB) of a pixel value for a frame. 5. 
A display according to claim 1 wherein the limited number of sub-frames correspond with either the 2 or 3 least significant bits (LSB) of a pixel value for a frame. 6. A display according to claim 1, wherein a sub-frame corresponding to the most-significant bit (MSB) of a pixel value for a frame has the longest sub-frame duration and a sub-frame corresponding to the least-significant bits of a pixel value for a frame has the shortest sub-frame duration. 7. The display of claim 1, wherein the limited number of sub-frames are variable according to the maximum resolution of said variable level data signals provided by said data driver. 8. The display of claim 1 wherein each cell comprises a first transistor connected to each of a scan driver signal line and a data driver signal line, said first transistor being connected to a second transistor, said second transistor being connected in series with a light emitting device, and a charge storage device connected between said first and second transistors, said scan driver signal line periodically actuating said first transistor to enable said data driver signal line to set a charge on said charge storage device for a subsequent sub-frame. 9. The display of claim 8 wherein a source for each second transistor of a row is connected in common to a pulse driving signal for the row. 10. The display of claim 8 wherein each light emitting device is connected between said pulse driving signal for the row and a source for the second transistor. 11. The display of claim 8 wherein each light emitting device is connected between said pulse driving signal for the row and a drain for the second transistor and wherein the source for each second transistor is connected to a common supply line. 12. The display of claim 11 wherein an amplitude of said driving pulses is less than the voltage of said common supply line. 13. The display of claim 1 wherein said light emitting devices comprise an inorganic light emitting diode (LED). 14. 
The display of claim 1 wherein each of said sequence of driving pulses comprises a stepped pulse with multiple intermediate voltage levels. 15. The display of claim 1 wherein a duration of the limited number of sub-frames is no greater than a shortest duration of a sub-frame from the remaining number of sub-frames. 16. The display of claim 1 wherein a duration of the limited number of sub-frames is approximately equal to a shortest duration of a sub-frame from the remaining number of sub-frames. 17. A display comprising: a matrix comprising a plurality of N rows divided into a plurality of M columns of cells, each cell including a light emitting device; a scan driver providing a plurality of N scan line signals to respective rows of said matrix, each for selecting a respective row of said matrix to be programmed with pixel values; a data driver providing a plurality of M variable level data signals to respective columns of said matrix, each for programming a respective pixel within a selected row of said matrix with a pixel value; and a pulse driver providing a plurality of N driving signals to respective rows of said matrix, each driving signal comprising successive sequences of pulses enabling the cells to emit light according to their programmed pixel values during respective subframes of successive frames to be displayed; wherein each of said sequence of driving pulses comprises a stepped pulse with multiple intermediate voltage levels. 18. 
A display according to claim 17, wherein each cell comprises a first transistor connected to each of a scan driver signal line and a data driver signal line, said first transistor being connected to a second transistor, said second transistor being connected in series with a light emitting device, and a charge storage device connected between said first and second transistors, said scan driver signal line periodically actuating said first transistor to enable said data driver signal line to set a charge on said charge storage device for a subsequent sub-frame. 19. A display comprising: a plurality of cells, each cell including a light emitting device; and a data driver providing variable level data signals to the matrix during a first sub-frame of a frame and providing data signals to the matrix during a second sub-frame of the frame, the variable level data signals of the first sub-frame corresponding to a first plurality of bits of a pixel value for the frame, the data signals of the second sub-frame corresponding to a single bit of the pixel value for the frame. 20. A display according to claim 19 further comprising a pulse driver providing driving signals to the matrix to enable the cells to emit light according to the variable level data signals during the first sub-frame and emit light according to the data signals during the second sub-frame.
A display comprises a matrix comprising a plurality of N rows divided into a plurality of M columns of cells, each cell including a light emitting device; a scan driver providing a plurality of N scan line signals to respective rows of said matrix, each for selecting a respective row of the matrix to be programmed with pixel values; and a data driver providing a plurality of M variable level data signals to respective columns of the matrix, each for programming a respective pixel within a selected row of the matrix with a pixel value. A pulse driver provides a plurality of N driving signals to respective rows of the matrix, each driving signal comprising successive sequences of pulses enabling the cells to emit light according to their programmed pixel values during respective sub-frames of successive frames to be displayed. The data driver is arranged to provide variable level data signals to respective pixels within a selected row of the matrix during a limited number of sub-frames of a frame, the variable data levels corresponding to a programmed value of a plurality of bits of a pixel value for a frame. The data driver is further arranged to provide data signals to respective pixels within a selected row of the matrix during a remaining number of sub-frames of a frame, the data signals each corresponding to a programmed value of a single bit of a pixel value for a frame.1. 
A display comprising: a matrix comprising a plurality of N rows divided into a plurality of M columns of cells, each cell including a light emitting device; a scan driver providing a plurality of N scan line signals to respective rows of said matrix, each for selecting a respective row of said matrix to be programmed with pixel values; a data driver providing a plurality of M variable level data signals to respective columns of said matrix, each for programming a respective pixel within a selected row of said matrix with a pixel value; and a pulse driver providing a plurality of N driving signals to respective rows of said matrix, each driving signal comprising successive sequences of pulses enabling the cells to emit light according to their programmed pixel values during respective subframes of successive frames to be displayed; wherein said data driver is arranged to provide variable level data signals to respective pixels within a selected row of said matrix during a limited number of sub-frames of a frame, the variable data levels corresponding to a programmed value of a plurality of bits of a pixel value for a frame and wherein said data driver is arranged to provide data signals to respective pixels within a selected row of said matrix during a remaining number of subframes of a frame, the data signals each corresponding to a programmed value of a single bit of a pixel value for a frame. 2. A display according to claim 1 wherein each pixel is programmed according to a grayscale value and wherein the number of sub-frames is less than the number of gray-scale bits. 3. A display according to claim 1 wherein the limited number of sub-frames comprises a single sub-frame. 4. A display according to claim 1 wherein the limited number of sub-frames correspond with the least significant bits (LSB) of a pixel value for a frame. 5. 
A display according to claim 1 wherein the limited number of sub-frames correspond with either the 2 or 3 least significant bits (LSB) of a pixel value for a frame. 6. A display according to claim 1, wherein a sub-frame corresponding to the most-significant bit (MSB) of a pixel value for a frame has the longest sub-frame duration and a sub-frame corresponding to the least-significant bits of a pixel value for a frame has the shortest sub-frame duration. 7. The display of claim 1, wherein the limited number of sub-frames are variable according to the maximum resolution of said variable level data signals provided by said data driver. 8. The display of claim 1 wherein each cell comprises a first transistor connected to each of a scan driver signal line and a data driver signal line, said first transistor being connected to a second transistor, said second transistor being connected in series with a light emitting device, and a charge storage device connected between said first and second transistors, said scan driver signal line periodically actuating said first transistor to enable said data driver signal line to set a charge on said charge storage device for a subsequent sub-frame. 9. The display of claim 8 wherein a source for each second transistor of a row is connected in common to a pulse driving signal for the row. 10. The display of claim 8 wherein each light emitting device is connected between said pulse driving signal for the row and a source for the second transistor. 11. The display of claim 8 wherein each light emitting device is connected between said pulse driving signal for the row and a drain for the second transistor and wherein the source for each second transistor is connected to a common supply line. 12. The display of claim 11 wherein an amplitude of said driving pulses is less than the voltage of said common supply line. 13. The display of claim 1 wherein said light emitting devices comprise an inorganic light emitting diode (LED). 14. 
The display of claim 1 wherein each of said sequence of driving pulses comprises a stepped pulse with multiple intermediate voltage levels. 15. The display of claim 1 wherein a duration of the limited number of sub-frames is no greater than a shortest duration of a sub-frame from the remaining number of sub-frames. 16. The display of claim 1 wherein a duration of the limited number of sub-frames is approximately equal to a shortest duration of a sub-frame from the remaining number of sub-frames. 17. A display comprising: a matrix comprising a plurality of N rows divided into a plurality of M columns of cells, each cell including a light emitting device; a scan driver providing a plurality of N scan line signals to respective rows of said matrix, each for selecting a respective row of said matrix to be programmed with pixel values; a data driver providing a plurality of M variable level data signals to respective columns of said matrix, each for programming a respective pixel within a selected row of said matrix with a pixel value; and a pulse driver providing a plurality of N driving signals to respective rows of said matrix, each driving signal comprising successive sequences of pulses enabling the cells to emit light according to their programmed pixel values during respective subframes of successive frames to be displayed; wherein each of said sequence of driving pulses comprises a stepped pulse with multiple intermediate voltage levels. 18. 
A display according to claim 17, wherein each cell comprises a first transistor connected to each of a scan driver signal line and a data driver signal line, said first transistor being connected to a second transistor, said second transistor being connected in series with a light emitting device, and a charge storage device connected between said first and second transistors, said scan driver signal line periodically actuating said first transistor to enable said data driver signal line to set a charge on said charge storage device for a subsequent sub-frame. 19. A display comprising: a plurality of cells, each cell including a light emitting device; and a data driver providing variable level data signals to the matrix during a first sub-frame of a frame and providing data signals to the matrix during a second sub-frame of the frame, the variable level data signals of the first sub-frame corresponding to a first plurality of bits of a pixel value for the frame, the data signals of the second sub-frame corresponding to a single bit of the pixel value for the frame. 20. A display according to claim 19 further comprising a pulse driver providing driving signals to the matrix to enable the cells to emit light according to the variable level data signals during the first sub-frame and emit light according to the data signals during the second sub-frame.
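One way to picture the hybrid drive scheme of claims 1-6 (binary-weighted single-bit sub-frames for the upper bits of a pixel value, plus one short variable-level sub-frame carrying the least significant bits) is a small encoder. The unit convention (one time slot at full drive equals 2**analog_bits brightness units) and all names below are illustrative assumptions, not specifics from the patent.

```python
def subframe_plan(gray, bits=8, analog_bits=3):
    """Encode a 'bits'-wide grayscale value as a list of
    (kind, duration, level) sub-frames: binary-weighted single-bit
    sub-frames for the upper bits, plus one short variable-level
    sub-frame carrying the 'analog_bits' least significant bits."""
    assert 0 <= gray < 2 ** bits
    plan = [("bit", 2 ** (b - analog_bits), (gray >> b) & 1)
            for b in range(bits - 1, analog_bits - 1, -1)]
    plan.append(("analog", 1, gray & ((1 << analog_bits) - 1)))
    return plan

def emitted_light(plan, analog_bits=3):
    """Light emitted over the frame, in units where one time slot at
    full drive equals 2 ** analog_bits brightness units."""
    full = 2 ** analog_bits
    return sum(duration * (level * full if kind == "bit" else level)
               for kind, duration, level in plan)
```

With bits=8 and analog_bits=3, a frame needs only six sub-frames rather than eight (cf. claim 2), the MSB sub-frame is the longest and the variable-level sub-frame is no longer than the shortest single-bit sub-frame (cf. claims 6 and 15), and every grayscale value is still reproduced exactly.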
2,600
10,396
10,396
15,089,867
2,667
A system for applying video data to a neural network (NN) for online multi-class multi-object tracking includes a computer programmed to perform an image classification method including the operations of receiving a video sequence; detecting candidate objects in each of a previous and a current video frame; transforming the previous and current video frames into a temporal difference input image; applying the temporal difference input image to a pre-trained neural network (NN) (or deep convolutional network) comprising an ordered sequence of layers; and based on a classification value received by the neural network, associating a pair of detected candidate objects in the previous and current frames as belonging to one of matching objects and different objects.
1. A system for applying video data to a neural network (NN) for online multi-class multi-object tracking, the system comprising: a computer programmed to perform a method for a classification of candidate object associations and including the operations of: receiving video frames from a video sequence; detecting candidate objects in each of a previous and a current video frame; transforming the previous and current video frames into a temporal difference input image; applying the temporal difference input image to a pre-trained neural network (NN) comprising an ordered sequence of layers; and based on a classification value received by the neural network, associating a pair of detected candidate objects in the previous and current frames as belonging to one of matching objects and different objects. 2. The system of claim 1 wherein the computer is further programmed to perform the operation of the transforming by: for each pair of the candidate objects detected in the previous and current frames, spatially aligning bounding boxes bounding each pair of the detected candidate objects; reshaping the pair of bounding boxes to a predetermined n x n number of pixels to generate a pair of rectified input images; and generating the temporal difference input image using the pair of rectified input images. 3. The system of claim 1 wherein the computer is further programmed to perform the operation of: linking the matching objects between the previous and current frames; and tracking a location of the matching objects across the video sequence. 4. The system of claim 1 wherein the temporal difference input image corresponding to a pair of matching objects in the previous and current frames represents a motion boundary. 5. The system of claim 1 wherein the current and previous frames are adjacent frames. 6. The system of claim 1 wherein the neural network is not limited to a particular class of objects. 7. 
The system of claim 1, wherein the neural network has been pre-trained to predict labels for an image, the training having been performed with a set of labeled training images. 8. The system of claim 1 wherein the computer is further programmed to perform a neural network training method comprising the operations of: for each set of pairwise training images, generating a training temporal difference input image representing a training sample by the same operations that generate the temporal difference input image; and training the neural network (NN) on a training set comprising the generated training temporal difference input images annotated by labels of the represented training samples. 9. The system of claim 8 wherein the set of training samples comprises a set of training images of objects, the labels of the training samples are labels of matching and non-matching objects captured in the pairwise image frames, and the classification value for the temporal difference input image that is generated by the neural network is effective to classify the objects detected in a pair of video frames as belonging to one of matching objects and different objects. 10. The system of claim 1 comprising: a memory which stores the pre-trained neural network; a candidate object detection component for detecting the candidate objects; a prediction component for predicting labels for the temporal difference input image using a forward pass of the neural network; and a processor in communication with the memory for implementing the prediction component. 11. The system of claim 1 further comprising: a video camera arranged to acquire the video frames as images of objects to be matched and tracked across the video sequence. 12. 
A non-transitory storage medium storing instructions readable and executable by a computer to perform a tracking method including the operations of: (a) detecting candidate objects in each of a previous and a current video frame; (b) transforming the previous and current video frames into a temporal difference input image; (c) applying the temporal difference input image to a pre-trained neural network (NN) (or deep convolutional network) comprising an ordered sequence of layers; and (d) based on a classification value received by the neural network, associating a pair of detected candidate objects in the previous and current frames as belonging to one of matching objects and different objects. 13. The non-transitory storage medium of claim 12 wherein the transforming operation (b) comprises: for each pair of the candidate objects detected in the previous and current frames, spatially aligning bounding boxes bounding each pair of the detected candidate objects; reshaping the pair of bounding boxes to a predetermined n x n number of pixels to generate a pair of rectified images; and generating the temporal difference input image using the pair of rectified images. 14. The non-transitory storage medium of claim 12 further performing the operations of: linking the matching objects between the previous and current frames; and tracking a location of the matching objects across the video sequence. 15. The non-transitory storage medium of claim 12 wherein the temporal difference input image corresponding to a pair of matching objects in the previous and current frames represents a motion boundary. 16. The non-transitory storage medium of claim 12 wherein the current and previous frames are adjacent frames. 17. The non-transitory storage medium of claim 12 wherein the neural network is not limited to a particular class of objects. 18. 
The non-transitory storage medium of claim 12 wherein the neural network has been pre-trained to predict labels for an image, the training having been performed with a set of labeled training images. 19. The non-transitory storage medium of claim 12 further performing the operation of training the neural network including the operations of: for each set of pairwise training images, generating a training temporal difference input image representing a training sample by the same operations that generate the temporal difference input image; and training the neural network (NN) on a training set comprising the generated training temporal difference input images annotated by labels of the represented training samples.
A system for applying video data to a neural network (NN) for online multi-class multi-object tracking includes a computer programmed to perform an image classification method including the operations of receiving a video sequence; detecting candidate objects in each of a previous and a current video frame; transforming the previous and current video frames into a temporal difference input image; applying the temporal difference input image to a pre-trained neural network (NN) (or deep convolutional network) comprising an ordered sequence of layers; and based on a classification value received by the neural network, associating a pair of detected candidate objects in the previous and current frames as belonging to one of matching objects and different objects.1. A system for applying video data to a neural network (NN) for online multi-class multi-object tracking, the system comprising: a computer programmed to perform a method for a classification of candidate object associations and including the operations of: receiving video frames from a video sequence; detecting candidate objects in each of a previous and a current video frame; transforming the previous and current video frames into a temporal difference input image; applying the temporal difference input image to a pre-trained neural network (NN) comprising an ordered sequence of layers; and based on a classification value received by the neural network, associating a pair of detected candidate objects in the previous and current frames as belonging to one of matching objects and different objects. 2. 
The system of claim 1 wherein the computer is further programmed to perform the operation of the transforming by: for each pair of the candidate objects detected in the previous and current frames, spatially aligning bounding boxes bounding each pair of the detected candidate objects; reshaping the pair of bounding boxes to a predetermined n x n number of pixels to generate a pair of rectified input images; and generating the temporal difference input image using the pair of rectified input images. 3. The system of claim 1 wherein the computer is further programmed to perform the operation of: linking the matching objects between the previous and current frames; and tracking a location of the matching objects across the video sequence. 4. The system of claim 1 wherein the temporal difference input image corresponding to a pair of matching objects in the previous and current frames represents a motion boundary. 5. The system of claim 1 wherein the current and previous frames are adjacent frames. 6. The system of claim 1 wherein the neural network is not limited to a particular class of objects. 7. The system of claim 1, wherein the neural network has been pre-trained to predict labels for an image, the training having been performed with a set of labeled training images. 8. The system of claim 1 wherein the computer is further programmed to perform a neural network training method comprising the operations of: for each set of pairwise training images, generating a training temporal difference input image representing a training sample by the same operations that generate the temporal difference input image; and training the neural network (NN) on a training set comprising the generated training temporal difference input images annotated by labels of the represented training samples. 9. 
The system of claim 8 wherein the set of training samples comprises a set of training images of objects, the labels of the training samples are labels of matching and non-matching objects captured in the pairwise image frames, and the classification value for the temporal difference input image that is generated by the neural network is effective to classify the objects detected in a pair of video frames as belonging to one of matching objects and different objects. 10. The system of claim 1 comprising: a memory which stores the pre-trained neural network; a candidate object detection component for detecting the candidate objects; a prediction component for predicting labels for the temporal difference input image using a forward pass of the neural network; and a processor in communication with the memory for implementing the prediction component. 11. The system of claim 1 further comprising: a video camera arranged to acquire the video frames as images of objects to be matched and tracked across the video sequence. 12. A non-transitory storage medium storing instructions readable and executable by a computer to perform a tracking method including the operations of: (a) detecting candidate objects in each of a previous and a current video frame; (b) transforming the previous and current video frames into a temporal difference input image; (c) applying the temporal difference input image to a pre-trained neural network (NN) (or deep convolutional network) comprising an ordered sequence of layers; and (d) based on a classification value received by the neural network, associating a pair of detected candidate objects in the previous and current frames as belonging to one of matching objects and different objects. 13. 
The non-transitory storage medium of claim 12 wherein the transforming operation (b) comprises: for each pair of the candidate objects detected in the previous and current frames, spatially aligning bounding boxes bounding each pair of the detected candidate objects; reshaping the pair of bounding boxes to a predetermined n x n number of pixels to generate a pair of rectified images; and generating the temporal difference input image using the pair of rectified images. 14. The non-transitory storage medium of claim 12 further performing the operations of: linking the matching objects between the previous and current frames; and tracking a location of the matching objects across the video sequence. 15. The non-transitory storage medium of claim 12 wherein the temporal difference input image corresponding to a pair of matching objects in the previous and current frames represents a motion boundary. 16. The non-transitory storage medium of claim 12 wherein the current and previous frames are adjacent frames. 17. The non-transitory storage medium of claim 12 wherein the neural network is not limited to a particular class of objects. 18. The non-transitory storage medium of claim 12 wherein the neural network has been pre-trained to predict labels for an image, the training having been performed with a set of labeled training images. 19. The non-transitory storage medium of claim 12 further performing the operation of training the neural network including the operations of: for each set of pairwise training images, generating a training temporal difference input image representing a training sample by the same operations that generate the temporal difference input image; and training the neural network (NN) on a training set comprising the generated training temporal difference input images annotated by labels of the represented training samples.
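The "transforming" step of claims 2 and 13 (spatially align the two detections' bounding boxes, reshape both crops to a predetermined n x n size, and difference them) can be sketched with NumPy. The nearest-neighbour resize and the (x0, y0, x1, y1) box convention are simplifying assumptions of this sketch; the patent does not prescribe a particular resampling method.

```python
import numpy as np

def rectify(frame, box, n=32):
    """Crop a detection's bounding box (x0, y0, x1, y1) and reshape it to
    n x n pixels; nearest-neighbour sampling keeps the sketch dependency-free."""
    x0, y0, x1, y1 = box
    crop = frame[y0:y1, x0:x1].astype(np.float32)
    rows = np.linspace(0, crop.shape[0] - 1, n).round().astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, n).round().astype(int)
    return crop[np.ix_(rows, cols)]

def temporal_difference_input(prev_frame, prev_box, cur_frame, cur_box, n=32):
    """Spatially aligned, rectified crop pair -> temporal difference input
    image; a matching pair largely cancels except at motion boundaries."""
    return rectify(cur_frame, cur_box, n) - rectify(prev_frame, prev_box, n)
```

When the two boxes track the same object, the rectified crops align and the difference image is near zero; a mismatched pair leaves large residuals, which is the signal the network classifies.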
2,600
10,397
10,397
15,502,798
2,667
The inventors of the present application developed novel optically active materials, methods, and systems for reading identifying information on an optically active article. Specifically, the present application relates to substantially simultaneously capturing and/or processing a first optically active image and a second optically active image. In some embodiments, the first optically active image is taken at a first wavelength and the second optically active image is taken at a second wavelength, wherein the first wavelength is different from the second wavelength. In one aspect, the present application relates to reading information on a license plate for purposes of vehicle identification.
1. A system for reading identifying information comprising: an optically active article including a first set of identifying information and a second set of identifying information, wherein the first set is detectable at a first wavelength and the second set is detectable at a second wavelength, different from the first wavelength; and an apparatus for substantially concurrently processing the first and second set of identifying information. 2. The system of claim 1, wherein the apparatus further includes a first sensor and a second sensor, the first sensor detecting at the first wavelength and the second sensor detecting at the second wavelength. 3. The system of claim 1, wherein the first wavelength is within the visible spectrum and the second wavelength is within the near infrared spectrum. 4. (canceled) 5. The system of claim 1, wherein the first set of identifying information is non-interfering in the second wavelength, and the second set of identifying information is non-interfering in the first wavelength. 6. (canceled) 7. The system of claim 1, wherein the first set of identifying information is human-readable, and the second set of identifying information is machine-readable. 8-10. (canceled) 11. The system of claim 1, wherein the optically active article is non-retroreflective or retroreflective. 12. The system of claim 1, wherein the optically active article is at least one of a license plate or signage. 13-17. (canceled) 18. The system of claim 2, wherein the first sensor concurrently produces a first image as illuminated by the first wavelength and the second sensor produces a second image as illuminated by the second wavelength. 19. The system of claim 1, wherein the first set of identifying information is processed within 40 milliseconds or less from the processing of the second set of identifying information. 20-21. (canceled) 22. 
A method of reading an optically active article comprising: substantially simultaneously exposing an optically active article to radiation having a first wavelength and radiation having a second wavelength, the second wavelength being different from the first wavelength; and substantially concurrently capturing a first optically active article image at the first wavelength and a second optically active article image at the second wavelength. 23. The method of claim 22, wherein the optically active article comprises first identifying information and second identifying information, wherein the first identifying information is substantially visible at the first wavelength and non-interfering in the second wavelength, and the second identifying information is not substantially visible at the first wavelength and is detectable in the second wavelength. 24-25. (canceled) 26. The method of claim 22, wherein the optically active article is non-retroreflective or retroreflective. 27-29. (canceled) 30. The method of claim 22, further comprising: performing optical character recognition of at least one of the first identifying information and the second identifying information. 31. The method of claim 22, wherein the first optically active article image is captured within 40 milliseconds or less from the capturing of the second optically active article image. 32-33. (canceled) 34. An apparatus for reading an optically active article comprising: a first channel detecting at a first wavelength; and a second channel detecting at a second wavelength; wherein the apparatus substantially concurrently captures at least a first image of the optically active article through the first channel and a second image of the optically active article through the second channel. 35-38. (canceled) 39. The apparatus of claim 34, wherein the first image is captured within 40 milliseconds or less from the capturing of the second image. 40-41. (canceled) 42. 
The apparatus of claim 34, wherein information gleaned from the first image is used to facilitate processing of the second image. 43. The apparatus of claim 34, wherein information gleaned from the second image is used to facilitate processing of the first image. 44. A method of reading optically active articles comprising: providing a first optically active article that is non-retroreflective; providing a second optically active article that is retroreflective; substantially simultaneously exposing the first and second optically active articles to radiation having a first wavelength and radiation having a second wavelength, the second wavelength being different from the first wavelength; and substantially concurrently capturing an image of the first optically active article at the first wavelength and capturing an image of the second optically active article at the second wavelength. 45. The method of claim 44, wherein the first wavelength is within the visible spectrum and the second wavelength is within the infrared spectrum.
The inventors of the present application developed novel optically active materials, methods, and systems for reading identifying information on an optically active article. Specifically, the present application relates to substantially simultaneously capturing and/or processing a first optically active image and a second optically active image. In some embodiments, the first optically active image is taken at a first wavelength and the second optically active image is taken at a second wavelength, wherein the first wavelength is different from the second wavelength. In one aspect, the present application relates to reading information on a license plate for purposes of vehicle identification.1. A system for reading identifying information comprising: an optically active article including a first set of identifying information and a second set of identifying information, wherein the first set is detectable at a first wavelength and the second set is detectable at a second wavelength, different from the first wavelength; and an apparatus for substantially concurrently processing the first and second set of identifying information. 2. The system of claim 1, wherein the apparatus further includes a first sensor and a second sensor, the first sensor detecting at the first wavelength and the second sensor detecting at the second wavelength. 3. The system of claim 1, wherein the first wavelength is within the visible spectrum and the second wavelength is within the near infrared spectrum. 4. (canceled) 5. The system of claim 1, wherein the first set of identifying information is non-interfering in the second wavelength, and the second set of identifying information is non-interfering in the first wavelength. 6. (canceled) 7. The system of claim 1, wherein the first set of identifying information is human-readable, and the second set of identifying information is machine-readable. 8-10. (canceled) 11. 
The system of claim 1, wherein the optically active article is non-retroreflective or retroreflective. 12. The system of claim 1, wherein the optically active article is at least one of a license plate or signage. 13-17. (canceled) 18. The system of claim 2, wherein the first sensor concurrently produces a first image as illuminated by the first wavelength and the second sensor produces a second image as illuminated by the second wavelength. 19. The system of claim 1, wherein the first set of identifying information is processed within 40 milliseconds or less from the processing of the second set of identifying information. 20-21. (canceled) 22. A method of reading an optically active article comprising: substantially simultaneously exposing an optically active article to radiation having a first wavelength and radiation having a second wavelength, the second wavelength being different from the first wavelength; and substantially concurrently capturing a first optically active article image at the first wavelength and a second optically active article image at the second wavelength. 23. The method of claim 22, wherein the optically active article comprises first identifying information and second identifying information, wherein the first identifying information is substantially visible at the first wavelength and non-interfering in the second wavelength, and the second identifying information is not substantially visible at the first wavelength and is detectable in the second wavelength. 24-25. (canceled) 26. The method of claim 22, wherein the optically active article is non-retroreflective or retroreflective. 27-29. (canceled) 30. The method of claim 22, further comprising: performing optical character recognition of at least one of the first identifying information and the second identifying information. 31. 
The method of claim 22, wherein the first optically active article image is captured within 40 milliseconds or less from the capturing of the second optically active article image. 32-33. (canceled) 34. An apparatus for reading an optically active article comprising: a first channel detecting at a first wavelength; and a second channel detecting at a second wavelength; wherein the apparatus substantially concurrently captures at least a first image of the optically active article through the first channel and a second image of the optically active article through the second channel. 35-38. (canceled) 39. The apparatus of claim 34, wherein the first image is captured within 40 milliseconds or less from the capturing of the second image. 40-41. (canceled) 42. The apparatus of claim 34, wherein information gleaned from the first image is used to facilitate processing of the second image. 43. The apparatus of claim 34, wherein information gleaned from the second image is used to facilitate processing of the first image. 44. A method of reading optically active articles comprising: providing a first optically active article that is non-retroreflective; providing a second optically active article that is retroreflective; substantially simultaneously exposing the first and second optically active articles to radiation having a first wavelength and radiation having a second wavelength, the second wavelength being different from the first wavelength; and substantially concurrently capturing an image of the first optically active article at the first wavelength and capturing an image of the second optically active article at the second wavelength. 45. The method of claim 44, wherein the first wavelength is within the visible spectrum and the second wavelength is within the infrared spectrum.
2,600
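The dual-wavelength reading system above turns on two constraints: the two sets of identifying information are captured through separate channels, and captures or processing count as "substantially concurrent" when they fall within 40 milliseconds of each other (claims 19, 31, and 39). The sketch below is a minimal, hypothetical illustration of that pairing logic; the `Capture` record, function names, and wavelength values are assumptions for illustration, not taken from the application.

```python
from dataclasses import dataclass
from typing import Optional

# Threshold from the claims: captures within 40 ms are treated
# as "substantially concurrent".
CONCURRENCY_WINDOW_MS = 40.0

@dataclass
class Capture:
    wavelength_nm: float   # e.g. 550 (visible) or 850 (near infrared) - illustrative
    timestamp_ms: float    # capture time on a shared clock
    identifying_info: str  # decoded text, e.g. a plate number

def substantially_concurrent(a: Capture, b: Capture,
                             window_ms: float = CONCURRENCY_WINDOW_MS) -> bool:
    """True when the two captures fall within the concurrency window."""
    return abs(a.timestamp_ms - b.timestamp_ms) <= window_ms

def fuse_reads(visible: Capture, infrared: Capture) -> Optional[dict]:
    """Combine the two reads only when they were captured concurrently,
    so both sets of identifying information describe the same article."""
    if not substantially_concurrent(visible, infrared):
        return None  # captures too far apart; re-trigger the sensors
    return {
        "human_readable": visible.identifying_info,
        "machine_readable": infrared.identifying_info,
    }
```

Because the first set of information is non-interfering at the second wavelength (and vice versa), each channel's decode can proceed independently before the fusion step pairs them by time.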
10,398
10,398
15,174,209
2,689
A system and method for automated discovery of wireless locks in a security system allows installers to assign each wireless lock to a slot on a wireless hub that provides wireless communications to the wireless locks. Device controllers poll the wireless hubs to discover the communications paths to each wireless lock. The device controllers store the information obtained from the polling, and present the information to a control system that manages the wireless locks. This eliminates the current practice of manually updating assignment information on the control system between wireless hub slots and the wireless locks in response to additions, deletions, or relocations of wireless locks within the security system. This is especially useful in installations that include hundreds or thousands of wireless locks within office buildings, hotels, or conference centers.
1. A security system providing discovery of wireless security devices, comprising: a control system for managing the wireless security devices; one or more wireless hubs for enabling wireless connections to the wireless security devices; and device controllers that communicate with the control system over communications channels and poll the wireless security devices of the wireless hubs to discover changes to the wireless security devices. 2. The system of claim 1, wherein the wireless security devices are wireless door locks which include a user credential reader for reading a user's credentials from an access card. 3. The system of claim 1, wherein the changes to the wireless security devices include: adding new wireless security devices to the wireless hubs; removing the wireless security devices from the wireless hubs; and/or changing the locations of the wireless security devices on the wireless hubs. 4. The system of claim 1, further comprising channel controllers that provide the communications channels, and define a virtual controller ID (“VCID”) for the communications channels. 5. The system of claim 1, wherein the control system assigns IDs of the wireless security devices to virtual controller IDs (“VCID”) of the communications channels for managing the wireless security devices. 6. The system of claim 5, wherein the device controllers request the virtual controller IDs (“VCID”) associated with the assigned IDs of the wireless security devices from the control system. 7. The system of claim 1, wherein: IDs of the wireless security devices are assigned to slots of the wireless hubs for enabling the wireless connections to the wireless security devices; and the device controllers poll the wireless hubs to discover the slots assigned to the wireless security devices, and to discover the slots unassigned to the wireless security devices. 8. 
The system of claim 1, wherein the wireless hubs provide path information to the wireless security devices, the path information including an ID of the wireless hubs, and slots of the wireless hubs. 9. The system of claim 1, wherein the device controllers poll path information of the wireless hubs to retrieve requests from the wireless security devices assigned to slots of the wireless hubs. 10. The system of claim 1, wherein the device controllers poll path information of the wireless hubs for slots of the wireless hubs unassigned to the wireless security devices, to determine new, relocated, or deleted wireless security devices. 11. The system of claim 1, wherein the device controllers further include a polling daemon for polling path information of the wireless hubs to the wireless security devices. 12. The system of claim 1, wherein in response to receiving the messages from the control system over the communications channels for communicating with the wireless security devices, the messages including requested IDs of the wireless security devices: the device controllers search path information of slots of the wireless hubs that includes IDs of the wireless security devices assigned to the slots, and upon finding a match between the requested IDs and the IDs of the wireless security devices assigned to the slots, return messages that include an acknowledgment of the match to the control system. 13. The system of claim 1, wherein the device controllers: store path information for slots of the wireless hubs assigned to the wireless security devices, and for the slots of the wireless hubs unassigned to the wireless security devices, and provide IDs of the wireless security devices for the assigned slots to the control system. 14. 
A method for discovering wireless security devices in a security system including a control system, one or more wireless hubs, and device controllers, the method comprising: the control system managing the wireless security devices; the wireless hubs enabling wireless connections to the wireless security devices; and the device controllers communicating with the control system over communications channels and polling the wireless security devices via the wireless hubs to discover changes to the wireless security devices. 15. The method of claim 14, further comprising the changes to the wireless security devices including: adding new wireless security devices to the wireless hubs; removing the wireless security devices from the wireless hubs; and/or changing the locations of the wireless security devices on the wireless hubs. 16. The method of claim 14, further comprising the control system assigning IDs of the wireless security devices to virtual controller IDs (“VCID”) of the communications channels for managing the wireless security devices. 17. The method of claim 16, further comprising the device controllers requesting the virtual controller IDs (“VCID”) associated with the assigned IDs of the wireless security devices from the control system. 18. The method of claim 14, further comprising: assigning IDs of the wireless security devices to slots of the wireless hubs for enabling the wireless connections to the wireless security devices; and the device controllers polling the wireless hubs to discover the slots assigned to the wireless security devices, and to discover the slots unassigned to the wireless security devices. 19. The method of claim 14, further comprising the wireless hubs providing path information to the wireless security devices, the path information including an ID of the wireless hubs, and slots of the wireless hubs. 20. 
A method for discovering wireless security devices in a security system including a control system, one or more wireless hubs, and device controllers, the method comprising: the control system managing the wireless security devices; the wireless hubs enabling wireless connections to the wireless security devices; and the device controllers communicating with the control system and polling the wireless security devices via the wireless hubs to discover changes to the wireless security devices, including adding new wireless security devices to the wireless hubs, removing the wireless security devices from the wireless hubs, and/or changing the locations of the wireless security devices on the wireless hubs.
A system and method for automated discovery of wireless locks in a security system allows installers to assign each wireless lock to a slot on a wireless hub that provides wireless communications to the wireless locks. Device controllers poll the wireless hubs to discover the communications paths to each wireless lock. The device controllers store the information obtained from the polling, and present the information to a control system that manages the wireless locks. This eliminates the current practice of manually updating assignment information on the control system between wireless hub slots and the wireless locks in response to additions, deletions, or relocations of wireless locks within the security system. This is especially useful in installations that include hundreds or thousands of wireless locks within office buildings, hotels, or conference centers.1. A security system providing discovery of wireless security devices, comprising: a control system for managing the wireless security devices; one or more wireless hubs for enabling wireless connections to the wireless security devices; and device controllers that communicate with the control system over communications channels and poll the wireless security devices of the wireless hubs to discover changes to the wireless security devices. 2. The system of claim 1, wherein the wireless security devices are wireless door locks which include a user credential reader for reading a user's credentials from an access card. 3. The system of claim 1, wherein the changes to the wireless security devices include: adding new wireless security devices to the wireless hubs; removing the wireless security devices from the wireless hubs; and/or changing the locations of the wireless security devices on the wireless hubs. 4. The system of claim 1, further comprising channel controllers that provide the communications channels, and define a virtual controller ID (“VCID”) for the communications channels. 5. 
The system of claim 1, wherein the control system assigns IDs of the wireless security devices to virtual controller IDs (“VCID”) of the communications channels for managing the wireless security devices. 6. The system of claim 5, wherein the device controllers request the virtual controller IDs (“VCID”) associated with the assigned IDs of the wireless security devices from the control system. 7. The system of claim 1, wherein: IDs of the wireless security devices are assigned to slots of the wireless hubs for enabling the wireless connections to the wireless security devices; and the device controllers poll the wireless hubs to discover the slots assigned to the wireless security devices, and to discover the slots unassigned to the wireless security devices. 8. The system of claim 1, wherein the wireless hubs provide path information to the wireless security devices, the path information including an ID of the wireless hubs, and slots of the wireless hubs. 9. The system of claim 1, wherein the device controllers poll path information of the wireless hubs to retrieve requests from the wireless security devices assigned to slots of the wireless hubs. 10. The system of claim 1, wherein the device controllers poll path information of the wireless hubs for slots of the wireless hubs unassigned to the wireless security devices, to determine new, relocated, or deleted wireless security devices. 11. The system of claim 1, wherein the device controllers further include a polling daemon for polling path information of the wireless hubs to the wireless security devices. 12. 
The system of claim 1, wherein in response to receiving the messages from the control system over the communications channels for communicating with the wireless security devices, the messages including requested IDs of the wireless security devices: the device controllers search path information of slots of the wireless hubs that includes IDs of the wireless security devices assigned to the slots, and upon finding a match between the requested IDs and the IDs of the wireless security devices assigned to the slots, return messages that include an acknowledgment of the match to the control system. 13. The system of claim 1, wherein the device controllers: store path information for slots of the wireless hubs assigned to the wireless security devices, and for the slots of the wireless hubs unassigned to the wireless security devices, and provide IDs of the wireless security devices for the assigned slots to the control system. 14. A method for discovering wireless security devices in a security system including a control system, one or more wireless hubs, and device controllers, the method comprising: the control system managing the wireless security devices; the wireless hubs enabling wireless connections to the wireless security devices; and the device controllers communicating with the control system over communications channels and polling the wireless security devices via the wireless hubs to discover changes to the wireless security devices. 15. The method of claim 14, further comprising the changes to the wireless security devices including: adding new wireless security devices to the wireless hubs; removing the wireless security devices from the wireless hubs; and/or changing the locations of the wireless security devices on the wireless hubs. 16. 
The method of claim 14, further comprising the control system assigning IDs of the wireless security devices to virtual controller IDs (“VCID”) of the communications channels for managing the wireless security devices. 17. The method of claim 16, further comprising the device controllers requesting the virtual controller IDs (“VCID”) associated with the assigned IDs of the wireless security devices from the control system. 18. The method of claim 14, further comprising: assigning IDs of the wireless security devices to slots of the wireless hubs for enabling the wireless connections to the wireless security devices; and the device controllers polling the wireless hubs to discover the slots assigned to the wireless security devices, and to discover the slots unassigned to the wireless security devices. 19. The method of claim 14, further comprising the wireless hubs providing path information to the wireless security devices, the path information including an ID of the wireless hubs, and slots of the wireless hubs. 20. A method for discovering wireless security devices in a security system including a control system, one or more wireless hubs, and device controllers, the method comprising: the control system managing the wireless security devices; the wireless hubs enabling wireless connections to the wireless security devices; and the device controllers communicating with the control system and polling the wireless security devices via the wireless hubs to discover changes to the wireless security devices, including adding new wireless security devices to the wireless hubs, removing the wireless security devices from the wireless hubs, and/or changing the locations of the wireless security devices on the wireless hubs.
2,600
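The discovery mechanism in the record above rests on one comparison: a device controller polls a hub's slots, then diffs the current slot-to-lock assignments against the previous poll to detect added, removed, or relocated locks, which is what removes the manual bookkeeping on the control system. The sketch below shows that diff step only; the slot dictionary shape, lock IDs, and function name are hypothetical choices for illustration, not the application's wire format.

```python
from typing import Dict, Optional

# Hypothetical model of one hub: numbered slots, each either empty (None)
# or holding the ID of the wireless lock assigned to it.
HubSlots = Dict[int, Optional[str]]

def diff_slots(previous: HubSlots, current: HubSlots) -> dict:
    """Compare two polls of the same hub and report lock additions,
    removals, and relocations between slots."""
    # Invert slot -> lock into lock -> slot, ignoring empty slots.
    prev_locks = {lock: slot for slot, lock in previous.items() if lock}
    curr_locks = {lock: slot for slot, lock in current.items() if lock}
    added = sorted(set(curr_locks) - set(prev_locks))
    removed = sorted(set(prev_locks) - set(curr_locks))
    # A lock present in both polls but in a different slot has moved.
    moved = sorted(lock for lock in set(prev_locks) & set(curr_locks)
                   if prev_locks[lock] != curr_locks[lock])
    return {"added": added, "removed": removed, "moved": moved}
```

In an installation with hundreds or thousands of locks, running this diff per hub on each polling cycle is what lets the controllers present up-to-date path information (hub ID plus slot) to the control system automatically.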
10,399
10,399
15,377,096
2,625
One illustrative device disclosed herein includes a proximity sensor capable of detecting a non-contact interaction with a touch-sensitive device and outputting a first sensor signal. The device also includes a touch sensor for detecting a touch on the touch-sensitive device and outputting a second sensor signal. The disclosed device also includes a processor configured to receive the first and second sensor signals, generate a haptic output signal based at least in part on the first and second sensor signals, and transmit the haptic output signal to a haptic output device. The haptic output device in the disclosed device then outputs the haptic effect.
1. A device comprising: a proximity sensor capable of detecting a non-contact interaction with a touch-sensitive device and outputting a first sensor signal; a touch sensor capable of detecting a touch with the touch-sensitive input device and outputting a second sensor signal; a haptic output device configured to receive a haptic output signal and output a haptic effect in response to the haptic output signal; and a processor configured to: receive the first sensor signal and the second sensor signal; generate a haptic output signal based at least in part on the first and second sensor signals; and transmit the haptic output signal to the haptic output device. 2. The device of claim 1, wherein the proximity sensor is integrated into the touch-sensitive input device. 3. The device of claim 1, wherein the proximity sensor comprises one of a capacitive sensor or an optical sensor. 4. The device of claim 1, wherein the haptic output signal is designed to direct a user to contact the touch-sensitive device at a particular location. 5. The device of claim 1, wherein an integrated sensor comprises the proximity sensor and the touch sensor. 6. The device of claim 5, wherein the integrated sensor further comprises the haptic output device. 7. The device of claim 5, wherein the integrated sensor comprises an electro-active polymer. 8. The device of claim 7, wherein the electro-active polymer comprises a first side with a uniform electrode pattern layer affixed thereto and a second side with a network electrode pattern layer affixed thereto. 9. The device of claim 8, further comprising an elastomer layer having a first side with a network pattern electrode layer affixed thereto and a second side configured adjacent to the second side of the electro-active polymer. 10. The device of claim 9, further comprising an insulator layer adjacent to the first side of the electro-active polymer. 11. 
The device of claim 3, wherein the optical sensor comprises one of a laser, an infra-red emitter, or a photo-resistor. 12. The device of claim 1, wherein the haptic output device comprises an electrostatic friction actuator. 13. The device of claim 1, wherein the haptic output device comprises an air puff actuator. 14. The device of claim 1, wherein the touch sensor is capable of detecting a pressure and wherein the haptic output signal is based at least in part on the detected pressure. 15. A method comprising: detecting, by a proximity sensor, a non-contact interaction with a touch-sensitive device; detecting, by a touch sensor, a touch on the touch-sensitive input device; transmitting a first sensor signal associated with the non-contact interaction to a processor; transmitting a second sensor signal associated with the touch to a processor; receiving, by the processor, the first and second sensor signals; generating, by the processor, a haptic output signal based at least in part on the first and second sensor signals; transmitting, by the processor, the haptic output signal to a haptic output device; and outputting, by the haptic output device, a haptic effect in response to the haptic output signal. 16. The method of claim 15, wherein the haptic output signal is designed to direct a user to contact the touch-sensitive device at a particular location. 17. The method of claim 15, wherein outputting a haptic effect comprises outputting, by an electrostatic friction actuator, an electrostatic friction haptic effect. 18. The method of claim 15, wherein outputting a haptic effect comprises outputting, by an air puff actuator, an air puff haptic effect. 19. 
A computer-readable non-transitory medium encoded with executable program code, the computer-readable medium comprising: program code for detecting, by a proximity sensor, a non-contact interaction with a touch-sensitive device; program code for detecting, by a touch sensor, a touch on the touch-sensitive input device; program code for transmitting a first sensor signal associated with the non-contact interaction to a processor; program code for transmitting a second sensor signal associated with the touch to a processor; program code for receiving, by the processor, the first and second sensor signals; program code for generating, by the processor, a haptic output signal based at least in part on the first and second sensor signals; and program code for transmitting the haptic output signal to a haptic output device.
One illustrative device disclosed herein includes a proximity sensor capable of detecting a non-contact interaction with a touch-sensitive device and outputting a first sensor signal. The device also includes a touch sensor for detecting a touch on the touch-sensitive device and outputting a second sensor signal. The disclosed device also includes a processor configured to receive the first and second sensor signals, generate a haptic output signal based at least in part on the first and second sensor signals, and transmit the haptic output signal to a haptic output device. The haptic output device in the disclosed device then outputs the haptic effect.1. A device comprising: a proximity sensor capable of detecting a non-contact interaction with a touch-sensitive device and outputting a first sensor signal; a touch sensor capable of detecting a touch with the touch-sensitive input device and outputting a second sensor signal; a haptic output device configured to receive a haptic output signal and output a haptic effect in response to the haptic output signal; and a processor configured to: receive the first sensor signal and the second sensor signal; generate a haptic output signal based at least in part on the first and second sensor signals; and transmit the haptic output signal to the haptic output device. 2. The device of claim 1, wherein the proximity sensor is integrated into the touch-sensitive input device. 3. The device of claim 1, wherein the proximity sensor comprises one of a capacitive sensor or an optical sensor. 4. The device of claim 1, wherein the haptic output signal is designed to direct a user to contact the touch-sensitive device at a particular location. 5. The device of claim 1, wherein an integrated sensor comprises the proximity sensor and the touch sensor. 6. The device of claim 5, wherein the integrated sensor further comprises the haptic output device. 7. 
The device of claim 5, wherein the integrated sensor comprises an electro-active polymer. 8. The device of claim 7, wherein the electro-active polymer comprises a first side with a uniform electrode pattern layer affixed thereto and a second side with a network electrode pattern layer affixed thereto. 9. The device of claim 8, further comprising an elastomer layer having a first side with a network pattern electrode layer affixed thereto and a second side configured adjacent to the second side of the electro-active polymer. 10. The device of claim 9, further comprising an insulator layer adjacent to the first side of the electro-active polymer. 11. The device of claim 3, wherein the optical sensor comprises one of a laser, an infra-red emitter, or a photo-resistor. 12. The device of claim 1, wherein the haptic output device comprises an electrostatic friction actuator. 13. The device of claim 1, wherein the haptic output device comprises an air puff actuator. 14. The device of claim 1, wherein the touch sensor is capable of detecting a pressure and wherein the haptic output signal is based at least in part on the detected pressure. 15. A method comprising: detecting, by a proximity sensor, a non-contact interaction with a touch-sensitive device; detecting, by a touch sensor, a touch on the touch-sensitive input device; transmitting a first sensor signal associated with the non-contact interaction to a processor; transmitting a second sensor signal associated with the touch to a processor; receiving, by the processor, the first and second sensor signals; generating, by the processor, a haptic output signal based at least in part on the first and second sensor signals; transmitting, by the processor, the haptic output signal to a haptic output device; and outputting, by the haptic output device, a haptic effect in response to the haptic output signal. 16. 
The method of claim 15, wherein the haptic output signal is designed to direct a user to contact the touch-sensitive device at a particular location. 17. The method of claim 15, wherein outputting a haptic effect comprises outputting, by an electrostatic friction actuator, an electrostatic friction haptic effect. 18. The method of claim 15, wherein outputting a haptic effect comprises outputting, by an air puff actuator, an air puff haptic effect. 19. A computer-readable non-transitory medium encoded with executable program code, the computer-readable medium comprising: program code for detecting, by a proximity sensor, a non-contact interaction with a touch-sensitive device; program code for detecting, by a touch sensor, a touch on the touch-sensitive input device; program code for transmitting a first sensor signal associated with the non-contact interaction to a processor; program code for transmitting a second sensor signal associated with the touch to a processor; program code for receiving, by the processor, the first and second sensor signals; program code for generating, by the processor, a haptic output signal based at least in part on the first and second sensor signals; and program code for transmitting the haptic output signal to a haptic output device.
2,600
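The haptic device in the record above generates one output signal from two inputs: the proximity sensor's non-contact reading and the touch sensor's contact (optionally pressure) reading, with the output based at least in part on both (claims 1 and 14). The sketch below is a hypothetical blending function showing one way such a signal could be derived; the 50 mm hover range, the blending rule, and all names are illustrative assumptions, not the application's method.

```python
from typing import Optional

def haptic_output_signal(proximity_mm: Optional[float],
                         touch_pressure: float,
                         max_intensity: float = 1.0) -> float:
    """Blend the first (proximity) and second (touch) sensor signals
    into a single haptic drive level in [0, max_intensity]."""
    # No hover detected and no contact: drive nothing.
    if proximity_mm is None and touch_pressure == 0.0:
        return 0.0
    # Hover term: ramps from 0 at >= 50 mm to 1.0 at contact distance.
    hover = 0.0
    if proximity_mm is not None:
        hover = max(0.0, 1.0 - min(proximity_mm, 50.0) / 50.0)
    # Pressure term: normalized touch pressure, clamped to 1.0 (claim 14).
    press = min(touch_pressure, 1.0)
    # Let the stronger of the two signals set the effect strength.
    return max_intensity * max(hover, press)
```

A processor loop would evaluate this on each sensor update and transmit the result to the haptic output device (e.g. an electrostatic friction or air-puff actuator, per the claims).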