Dataset schema (column name, dtype, value or string-length range):

Unnamed: 0         int64    0 to 350k
level_0            int64    0 to 351k
ApplicationNumber  int64    9.75M to 96.1M
ArtUnit            int64    1.6k to 3.99k
Abstract           string   lengths 1 to 8.37k
Claims             string   lengths 3 to 292k
abstract-claims    string   lengths 68 to 293k
TechCenter         int64    1.6k to 3.9k
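The schema above can be exercised with a short sketch. The sample values are taken from the three records in this chunk; the derivations (TechCenter as the art unit rounded to the hundreds, abstract-claims as a plain concatenation) are inferred from the dump rather than documented anywhere in it:

```python
import pandas as pd

# Three rows visible in this chunk (Abstract/Claims texts elided).
df = pd.DataFrame({
    "ApplicationNumber": [14465256, 14016264, 15305271],
    "ArtUnit": [2652, 2693, 2672],
    "Abstract": ["...", "...", "..."],
    "Claims": ["...", "...", "..."],
})

# The TechCenter values in these records (2,600) are the ArtUnit
# rounded down to the hundreds: 2652 -> 2600, 2693 -> 2600.
df["TechCenter"] = df["ArtUnit"] // 100 * 100

# abstract-claims simply concatenates the two text fields, which is
# why that column repeats the Abstract and Claims of each record.
df["abstract-claims"] = df["Abstract"] + df["Claims"]
```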
Record 9,800 (index 9,800, level_0 9,800)
ApplicationNumber: 14,465,256
ArtUnit: 2,652
Abstract:
An approach for simulating a telephony system to enable the evaluation of rules employed by the system for routing inbound calls to representatives is described. A simulation manager receives call production data, representative production data, or a combination thereof as generated by a telephony system during a production run for directing one or more inbound calls to one or more representatives. The simulation manager also generates an expected call handling response of the telephony system during a subsequent production run due to (a) one or more subsequent inbound calls, (b) a change associated with the one or more representatives, (c) a change associated with the call handling rules, or (d) a combination thereof based on execution of a simulation of the execution of the telephony system.
1. A method comprising: receiving call production data, representative production data, or a combination thereof as generated by a telephony system during a production run for directing one or more inbound calls to one or more representatives; simulating an execution of the telephony system for processing at least a portion of the one or more inbound calls based, at least in part, on the call production data, the representative production data, call handling rules associated with the telephony system, one or more representative profiles, or a combination thereof; and generating an expected call handling response of the telephony system during a subsequent production run due to (a) one or more subsequent inbound calls, (b) a change associated with the one or more representatives, (c) a change associated with the call handling rules, or (d) a combination thereof based on execution of the simulation. 2. A method of claim 1, further comprising: determining schedule information, availability information, a skill level or a combination thereof associated with the one or more representatives based on the one or more representative profiles, wherein the change associated with the one or more representatives is based on a change of the schedule information, availability information, skill level, or a combination thereof. 3. A method of claim 2, wherein the call handling rules specify how the one or more inbound calls are to be routed to the one or more representatives based on the determination. 4. A method of claim 1, further comprising: determining at least one service type, at least one skill type, or a combination thereof associated with each of the one or more inbound calls, the one or more representatives, or a combination thereof, wherein the simulated execution of the telephony system, the expected call handling response of the telephony system, or a combination thereof is based at least in part on the determination. 5. 
A method of claim 4, further comprising: determining the one or more inbound calls are placed in at least one call queue associated with the at least one service type, the at least one skill type, or a combination thereof based on the simulation; and calculating a value for indicating an effectiveness of assignment of the one or more inbound calls within the at least one call queue to at least one of the one or more representatives based, at least in part, on (a) a skill level associated with the one or more representatives, (b) another skill level associated with the one or more representatives, (c) a wait duration associated with the one or more inbound calls, or (d) a combination thereof, wherein the calculation is repeated periodically, is based on the availability information of the one or more representatives, or a combination thereof. 6. A method of claim 5, further comprising: determining a wait threshold associated with the telephony system, the one or more representatives, or a combination thereof exceeds the wait duration of the one or more inbound calls placed in the at least one call queue; and applying a penalty to the one or more inbound calls based on the determination, wherein the penalty is an additional amount of wait time to be associated with the one or more inbound calls for calculating the value and the wait time decreases as the wait duration approaches the wait threshold. 7. A method of claim 5, further comprising: generating a report, a recommendation, or a combination thereof for indicating the assignment of the one or more inbound calls to one or more representatives, an effectiveness of assignment of the one or more inbound calls, or a combination thereof upon completion of the simulation, wherein the expected call handling response of the telephony system is based on the report, the recommendation, or a combination thereof. 8. 
A method of claim 1, further comprising: determining a time of receipt of a first of the one or more inbound calls to the telephony system; and initiating the execution of the simulation based on the determination, wherein the time of receipt is the same or different than an initial time of recording of the call production data, the representative production data, or a combination thereof. 9. A method of claim 8, further comprising: determining a time of receipt of a subsequent one of the one or more inbound calls; and forwarding the execution of the simulation based on the determination, wherein the forwarding results in a lessened duration of time of execution of the simulation. 10. A method of claim 9, further comprising: determining a proportional relationship between a time of receipt of the first inbound call of the portion of the one or more inbound calls, a time of receipt of the subsequent of the portion of the one or more inbound calls, or a combination thereof and schedule information, availability information, or a combination thereof associated with the one or more representatives, wherein the forwarding of the execution of the simulation is based on the determination. 11. 
An apparatus comprising a processor configured to: receive call production data, representative production data, or a combination thereof as generated by a telephony system during a production run for directing one or more inbound calls to one or more representatives; simulate an execution of the telephony system for processing at least a portion of the one or more inbound calls based, at least in part, on the call production data, the representative production data, call handling rules associated with the telephony system, one or more representative profiles, or a combination thereof; and generate an expected call handling response of the telephony system during a subsequent production run due to (a) one or more subsequent inbound calls, (b) a change associated with the one or more representatives, (c) a change associated with the call handling rules, or (d) a combination thereof based on execution of the simulation. 12. An apparatus of claim 11, wherein the processor is further configured to: determine schedule information, availability information, a skill level or a combination thereof associated with the one or more representatives based on the one or more representative profiles, wherein the change associated with the one or more representatives is based on a change of the schedule information, availability information, skill level, or a combination thereof. 13. An apparatus of claim 12, wherein the call handling rules specify how the one or more inbound calls are to be routed to the one or more representatives based on the determination. 14. 
An apparatus of claim 11, wherein the processor is further configured to: determine at least one service type, at least one skill type, or a combination thereof associated with each of the one or more inbound calls, the one or more representatives, or a combination thereof, wherein the simulated execution of the telephony system, the expected call handling response of the telephony system, or a combination thereof is based at least in part on the determination. 15. An apparatus of claim 14, wherein the processor is further configured to: determine the one or more inbound calls are placed in at least one call queue associated with the at least one service type, the at least one skill type, or a combination thereof based on the simulation; and calculate a value for indicating an effectiveness of assignment of the one or more inbound calls within the at least one call queue to at least one of the one or more representatives based, at least in part, on (a) a skill level associated with the one or more representatives, (b) another skill level associated with the one or more representatives, (c) a wait duration associated with the one or more inbound calls, or (d) a combination thereof, wherein the calculation is repeated periodically, is based on the availability information of the one or more representatives, or a combination thereof. 16. An apparatus of claim 15, wherein the processor is further configured to: determine a wait threshold associated with the telephony system, the one or more representatives, or a combination thereof exceeds the wait duration of the one or more inbound calls placed in the at least one call queue; and apply a penalty to the one or more inbound calls based on the determination, wherein the penalty is an additional amount of wait time to be associated with the one or more inbound calls for calculating the value and the wait time decreases as the wait duration approaches the wait threshold. 17. 
An apparatus of claim 15, wherein the processor is further configured to: generate a report, a recommendation, or a combination thereof for indicating the assignment of the one or more inbound calls to one or more representatives, an effectiveness of assignment of the one or more inbound calls, or a combination thereof upon completion of the simulation, wherein the expected call handling response of the telephony system is based on the report, the recommendation, or a combination thereof. 18. A system comprising: a telephony system configured for a production run; and a platform configured to receive call production data, representative production data, or a combination thereof as generated by the telephony system during the production run for directing one or more inbound calls to one or more representatives; to simulate an execution of the telephony system for processing at least a portion of the one or more inbound calls based, at least in part, on the call production data, the representative production data, call handling rules associated with the telephony system, one or more representative profiles, or a combination thereof; and to generate an expected call handling response of the telephony system during a subsequent production run due to (a) one or more subsequent inbound calls, (b) a change associated with the one or more representatives, (c) a change associated with the call handling rules, or (d) a combination thereof based on execution of the simulation. 19. A system of claim 18, wherein the platform is further configured to determine schedule information, availability information, a skill level or a combination thereof associated with the one or more representatives based on the one or more representative profiles; and wherein the change associated with the one or more representatives is based on a change of the schedule information, availability information, skill level, or a combination thereof. 20. 
A system of claim 19, wherein the call handling rules specify how the one or more inbound calls are to be routed to the one or more representatives based on the determination.
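Claims 6 and 16 above specify a queue-scoring penalty: extra wait time is attributed to a call while its real wait is still below the wait threshold, and the penalty shrinks as the wait approaches that threshold. A minimal sketch of that behavior follows; the linear penalty shape and the `scale` factor are assumptions, since the claims fix only the qualitative behavior:

```python
def effective_wait(wait: float, threshold: float, scale: float = 0.5) -> float:
    """Effective wait time used when scoring a queued call.

    While the call's wait is below the threshold, a penalty (an
    additional amount of wait time) is applied; the penalty shrinks
    to zero as the real wait approaches the threshold, as claims 6
    and 16 require. The linear shape and scale are illustrative.
    """
    penalty = scale * max(0.0, threshold - wait)
    return wait + penalty
```

For example, with a 60-second threshold, a call that has waited 10 seconds is scored as 35 seconds while one that has waited 50 seconds is scored as 55: the more recently queued call carries the larger penalty.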
abstract-claims: (concatenation of the Abstract and Claims texts above)
TechCenter: 2,600
Record 9,801 (index 9,801, level_0 9,801)
ApplicationNumber: 14,016,264
ArtUnit: 2,693
Abstract:
The liquid crystal display device includes a pixel portion including a plurality of pixels to which image signals are supplied; a driver circuit including a signal line driver circuit which selectively controls a signal line and a gate line driver circuit which selectively controls a gate line; a memory circuit which stores the image signals; a comparison circuit which compares the image signals stored in the memory circuit in the pixels and detects a difference; and a display control circuit which controls the driver circuit and reads the image signal in accordance with the difference. The display control circuit supplies the image signal only to the pixel where the difference is detected. The pixel includes a thin film transistor including a semiconductor layer including an oxide semiconductor.
1. (canceled) 2. A device comprising: a unit configured to decrease a refresh rate when a still image is displayed in a pixel portion compared to a refresh rate when a moving image is displayed in the pixel portion, wherein the pixel portion comprises a plurality of pixels each comprising a transistor whose channel formation region comprises an oxide semiconductor. 3. The device according to claim 2, wherein the oxide semiconductor is an In—Ga—Zn—O based oxide semiconductor. 4. The device according to claim 2, wherein the unit is configured to make the still image displayed without rewriting an image signal for 10 seconds or longer. 5. The device according to claim 2, wherein each of the plurality of pixels comprises a liquid crystal element. 6. The device according to claim 2, wherein the unit comprises: a memory circuit configured to store a first image signal and a second image signal; a comparison circuit configured to compare the first image signal and the second image signal stored in the memory circuit in the pixels and to detect a difference between the first image signal and the second image signal; and a display control circuit configured to control a driver circuit and to read the second image signal in accordance with the difference. 7. A device comprising: a unit configured to input an image signal to a first pixel when a display of the first pixel is changed between adjacent two frame periods and configured not to input an image signal to a second pixel when a display of the second pixel is not changed between the adjacent two frame periods, wherein each of the first pixel and the second pixel comprises a transistor whose channel formation region comprises an oxide semiconductor. 8. The device according to claim 7, wherein the oxide semiconductor is an In—Ga—Zn—O based oxide semiconductor. 9. The device according to claim 7, wherein the unit is configured to make a still image displayed without rewriting an image signal for 10 seconds or longer. 10. 
The device according to claim 7, wherein each of the first pixel and the second pixel comprises a liquid crystal element. 11. The device according to claim 7, wherein the unit comprises: a memory circuit configured to store a first image signal and a second image signal; a comparison circuit configured to compare the first image signal and the second image signal stored in the memory circuit in the pixels and to detect a difference between the first image signal and the second image signal; and a display control circuit configured to control a driver circuit and to read the second image signal in accordance with the difference. 12. A method comprising: decreasing a refresh rate when a still image is displayed in a pixel portion compared to a refresh rate when a moving image is displayed in the pixel portion, wherein the pixel portion comprises a plurality of pixels each comprising a transistor whose channel formation region comprises an oxide semiconductor. 13. The method according to claim 12, wherein the oxide semiconductor is an In—Ga—Zn—O based oxide semiconductor. 14. The method according to claim 12, wherein the still image is displayed without rewriting an image signal for 10 seconds or longer. 15. The method according to claim 12, wherein each of the plurality of pixels comprises a liquid crystal element. 16. The method according to claim 12, wherein the step of decreasing the refresh rate is performed by a unit comprising: a memory circuit configured to store a first image signal and a second image signal; a comparison circuit configured to compare the first image signal and the second image signal stored in the memory circuit in the pixels and to detect a difference between the first image signal and the second image signal; and a display control circuit configured to control a driver circuit and to read the second image signal in accordance with the difference.
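Claim 7 (and the abstract's display control circuit) describes supplying an image signal only to pixels whose display changes between two adjacent frames. A minimal sketch of that comparison, assuming frames are flat sequences of per-pixel signal values:

```python
def pixels_to_rewrite(prev_frame, next_frame):
    """Return indices of pixels whose image signal differs between two
    adjacent frames; only these pixels receive a new signal."""
    return [i for i, (a, b) in enumerate(zip(prev_frame, next_frame)) if a != b]

# A frame identical to its predecessor yields no writes at all, which
# is what allows the refresh rate to drop for still images (claim 2).
```

For example, `pixels_to_rewrite([1, 2, 3], [1, 9, 3])` returns `[1]`, so only the middle pixel is rewritten.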
abstract-claims: (concatenation of the Abstract and Claims texts above)
2,600
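The display-device record above (claims 7-16) describes comparing two stored image signals per pixel and rewriting only the pixels that changed, dropping the refresh rate when a still image is held. A minimal sketch of that comparison logic, with the two refresh rates and frame representation as assumptions not taken from the patent:

```python
# Hypothetical sketch of the frame-comparison logic in the claims above:
# store two successive frames, detect per-pixel differences, and rewrite
# only changed pixels; when nothing changes, the refresh rate drops.

MOVING_HZ = 60   # assumed rate while a moving image is displayed
STILL_HZ = 0.1   # assumed rate for a held still image (>= 10 s per rewrite)

def diff_pixels(prev_frame, next_frame):
    """Return the set of (row, col) positions whose value changed."""
    changed = set()
    for r, (prev_row, next_row) in enumerate(zip(prev_frame, next_frame)):
        for c, (p, n) in enumerate(zip(prev_row, next_row)):
            if p != n:
                changed.add((r, c))
    return changed

def refresh_decision(prev_frame, next_frame):
    """Pick which pixels to rewrite and which refresh rate to use."""
    changed = diff_pixels(prev_frame, next_frame)
    rate = MOVING_HZ if changed else STILL_HZ
    return changed, rate

still = [[1, 2], [3, 4]]
moving = [[1, 2], [3, 5]]
print(refresh_decision(still, still))    # no change: slow refresh
print(refresh_decision(still, moving))   # only pixel (1, 1) is rewritten
```

The oxide-semiconductor transistor in the claims is what makes the long hold time practical in hardware (very low off-state leakage); the sketch only models the control decision, not the device physics.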
9,802
9,802
15,305,271
2,672
A method for color mapping is disclosed based on obtaining a measurement of one or more characteristics of an imaging system. A set of color mappings are provided and the color mapping is selected based on the measurement. Each of the color mappings enables a mapping from a first color space to Neugebauer Primary area coverage vector space. A method for generating a color mapping is disclosed.
1. A method for generating a color mapping comprising: selecting a resource within an imaging system; modelling a limitation for the resource; determining a color mapping that incorporates the limitation, wherein the color mapping enables a mapping of color values from a first color space to one or more Neugebauer Primary area coverage values. 2. The method of claim 1, wherein each color mapping comprises a look-up-table. 3. The method of claim 1, wherein the imaging system is a color halftone printing system. 4. The method of claim 1, wherein: the resource comprises at least one of: one or more printer pens, one or more printer dies and one or more printer nozzles, and the limitation comprises one or more of a limitation on printing fluid usage and print nozzle usage. 5. The method of claim 1, wherein the first color space comprises a red, green, blue (RGB) color space. 6. A method for color mapping comprising: obtaining a measurement of one or more characteristics of an imaging system; selecting one or more color mappings based on the measurement, wherein each of the one or more color mappings enables a mapping of color values from a first color space to one or more Neugebauer Primary area coverage values. 7. The method of claim 6, wherein each color mapping comprises a look-up table. 8. The method of claim 6, wherein the measurement indicates a limitation on a resource for the imaging system. 9. The method of claim 8, wherein the resource comprises a printer pen and the limitation comprises one or more of a limitation on printing fluid usage and print nozzle usage. 10. The method of claim 6, wherein the first color space comprises a red, green, blue (RGB) color space. 11. 
The method of claim 6, wherein: the obtained measurement indicates a plurality of limitations on one or more resources for the imaging system, a different color mapping is selected for each limitation, and the set of different color mappings are combined to provide mapping of color values from a first color space to one or more Neugebauer Primary area coverage values. 12. The method of claim 11, wherein each color mapping comprises a look-up table with nodes defining color values, and wherein a weighting is applied to one or more nodes of each look-up table to combine the color mappings. 13. The method of claim 11, wherein the two or more color mappings are combined based on one or more levels of feedback for the imaging system. 14. The method of claim 11, comprising alerting a user of the imaging system to a limitation of a resource in the imaging system. 15. A machine-readable storage medium encoded with instructions for color mapping, the instructions executable by a processor of a system to cause the system to: obtain a measurement of one or more characteristics of an imaging system; and select one or more color mappings based on the measurement, wherein each of the one or more color mappings enables a mapping of color values from a first color space to one or more Neugebauer Primary area coverage values.
A method for color mapping is disclosed based on obtaining a measurement of one or more characteristics of an imaging system. A set of color mappings are provided and the color mapping is selected based on the measurement. Each of the color mappings enables a mapping from a first color space to Neugebauer Primary area coverage vector space. A method for generating a color mapping is disclosed.1. A method for generating a color mapping comprising: selecting a resource within an imaging system; modelling a limitation for the resource; determining a color mapping that incorporates the limitation, wherein the color mapping enables a mapping of color values from a first color space to one or more Neugebauer Primary area coverage values. 2. The method of claim 1, wherein each color mapping comprises a look-up-table. 3. The method of claim 1, wherein the imaging system is a color halftone printing system. 4. The method of claim 1, wherein: the resource comprises at least one of: one or more printer pens, one or more printer dies and one or more printer nozzles, and the limitation comprises one or more of a limitation on printing fluid usage and print nozzle usage. 5. The method of claim 1, wherein the first color space comprises a red, green, blue (RGB) color space. 6. A method for color mapping comprising: obtaining a measurement of one or more characteristics of an imaging system; selecting one or more color mappings based on the measurement, wherein each of the one or more color mappings enables a mapping of color values from a first color space to one or more Neugebauer Primary area coverage values. 7. The method of claim 6, wherein each color mapping comprises a look-up table. 8. The method of claim 6, wherein the measurement indicates a limitation on a resource for the imaging system. 9. The method of claim 8, wherein the resource comprises a printer pen and the limitation comprises one or more of a limitation on printing fluid usage and print nozzle usage. 10. 
The method of claim 6, wherein the first color space comprises a red, green, blue (RGB) color space. 11. The method of claim 6, wherein: the obtained measurement indicates a plurality of limitations on one or more resources for the imaging system, a different color mapping is selected for each limitation, and the set of different color mappings are combined to provide mapping of color values from a first color space to one or more Neugebauer Primary area coverage values. 12. The method of claim 11, wherein each color mapping comprises a look-up table with nodes defining color values, and wherein a weighting is applied to one or more nodes of each look-up table to combine the color mappings. 13. The method of claim 11, wherein the two or more color mappings are combined based on one or more levels of feedback for the imaging system. 14. The method of claim 11, comprising alerting a user of the imaging system to a limitation of a resource in the imaging system. 15. A machine-readable storage medium encoded with instructions for color mapping, the instructions executable by a processor of a system to cause the system to: obtain a measurement of one or more characteristics of an imaging system; and select one or more color mappings based on the measurement, wherein each of the one or more color mappings enables a mapping of color values from a first color space to one or more Neugebauer Primary area coverage values.
2,600
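Claims 11-12 of the color-mapping record above describe selecting one look-up table per resource limitation and combining the set by weighting nodes. A rough sketch of that blend, assuming each mapping is a dict from an RGB node to a Neugebauer Primary area-coverage (NPac) vector; the node values, primary names, and weights are illustrative, not from the patent:

```python
# Hedged sketch of combining per-limitation color mappings: each mapping is
# a look-up table from an RGB node to an NPac vector, and the mappings are
# blended by applying a weight to every node.

def combine_mappings(luts, weights):
    """Blend several RGB->NPac look-up tables node by node."""
    combined = {}
    for node in luts[0]:
        npac = {}
        for lut, w in zip(luts, weights):
            for primary, coverage in lut[node].items():
                npac[primary] = npac.get(primary, 0.0) + w * coverage
        combined[node] = npac
    return combined

# Two hypothetical mappings for one RGB node, e.g. one limiting ink usage
# ("W" = blank substrate area) and one using the full gamut.
lut_ink_saving = {(255, 0, 0): {"M": 0.4, "Y": 0.4, "W": 0.2}}
lut_full_gamut = {(255, 0, 0): {"M": 0.6, "Y": 0.4, "W": 0.0}}

blended = combine_mappings([lut_ink_saving, lut_full_gamut], [0.5, 0.5])
print(blended[(255, 0, 0)])
```

A production mapping would also interpolate between LUT nodes for colors that fall off-grid; this sketch only shows the node-weighting step the claims name.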
9,803
9,803
12,200,905
2,658
A method and system for ordering content includes a voice menu system and a phone device communicating a phone signal to the voice menu system. The voice menu system determines the phone number associated with the phone device through the phone signal and generates a voice prompt for recording a content selection from the voice menu system. The phone device selects a recording content option. The voice menu system generates prompts for determining a content title. The phone device selects a content title by communicating a selection signal to the voice menu system. The voice menu system enables a content recording at a recording device in response to the selection signal.
1. A method comprising: communicating between a voice device and a voice menu system using a phone signal; determining the phone number associated with the voice device from the phone signal; generating a voice prompt for recording a content selection from the voice menu system; selecting a recording content option; generating prompts for determining a content title; selecting a content title by communicating a selection signal from the voice device to the voice menu system; and enabling a content recording at a recording device in response to the selection signal. 2. A method as recited in claim 1 wherein communicating between a voice device and a voice menu system comprises communicating between the voice device and a voice recognition phone system. 3. A method as recited in claim 1 wherein enabling a content recording comprises communicating a control word to a user device for recording the content title; tuning the user device in response to the control word; receiving the content at the user device; storing the content corresponding to the content title in the user device. 4. A method as recited in claim 3 wherein communicating the control word comprises communicating the control word through a satellite. 5. A method as recited in claim 3 wherein communicating the control word comprises communicating the control word through a broadband communication system. 6. A method as recited in claim 3 wherein receiving the content comprises receiving the content through a satellite. 7. A method as recited in claim 3 wherein receiving the content comprises receiving the content through a broadband communication system. 8. A method as recited in claim 3 wherein receiving the content comprises receiving the content through a terrestrial system. 9. A method as recited in claim 3 wherein communicating a control word comprises communicating the control word through a satellite and wherein receiving the content comprises receiving the content through a satellite. 10. 
A method as recited in claim 3 wherein communicating a control word comprises communicating the control word through a satellite and wherein receiving the content comprises receiving the content through a broadband communication system. 11. A method as recited in claim 1 wherein selecting a content title comprises selecting the content title from a search result. 12. A method as recited in claim 11 wherein prior to selecting a content title from a search result, performing a title search and generating the search result from the title search. 13. A method as recited in claim 11 wherein prior to selecting a content title from a search result, performing a channel name search and generating the search result from the channel name search. 14. A method as recited in claim 11 wherein prior to selecting a content title from a search result, performing a channel number search and generating the search result from the channel number search. 15. A method as recited in claim 1 wherein communicating between a voice device and a voice menu system comprises communicating between a telephone and the voice menu system. 16. A system comprising: a voice menu system; a voice device communicating a phone signal to the voice menu system; said voice menu system determining the phone number associated with the voice device through the phone signal and generating a voice prompt for recording a content selection from the voice menu system; said voice device selecting a recording content option; said voice menu system generating prompts for determining a content title; said voice device selecting a content title from the prompts by communicating a selection signal to the voice menu system; and said voice menu system enabling a content recording at a recording device in response to the selection signal. 17. 
A system as recited in claim 16 further comprising a content processing system in communication with the voice menu system, said content processing system communicating a control word for enabling content recording at the recording device. 18. A system as recited in claim 17 further comprising a satellite in communication with the content processing system communicating the control word therethrough. 19. A system as recited in claim 17 further comprising a broadband communication system in communication with the content processing system communicating the control word therethrough. 20. A system as recited in claim 16 wherein the voice device comprises a mobile device. 21. A system as recited in claim 20 wherein the voice menu system comprises a voice recognition system. 22. A system as recited in claim 16 further comprising a content processing system in communication with the voice menu system, said content processing system communicating content data including content titles to the content voice menu system. 23. A system as recited in claim 16 wherein the voice device comprises a telephone. 24. A method comprising: selecting a program to record; determining whether the program is part of a series; when the program is part of a series, generating a selector for selecting the series; selecting the series; and recording the series on a recording device. 25. A method as recited in claim 24 wherein the selector comprises a voice selector. 26. A method as recited in claim 24 wherein the selector comprises a textual select box. 27. A method as recited in claim 24 further comprising displaying the selector on a website. 28. A method as recited in claim 24 further comprising displaying the selector on a mobile phone.
A method and system for ordering content includes a voice menu system and a phone device communicating a phone signal to the voice menu system. The voice menu system determines the phone number associated with the phone device through the phone signal and generates a voice prompt for recording a content selection from the voice menu system. The phone device selects a recording content option. The voice menu system generates prompts for determining a content title. The phone device selects a content title by communicating a selection signal to the voice menu system. The voice menu system enables a content recording at a recording device in response to the selection signal.1. A method comprising: communicating between a voice device and a voice menu system using a phone signal; determining the phone number associated with the voice device from the phone signal; generating a voice prompt for recording a content selection from the voice menu system; selecting a recording content option; generating prompts for determining a content title; selecting a content title by communicating a selection signal from the voice device to the voice menu system; and enabling a content recording at a recording device in response to the selection signal. 2. A method as recited in claim 1 wherein communicating between a voice device and a voice menu system comprises communicating between the voice device and a voice recognition phone system. 3. A method as recited in claim 1 wherein enabling a content recording comprises communicating a control word to a user device for recording the content title; tuning the user device in response to the control word; receiving the content at the user device; storing the content corresponding to the content title in the user device. 4. A method as recited in claim 3 wherein communicating the control word comprises communicating the control word through a satellite. 5. 
A method as recited in claim 3 wherein communicating the control word comprises communicating the control word through a broadband communication system. 6. A method as recited in claim 3 wherein receiving the content comprises receiving the content through a satellite. 7. A method as recited in claim 3 wherein receiving the content comprises receiving the content through a broadband communication system. 8. A method as recited in claim 3 wherein receiving the content comprises receiving the content through a terrestrial system. 9. A method as recited in claim 3 wherein communicating a control word comprises communicating the control word through a satellite and wherein receiving the content comprises receiving the content through a satellite. 10. A method as recited in claim 3 wherein communicating a control word comprises communicating the control word through a satellite and wherein receiving the content comprises receiving the content through a broadband communication system. 11. A method as recited in claim 1 wherein selecting a content title comprises selecting the content title from a search result. 12. A method as recited in claim 11 wherein prior to selecting a content title from a search result, performing a title search and generating the search result from the title search. 13. A method as recited in claim 11 wherein prior to selecting a content title from a search result, performing a channel name search and generating the search result from the channel name search. 14. A method as recited in claim 11 wherein prior to selecting a content title from a search result, performing a channel number search and generating the search result from the channel number search. 15. A method as recited in claim 1 wherein communicating between a voice device and a voice menu system comprises communicating between a telephone and the voice menu system. 16. 
A system comprising: a voice menu system; a voice device communicating a phone signal to the voice menu system; said voice menu system determining the phone number associated with the voice device through the phone signal and generating a voice prompt for recording a content selection from the voice menu system; said voice device selecting a recording content option; said voice menu system generating prompts for determining a content title; said voice device selecting a content title from the prompts by communicating a selection signal to the voice menu system; and said voice menu system enabling a content recording at a recording device in response to the selection signal. 17. A system as recited in claim 16 further comprising a content processing system in communication with the voice menu system, said content processing system communicating a control word for enabling content recording at the recording device. 18. A system as recited in claim 17 further comprising a satellite in communication with the content processing system communicating the control word therethrough. 19. A system as recited in claim 17 further comprising a broadband communication system in communication with the content processing system communicating the control word therethrough. 20. A system as recited in claim 16 wherein the voice device comprises a mobile device. 21. A system as recited in claim 20 wherein the voice menu system comprises a voice recognition system. 22. A system as recited in claim 16 further comprising a content processing system in communication with the voice menu system, said content processing system communicating content data including content titles to the content voice menu system. 23. A system as recited in claim 16 wherein the voice device comprises a telephone. 24. 
A method comprising: selecting a program to record; determining whether the program is part of a series; when the program is part of a series, generating a selector for selecting the series; selecting the series; and recording the series on a recording device. 25. A method as recited in claim 24 wherein the selector comprises a voice selector. 26. A method as recited in claim 24 wherein the selector comprises a textual select box. 27. A method as recited in claim 24 further comprising displaying the selector on a website. 28. A method as recited in claim 24 further comprising displaying the selector on a mobile phone.
2,600
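The voice-menu ordering flow in claims 1-16 above (identify the caller by phone number, prompt for a recording option, resolve a title/channel search, then enable the recording on the caller's device) can be sketched as a small lookup pipeline. The account table, guide data, and return strings here are all invented for illustration:

```python
# Rough sketch of the voice-menu content-ordering flow: the phone number
# from the call signal selects the subscriber's recording device, a search
# produces candidate titles, and a confirmed choice enables the recording.

ACCOUNTS = {"555-0100": "receiver-42"}           # phone number -> recording device
GUIDE = {"title": {"news": ["Evening News"]}}    # search type -> query -> titles

def handle_call(phone_number, search_type, query, choice):
    device = ACCOUNTS.get(phone_number)
    if device is None:
        return "prompt: account not found"
    results = GUIDE.get(search_type, {}).get(query, [])
    if choice not in results:
        return "prompt: no matching title"
    # Enabling the recording stands in for sending the control word
    # (via satellite or broadband, per the dependent claims) to the device.
    return f"recording '{choice}' on {device}"

print(handle_call("555-0100", "title", "news", "Evening News"))
```

The claims also allow channel-name and channel-number searches; those would just be additional keys in the `GUIDE` table under this sketch.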
9,804
9,804
15,427,326
2,647
Apparatus for constructing a digital telephone message including a message defining unit, configured for allowing a sender to define a message for sending to a recipient, and a response defining unit, configured for allowing the sender to predefine a recipient response, and to include the predefined recipient response in the message for activation at the recipient. Apparatus for receiving a digital telephone message, the message including an activatable sender-defined response, the apparatus including a receiving unit for receiving the message, a notification unit for notifying a recipient of the arrival of the message, and a response activation unit for displaying the sender-defined response, and associating the sender-defined response with a user action for providing user input to send the response. Related apparatus and methods are also described.
1-74. (canceled) 75. A method for authorization-based digital messaging, comprising: constructing a digital message, the digital message including a script code defining at least one required authentication; and sending the constructed digital message to a recipient device, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 76. The method of claim 75, further comprising: determining a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and displaying the determined status. 77. The method of claim 76, further comprising: determining, based on the determined status, whether a response has been received from the recipient device within a predetermined period of time. 78. The method of claim 75, wherein the digital message further includes at least one predetermined recipient response for activation at the recipient device. 79. The method of claim 78, further comprising: receiving, from the recipient device, a response including at least one activated response of the at least one predetermined recipient response, when the at least one activated response is activated by the user of the recipient device. 80. The method of claim 75, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification. 81. 
A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process for displaying response information for a digital group message, the process comprising: constructing a digital message, the digital message including a script code defining at least one required authentication; and sending the constructed digital message to a recipient device, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 82. A user terminal for authorization-based digital messaging, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the user terminal to: construct a digital message, the digital message including a script code defining at least one required authentication; and send the constructed digital message to a recipient device, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 83. The user terminal of claim 82, wherein the user terminal is further configured to: determine a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and display the determined status. 84. 
The user terminal of claim 82, wherein the digital message further includes at least one predetermined recipient response for activation at the recipient device. 85. The user terminal of claim 84, wherein the user terminal is further configured to: receive, from the recipient device, a response including at least one activated response of the at least one predetermined recipient response, when the at least one activated response is activated by the user of the recipient device. 86. The user terminal of claim 82, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification. 87. A system for receiving an authorization-based digital message, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: receive the digital message, the digital message including a script code defining at least one required authentication; and execute the script code, wherein the script code, when executed at the system, configures the system to: identify at least one input of a user of the system; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 88. The system of claim 87, wherein the digital message further includes at least one predetermined recipient response for activation at the apparatus. 89. The system of claim 87, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification. 90. 
A method for receiving an authorization-based digital message, comprising: receiving, at a recipient device, a digital message, the digital message including a script code defining at least one required authentication; and executing, at the recipient device, the script code, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the apparatus; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 91. The method of claim 90, wherein the digital message further includes at least one predetermined recipient response for activation at the recipient device. 92. The method of claim 90, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification.
Apparatus for constructing a digital telephone message including a message defining unit, configured for allowing a sender to define a message for sending to a recipient, and a response defining unit, configured for allowing the sender to predefine a recipient response, and to include the predefined recipient response in the message for activation at the recipient. Apparatus for receiving a digital telephone message, the message including an activatable sender-defined response, the apparatus including a receiving unit for receiving the message, a notification unit for notifying a recipient of the arrival of the message, and a response activation unit for displaying the sender-defined response, and associating the sender-defined response with a user action for providing user input to send the response. Related apparatus and methods are also described.1-74. (canceled) 75. A method for authorization-based digital messaging, comprising: constructing a digital message, the digital message including a script code defining at least one required authentication; and sending the constructed digital message to a recipient device, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 76. The method of claim 75, further comprising: determining a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and displaying the determined status. 77. 
The method of claim 76, further comprising: determining, based on the determined status, whether a response has been received from the recipient device within a predetermined period of time. 78. The method of claim 75, wherein the digital message further includes at least one predetermined recipient response for activation at the recipient device. 79. The method of claim 78, further comprising: receiving, from the recipient device, a response including at least one activated response of the at least one predetermined recipient response, when the at least one activated response is activated by the user of the recipient device. 80. The method of claim 75, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification. 81. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process for displaying response information for a digital group message, the process comprising: constructing a digital message, the digital message including a script code defining at least one required authentication; and sending the constructed digital message to a recipient device, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 82. 
A user terminal for authorization-based digital messaging, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the user terminal to: construct a digital message, the digital message including a script code defining at least one required authentication; and send the constructed digital message to a recipient device, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 83. The user terminal of claim 82, wherein the user terminal is further configured to: determine a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and display the determined status. 84. The user terminal of claim 82, wherein the digital message further includes at least one predetermined recipient response for activation at the recipient device. 85. The user terminal of claim 84, wherein the user terminal is further configured to: receive, from the recipient device, a response including at least one activated response of the at least one predetermined recipient response, when the at least one activated response is activated by the user of the recipient device. 86. The user terminal of claim 82, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification. 87. 
A system for receiving an authorization-based digital message, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: receive the digital message, the digital message including a script code defining at least one required authentication; and execute the script code, wherein the script code, when executed at the system, configures the system to: identify at least one input of a user of the system; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 88. The system of claim 87, wherein the digital message further includes at least one predetermined recipient response for activation at the system. 89. The system of claim 87, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification. 90. A method for receiving an authorization-based digital message, comprising: receiving, at a recipient device, a digital message, the digital message including a script code defining at least one required authentication; and executing, at the recipient device, the script code, wherein the script code, when executed at the recipient device, configures the recipient device to: identify at least one input of a user of the recipient device; determine, based on the identified at least one input, whether the at least one required authentication has been provided; and display the digital message, when it is determined that the at least one required authentication has been provided. 91. The method of claim 90, wherein the digital message further includes at least one predetermined recipient response for activation at the recipient device. 92. 
The method of claim 90, wherein the at least one required authentication includes at least one of: a personal identification number, and a biometric identification.
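The recipient-side behavior claimed above can be sketched in a few lines: a message carries a script defining at least one required authentication (e.g. a PIN), and the body is displayed only once every required authentication has been provided. This is a minimal illustrative sketch; the function name, message layout, and PIN value are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the claimed recipient-device logic: the digital
# message defines required authentications, and the body is withheld until
# the user's inputs satisfy all of them.

def display_if_authenticated(message, user_inputs):
    """Return the message body if every required authentication is met."""
    required = message["required_auth"]        # e.g. {"pin": "1234"}
    for kind, expected in required.items():
        if user_inputs.get(kind) != expected:
            return None                        # withhold the message
    return message["body"]

# Assumed example message; a real script code would also cover biometrics.
msg = {"required_auth": {"pin": "1234"}, "body": "Quarterly report attached."}
shown = display_if_authenticated(msg, {"pin": "1234"})   # body returned
denied = display_if_authenticated(msg, {"pin": "0000"})  # None: withheld
```

A biometric check (claim 80) would slot in as another entry in `required_auth`, compared by a matcher rather than string equality.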
2,600
9,805
9,805
14,054,531
2,628
An operation and observation system of a technical plant and/or a technical process and associated method are provided for operating components and for displaying measurement readings, process variables and/or status messages of the components of the plant and/or process. The system includes at least one large screen for displaying the components of the technical process and/or technical plant, and a plurality of operator workstations. A movement detection and movement control component is installed on the screen and on the operator workstations, which allows an operation and/or polling of the components, the measurement readings, the process variables and/or the status messages of the components of the technical plant or of the technical process by means of a body movement of an operator.
1. An operating and observation system for at least one of a technical installation and a technical process for operating components and for presenting at least one of measured values, process variables and state messages from the components of the at least one of the installation and the process, the system comprising: at least one large display screen configured to present the components of the at least one of the installation and the process; and a plurality of user workstations, wherein the display screen and the user workstations respectively have therein installed a controller configured to recognize and control motion, wherein the controller is configured to allow at least one of operation and polling of at least one of the components, the measured values, the process variables and the state messages from the components of the at least one of the installation and the process by means of a body movement from a user. 2. The operating and observation system as claimed in claim 1, wherein the screen and user workstations are configured to present, as objects from the components, at least one of selected components, measured values, process variables and state messages, and wherein the controller is configured to operate the presented objects by gestures and movement sequences from the user. 3. The operating and observation system as claimed in claim 1, comprising: at least one recording unit configured to record previously defined movements and gestures from a user, convert the recorded movements and gestures into a motion signal, and provide the motion signal for the controller installed on the screen and the user workstations for further processing. 4. 
The operating and observation system as claimed in claim 1, wherein the controller comprises: at least one of a motion module and a face recognition module configured to provide an identification for a person who is in front of the screen and an identification of the person is usable as a basis for presenting views tailored specifically to the identified person. 5. The operating and observation system as claimed in claim 4, wherein the views that have sensitive information are automatically closed if there is no identification for the relevant person. 6. The operating and observation system as claimed in claim 1, comprising: at least one recording unit configured to record spoken words and interact with a processing unit connected thereto that is configured to produce from the recorded spoken words an execution signal to execute at least one of the operation and polling of the components, the measured values, the process variables, and the state messages from the components of the at least one of the technical installation and the process. 7. The operating and observation system as claimed in claim 6, wherein the screen and the user workstations comprise a respective processor configured to execute voice recognition software tangibly recorded on a non-transitory computer-readable recording medium to execute the at least one recording unit. 8. The operating and observation system as claimed in claim 7, wherein at least one of selected components, measured values, process variables and state messages from the components are presented as objects to be operated by spoken commands. 9. The operating and observation system as claimed in claim 2, comprising: at least one recording unit configured to record previously defined movements and gestures from a user, convert the recorded movements and gestures into a motion signal, and provide the motion signal for the controller installed on the screen and the user workstations for further processing. 10. 
The operating and observation system as claimed in claim 9, wherein the controller comprises: at least one of a motion module and a face recognition module configured to provide an identification for a person who is in front of the screen and an identification of the person is usable as a basis for presenting views tailored specifically to the identified person. 11. The operating and observation system as claimed in claim 10, wherein the views that have sensitive information are automatically closed if there is no identification for the relevant person. 12. The operating and observation system as claimed in claim 10, comprising: at least one recording unit configured to record spoken words and interact with a processing unit connected thereto that is configured to produce from the recorded spoken words an execution signal to execute at least one of the operation and polling of the components, the measured values, the process variables, and the state messages from the components of the at least one of the technical installation and the process. 13. The operating and observation system as claimed in claim 12, wherein the screen and the user workstations comprise a respective processor configured to execute voice recognition software tangibly recorded on a non-transitory computer-readable recording medium to execute the at least one recording unit. 14. The operating and observation system as claimed in claim 13, wherein at least one of selected components, measured values, process variables and state messages from the components are presented as objects to be operated by spoken commands. 15. 
A method for operating components and for presenting at least one of measured values, process variables and state messages from the components of at least one of a technical installation and a technical process, the method comprising: presenting, on at least one of a large display screen and a multiplicity of user workstations, the components, measured values, process variables and state messages from the components of the at least one of the installation and the process; and operating, on at least one of the screen and the user workstations, a respective controller to execute at least one of operation and polling of at least one of the components, the measured values, the process variables and the state messages from the components of the at least one of the installation and the process by means of a body movement from a user. 16. The method as claimed in claim 15, comprising: detecting a distance of a person situated closest to the large screen; and optimizing the display of the large screen based on a magnitude of the detected distance of the person situated closest to the large screen. 17. The method as claimed in claim 16, comprising: generating, by the controller, a signal when no movement is registered over a previously stipulated period. 18. The method as claimed in claim 15, comprising: generating, by the controller, a signal when no movement is registered over a previously stipulated period.
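The gesture-control claims above (recognized body movements operate presented objects; views with sensitive information auto-close when no person is identified, claim 5) can be sketched as a small dispatcher. The gesture names, view names, and action mapping below are assumptions for illustration only.

```python
# Minimal sketch (assumed names) of the claimed controller: a recognized
# gesture is converted into an operation on the presented objects, and
# sensitive views are closed automatically when no person is identified.

SENSITIVE_VIEWS = {"alarm_log"}  # assumed example of a sensitive view

def handle_motion(gesture, identified_person, open_views):
    """Dispatch a recognized gesture; close sensitive views if unidentified."""
    if identified_person is None:
        open_views -= SENSITIVE_VIEWS          # auto-close, per claim 5
    actions = {                                # assumed gesture-to-action map
        "swipe_left": "next_view",
        "push": "select_component",
        "raise_hand": "poll_measured_values",
    }
    return actions.get(gesture, "ignore"), open_views

action, views = handle_motion("raise_hand", None, {"overview", "alarm_log"})
```

A face-recognition module (claim 4) would supply `identified_person`, and the idle-timeout signal of claims 17 and 18 would be a timer reset on every recognized gesture.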
An operation and observation system of a technical plant and/or a technical process and associated method are provided for operating components and for displaying measurement readings, process variables and/or status messages of the components of the plant and/or process. The system includes at least one large screen for displaying the components of the technical process and/or technical plant, and a plurality of operator workstations. A movement detection and movement control component is installed on the screen and on the operator workstations, which allows an operation and/or polling of the components, the measurement readings, the process variables and/or the status messages of the components of the technical plant or of the technical process by means of a body movement of an operator.1. An operating and observation system for at least one of a technical installation and a technical process for operating components and for presenting at least one of measured values, process variables and state messages from the components of the at least one of the installation and the process, the system comprising: at least one large display screen configured to present the components of the at least one of the installation and the process; and a plurality of user workstations, wherein the display screen and the user workstations respectively have therein installed a controller configured to recognize and control motion, wherein the controller is configured to allow at least one of operation and polling of at least one of the components, the measured values, the process variables and the state messages from the components of the at least one of the installation and the process by means of a body movement from a user. 2. 
The operating and observation system as claimed in claim 1, wherein the screen and user workstations are configured to present, as objects from the components, at least one of selected components, measured values, process variables and state messages, and wherein the controller is configured to operate the presented objects by gestures and movement sequences from the user. 3. The operating and observation system as claimed in claim 1, comprising: at least one recording unit configured to record previously defined movements and gestures from a user, convert the recorded movements and gestures into a motion signal, and provide the motion signal for the controller installed on the screen and the user workstations for further processing. 4. The operating and observation system as claimed in claim 1, wherein the controller comprises: at least one of a motion module and a face recognition module configured to provide an identification for a person who is in front of the screen and an identification of the person is usable as a basis for presenting views tailored specifically to the identified person. 5. The operating and observation system as claimed in claim 4, wherein the views that have sensitive information are automatically closed if there is no identification for the relevant person. 6. The operating and observation system as claimed in claim 1, comprising: at least one recording unit configured to record spoken words and interact with a processing unit connected thereto that is configured to produce from the recorded spoken words an execution signal to execute at least one of the operation and polling of the components, the measured values, the process variables, and the state messages from the components of the at least one of the technical installation and the process. 7. 
The operating and observation system as claimed in claim 6, wherein the screen and the user workstations comprise a respective processor configured to execute voice recognition software tangibly recorded on a non-transitory computer-readable recording medium to execute the at least one recording unit. 8. The operating and observation system as claimed in claim 7, wherein at least one of selected components, measured values, process variables and state messages from the components are presented as objects to be operated by spoken commands. 9. The operating and observation system as claimed in claim 2, comprising: at least one recording unit configured to record previously defined movements and gestures from a user, convert the recorded movements and gestures into a motion signal, and provide the motion signal for the controller installed on the screen and the user workstations for further processing. 10. The operating and observation system as claimed in claim 9, wherein the controller comprises: at least one of a motion module and a face recognition module configured to provide an identification for a person who is in front of the screen and an identification of the person is usable as a basis for presenting views tailored specifically to the identified person. 11. The operating and observation system as claimed in claim 10, wherein the views that have sensitive information are automatically closed if there is no identification for the relevant person. 12. The operating and observation system as claimed in claim 10, comprising: at least one recording unit configured to record spoken words and interact with a processing unit connected thereto that is configured to produce from the recorded spoken words an execution signal to execute at least one of the operation and polling of the components, the measured values, the process variables, and the state messages from the components of the at least one of the technical installation and the process. 13. 
The operating and observation system as claimed in claim 12, wherein the screen and the user workstations comprise a respective processor configured to execute voice recognition software tangibly recorded on a non-transitory computer-readable recording medium to execute the at least one recording unit. 14. The operating and observation system as claimed in claim 13, wherein at least one of selected components, measured values, process variables and state messages from the components are presented as objects to be operated by spoken commands. 15. A method for operating components and for presenting at least one of measured values, process variables and state messages from the components of at least one of a technical installation and a technical process, the method comprising: presenting, on at least one of a large display screen and a multiplicity of user workstations, the components, measured values, process variables and state messages from the components of the at least one of the installation and the process; and operating, on at least one of the screen and the user workstations, a respective controller to execute at least one of operation and polling of at least one of the components, the measured values, the process variables and the state messages from the components of the at least one of the installation and the process by means of a body movement from a user. 16. The method as claimed in claim 15, comprising: detecting a distance of a person situated closest to the large screen; and optimizing the display of the large screen based on a magnitude of the detected distance of the person situated closest to the large screen. 17. The method as claimed in claim 16, comprising: generating, by the controller, a signal when no movement is registered over a previously stipulated period. 18. The method as claimed in claim 15, comprising: generating, by the controller, a signal when no movement is registered over a previously stipulated period.
2,600
9,806
9,806
14,574,802
2,689
A sensor system includes a sensor including a sensing unit structured to sense a condition, a wireless transmitter structured to output a wireless signal in response to the sensing unit sensing the condition, and a battery structured to provide power to operate the sensing unit and the wireless transmitter. The sensor system further includes a control unit including a wireless receiver structured to receive the wireless signal from the sensor. The control unit is structured to electrically connect a power source and an electric device in response to receiving the wireless signal from the sensor.
1. A sensor system comprising: a sensor including: a sensing unit structured to sense a condition; a wireless transmitter structured to output a wireless signal in response to the sensing unit sensing the condition; and a battery structured to provide power to operate the sensing unit and the wireless transmitter; and a control unit including a wireless receiver structured to receive the wireless signal from the sensor, wherein the control unit is structured to electrically connect a power source and an electric device in response to receiving the wireless signal from the sensor. 2. The sensor system of claim 1, wherein the condition is motion in a room. 3. The sensor system of claim 1, wherein the electric device is a light. 4. The sensor system of claim 1, wherein the control unit further includes switching circuitry structured to electrically connect and electrically disconnect the power source and the electric device. 5. The sensor system of claim 4, wherein the switching circuitry includes at least one of a transistor and an electrically controlled relay. 6. The sensor system of claim 4, wherein the switching circuitry is structured to electrically disconnect the power source from the electric device in response to a predetermined period of time passing without the control unit receiving the wireless signal from the sensor. 7. The sensor system of claim 1, wherein the control unit is structured to electrically connect the power source to a plurality of electric devices in response to receiving the wireless signal from the sensor. 8. The sensor system of claim 1, further comprising: a plurality of control units each including a wireless receiver structured to receive the wireless signal from the sensor, wherein each of the plurality of control units is structured to electrically connect a corresponding power source to a corresponding electric device in response to receiving the wireless signal from the sensor. 9. 
The sensor system of claim 1, wherein the sensor is an occupancy sensor. 10. The sensor system of claim 1, wherein the sensor is one of a temperature sensor, a light level sensor, and a moisture/humidity sensor. 11. The sensor system of claim 1, wherein the electric device is one of a radio and an air conditioner. 12. The sensor system of claim 1, wherein the sensor further includes a solar unit structured to harvest solar power and to provide the harvested solar power to the battery. 13. The sensor system of claim 1, wherein the wireless transmitter is structured to use a wireless communication protocol selected from Bluetooth, Wi-Fi, and Z-Wave. 14. The sensor system of claim 1, wherein the wireless signal is configured to carry information. 15. The sensor system of claim 14, wherein the wireless signal is configured to carry temperature information.
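The control-unit behavior of claims 1 and 6 (connect power on a received wireless signal; disconnect after a predetermined period with no further signal) can be sketched as a tiny state machine. The class name, method names, and 30-second timeout are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch of the claimed control unit: the switching circuitry
# closes when a wireless signal arrives from the sensor, and opens again
# after a predetermined period passes with no further signal.

TIMEOUT = 30.0  # seconds without a signal before disconnecting (assumed)

class ControlUnit:
    def __init__(self):
        self.connected = False      # state of the switching circuitry
        self.last_signal = None

    def on_wireless_signal(self, now):
        self.connected = True       # connect power source to electric device
        self.last_signal = now

    def tick(self, now):
        if self.connected and now - self.last_signal >= TIMEOUT:
            self.connected = False  # disconnect after the timeout (claim 6)

unit = ControlUnit()
unit.on_wireless_signal(now=0.0)
unit.tick(now=10.0)   # within the timeout: stays connected
unit.tick(now=40.0)   # timeout elapsed: disconnected
```

The multi-control-unit arrangement of claim 8 amounts to instantiating one such state machine per power source, all listening for the same sensor signal.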
A sensor system includes a sensor including a sensing unit structured to sense a condition, a wireless transmitter structured to output a wireless signal in response to the sensing unit sensing the condition, and a battery structured to provide power to operate the sensing unit and the wireless transmitter. The sensor system further includes a control unit including a wireless receiver structured to receive the wireless signal from the sensor. The control unit is structured to electrically connect a power source and an electric device in response to receiving the wireless signal from the sensor.1. A sensor system comprising: a sensor including: a sensing unit structured to sense a condition; a wireless transmitter structured to output a wireless signal in response to the sensing unit sensing the condition; and a battery structured to provide power to operate the sensing unit and the wireless transmitter; and a control unit including a wireless receiver structured to receive the wireless signal from the sensor, wherein the control unit is structured to electrically connect a power source and an electric device in response to receiving the wireless signal from the sensor. 2. The sensor system of claim 1, wherein the condition is motion in a room. 3. The sensor system of claim 1, wherein the electric device is a light. 4. The sensor system of claim 1, wherein the control unit further includes switching circuitry structured to electrically connect and electrically disconnect the power source and the electric device. 5. The sensor system of claim 4, wherein the switching circuitry includes at least one of a transistor and an electrically controlled relay. 6. The sensor system of claim 4, wherein the switching circuitry is structured to electrically disconnect the power source from the electric device in response to a predetermined period of time passing without the control unit receiving the wireless signal from the sensor. 7. 
The sensor system of claim 1, wherein the control unit is structured to electrically connect the power source to a plurality of electric devices in response to receiving the wireless signal from the sensor. 8. The sensor system of claim 1, further comprising: a plurality of control units each including a wireless receiver structured to receive the wireless signal from the sensor, wherein each of the plurality of control units is structured to electrically connect a corresponding power source to a corresponding electric device in response to receiving the wireless signal from the sensor. 9. The sensor system of claim 1, wherein the sensor is an occupancy sensor. 10. The sensor system of claim 1, wherein the sensor is one of a temperature sensor, a light level sensor, and a moisture/humidity sensor. 11. The sensor system of claim 1, wherein the electric device is one of a radio and an air conditioner. 12. The sensor system of claim 1, wherein the sensor further includes a solar unit structured to harvest solar power and to provide the harvested solar power to the battery. 13. The sensor system of claim 1, wherein the wireless transmitter is structured to use a wireless communication protocol selected from Bluetooth, Wi-Fi, and Z-Wave. 14. The sensor system of claim 1, wherein the wireless signal is configured to carry information. 15. The sensor system of claim 14, wherein the wireless signal is configured to carry temperature information.
2,600
9,807
9,807
15,756,605
2,667
The invention relates to a CT image generation apparatus (10) for generating an image of a head. Transformations of a first CT image of the head are determined for different measured projection groups, wherein for a measured projection group a transformation is determined such that a degree of similarity between the measured projection group and a calculated projection group is increased, wherein the calculated projection group is calculated by transforming the first CT image in accordance with the transformation to be determined and by forward projecting the transformed first CT image. A motion corrected three-dimensional second CT image is reconstructed based on the measured projections and the transformations determined for the different measured projection groups. This allows providing a high quality CT image of the head, even if a patient cannot stop moving the head in case of, for instance, stroke.
1. A computed tomography image generation apparatus for generating an image of a human head, wherein the computed tomography image generation apparatus comprises: a projections providing unit for providing measured two-dimensional projections of the head, wherein the measured projections have been measured at different times while a radiation source, which emits radiation for traversing the head, has been moved around the head and wherein the measured projections have been generated based on the radiation after having traversed the head, a reconstruction unit for reconstructing a three-dimensional first computed tomography image of the head based on the provided measured projections, a transformation determination unit for determining three-dimensional transformations of the first computed tomography image of the head for different measured projection groups, wherein a measured projection group comprises one or several measured projections, wherein the transformation determination unit is adapted to determine for a certain measured projection group a transformation such that a degree of similarity between the certain measured projection group and a calculated projection group is increased, wherein the calculated projection group corresponds to the certain measured projection group and is calculated by transforming the first computed tomography image in accordance with the transformation to be determined and by forward projecting the transformed first computed tomography image, wherein the reconstruction unit is adapted to reconstruct a motion corrected three-dimensional second computed tomography image based on the measured projections and the transformations determined for the different measured projection groups, wherein the computed tomography image generation apparatus comprises an examination zone in which the human head is arrangeable, wherein the reconstruction unit is adapted to reconstruct the first computed tomography image and the second computed 
tomography image such that they show the examination zone and, for generating the second computed tomography image, to perform the motion correction for a first part of the examination zone and not to perform the motion correction for a second part of the examination zone, wherein the first part of the examination zone includes the human head. 2. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to use a gradient angle difference for determining the degree of similarity. 3. The computed tomography image generation apparatus as defined in claim 1, wherein the reconstruction unit is adapted to reduce motion artifacts in the first computed tomography image by using overscan weighting. 4. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to determine a transformation for a measured projection group iteratively. 5. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted such that each measured projection group comprises a single measured projection. 6. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted such that each measured projection group comprises several measured projections. 7. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to filter the determined transformations. 8. 
The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to determine transformations based on the degree of similarity and the forward projection not for all times, at which projections have been measured, and to determine transformations for times, at which transformations have not been determined based on the degree of similarity and the forward projection, by interpolation. 9. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to determine an outlier in the determined transformations and to remove the determined outlier from the determined transformations. 10. The computed tomography image generation apparatus as defined in claim 1, wherein: the reconstruction unit is adapted to divide the measured projections into several sets of measured projections, wherein each set includes temporally adjacent measured projections, and to reconstruct several three-dimensional first computed tomography images of the head based on the several sets of measured projections, wherein a respective first computed tomography image is reconstructed based on a respective set of measured projections, the transformation determination unit is adapted to determine for each first computed tomography image a set of three-dimensional transformations of the respective first computed tomography image of the head for different measured projection groups, wherein a measured projection group comprises one or several measured projections of the respective set of measured projections, wherein the transformation determination unit is adapted to determine for a certain measured projection group of the respective set of measured projections a transformation such that a degree of similarity between the certain measured projection group and a calculated projection group is increased, wherein the calculated projection group corresponds to the certain 
measured projection group and is calculated by transforming the respective first computed tomography image in accordance with the transformation to be determined and by forward projecting the transformed respective first computed tomography image, the reconstruction unit is adapted to reconstruct a motion corrected three-dimensional second computed tomography image based on the measured projections and the several sets of transformations determined for the different measured projection groups of the several sets of measured projections. 11. The computed tomography image generation apparatus as defined in claim 10, wherein the transformation determination unit is adapted to determine a further transformation transforming a transformed first computed tomography image, which has been reconstructed based on a set of measured projections and which has then been transformed, such that it corresponds to another transformed first computed tomography image, which has been reconstructed based on another set of measured projections and which has then been transformed, wherein the reconstruction unit is adapted to reconstruct the motion corrected three-dimensional second computed tomography image also based on the further transformation. 12. The computed tomography image generation apparatus as defined in claim 10, wherein the reconstruction unit is adapted to divide the measured projections into several sets of measured projections such that at least one set of measured projections does not fulfill the completeness criterion for computed tomography reconstruction. 13. 
The computed tomography image generation apparatus as defined in claim 1, wherein the reconstruction unit and the transformation determination unit are adapted such that, after the second computed tomography image has been reconstructed, in an iteration step the transformations are determined again based on the second computed tomography image, which then has the function of the first computed tomography image, and the second computed tomography image is again reconstructed based on the newly determined transformations and the measured projections. 14. A computed tomography image generation method for generating an image of a human head, wherein the computed tomography image generation method comprises: providing measured two-dimensional projections of the head by a projections providing unit, wherein the measured projections have been measured at different times while a radiation source, which emits radiation for traversing the head, has been moved around the head and wherein the measured projections have been generated based on the radiation after having traversed the head, reconstructing a three-dimensional first computed tomography image of the head based on the provided measured projections by a reconstruction unit, determining three-dimensional transformations of the first computed tomography image of the head for different measured projection groups by a transformation determination unit, wherein a measured projection group comprises one or several measured projections, wherein the transformation determination unit determines for a certain measured projection group a transformation such that a degree of similarity between the certain measured projection group and a calculated projection group is increased, wherein the calculated projection group corresponds to the certain measured projection group and is calculated by transforming the first computed tomography image in accordance with the transformation to be determined and by forward projecting the 
transformed first computed tomography image, wherein the reconstruction unit reconstructs a motion corrected three-dimensional second computed tomography image based on the measured projections and the transformations determined for the different measured projection groups, wherein the computed tomography image generation apparatus comprises an examination zone in which the human head is arrangeable, wherein the reconstruction unit reconstructs the first computed tomography image and the second computed tomography image such that they show the examination zone and, for generating the second computed tomography image, performs the motion correction for a first part of the examination zone but not for a second part of the examination zone, wherein the first part of the examination zone includes the human head. 15. A computer program for controlling a computed tomography image generation apparatus, the computer program comprising program code means for causing the computed tomography image generation apparatus to carry out the steps of the computed tomography image generation method as defined in claim 14, when the computer program is run on a computer controlling the computed tomography image generation apparatus.
The invention relates to a CT image generation apparatus ( 10 ) for generating an image of a head. Transformations of a first CT image of the head are determined for different measured projection groups, wherein for a measured projection group a transformation is determined such that a degree of similarity between the measured projection group and a calculated projection group is increased, wherein the calculated projection group is calculated by transforming the first CT image in accordance with the transformation to be determined and by forward projecting the transformed first CT image. A motion corrected three-dimensional second CT image is reconstructed based on the measured projections and the transformations determined for the different measured projection groups. This allows providing a high quality CT image of the head, even if a patient cannot stop moving the head in case of, for instance, stroke.1. A computed tomography image generation apparatus for generating an image of a human head, wherein the computed tomography image generation apparatus comprises: a projections providing unit for providing measured two-dimensional projections of the head, wherein the measured projections have been measured at different times while a radiation source, which emits radiation for traversing the head, has been moved around the head and wherein the measured projections have been generated based on the radiation after having traversed the head, a reconstruction unit for reconstructing a three-dimensional first computed tomography image of the head based on the provided measured projections, a transformation determination unit for determining three-dimensional transformations of the first computed tomography image of the head for different measured projection groups, wherein a measured projection group comprises one or several measured projections, wherein the transformation determination unit is adapted to determine for a certain measured projection group a 
transformation such that a degree of similarity between the certain measured projection group and a calculated projection group is increased, wherein the calculated projection group corresponds to the certain measured projection group and is calculated by transforming the first computed tomography image in accordance with the transformation to be determined and by forward projecting the transformed first computed tomography image, wherein the reconstruction unit is adapted to reconstruct a motion corrected three-dimensional second computed tomography image based on the measured projections and the transformations determined for the different measured projection groups, wherein the computed tomography image generation apparatus comprises an examination zone in which the human head is arrangeable, wherein the reconstruction unit is adapted to reconstruct the first computed tomography image and the second computed tomography image such that they show the examination zone and to, for generating the second computed tomography image, perform the motion correction for a first part of the examination zone but not for a second part of the examination zone, wherein the first part of the examination zone includes the human head. 2. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to use a gradient angle difference for determining the degree of similarity. 3. The computed tomography image generation apparatus as defined in claim 1, wherein the reconstruction unit is adapted to reduce motion artifacts in the first computed tomography image by using overscan weighting. 4. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to determine a transformation for a measured projection group iteratively. 5. 
The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted such that each measured projection group comprises a single measured projection. 6. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted such that each measured projection group comprises several measured projections. 7. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to filter the determined transformations. 8. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to determine transformations based on the degree of similarity and the forward projection not for all times, at which projections have been measured, and to determine transformations for times, at which transformations have not been determined based on the degree of similarity and the forward projection, by interpolation. 9. The computed tomography image generation apparatus as defined in claim 1, wherein the transformation determination unit is adapted to determine an outlier in the determined transformations and to remove the determined outlier from the determined transformations. 10. 
The computed tomography image generation apparatus as defined in claim 1, wherein: the reconstruction unit is adapted to divide the measured projections into several sets of measured projections, wherein each set includes temporally adjacent measured projections, and to reconstruct several three-dimensional first computed tomography images of the head based on the several sets of measured projections, wherein a respective first computed tomography image is reconstructed based on a respective set of measured projections, the transformation determination unit is adapted to determine for each first computed tomography image a set of three-dimensional transformations of the respective first computed tomography image of the head for different measured projection groups, wherein a measured projection group comprises one or several measured projections of the respective set of measured projections, wherein the transformation determination unit is adapted to determine for a certain measured projection group of the respective set of measured projections a transformation such that a degree of similarity between the certain measured projection group and a calculated projection group is increased, wherein the calculated projection group corresponds to the certain measured projection group and is calculated by transforming the respective first computed tomography image in accordance with the transformation to be determined and by forward projecting the transformed respective first computed tomography image, the reconstruction unit is adapted to reconstruct a motion corrected three-dimensional second computed tomography image based on the measured projections and the several sets of transformations determined for the different measured projection groups of the several sets of measured projections. 11. 
The computed tomography image generation apparatus as defined in claim 10, wherein the transformation determination unit is adapted to determine a further transformation transforming a transformed first computed tomography image, which has been reconstructed based on a set of measured projections and which has then been transformed, such that it corresponds to another transformed first computed tomography image, which has been reconstructed based on another set of measured projections and which has then been transformed, wherein the reconstruction unit is adapted to reconstruct the motion corrected three-dimensional second computed tomography image also based on the further transformation. 12. The computed tomography image generation apparatus as defined in claim 10, wherein the reconstruction unit is adapted to divide the measured projections into several sets of measured projections such that at least one set of measured projections does not fulfill the completeness criterion for computed tomography reconstruction. 13. The computed tomography image generation apparatus as defined in claim 1, wherein the reconstruction unit and the transformation determination unit are adapted such that, after the second computed tomography image has been reconstructed, in an iteration step the transformations are determined again based on the second computed tomography image, which then has the function of the first computed tomography image, and the second computed tomography image is again reconstructed based on the newly determined transformations and the measured projections. 14. 
A computed tomography image generation method for generating an image of a human head, wherein the computed tomography image generation method comprises: providing measured two-dimensional projections of the head by a projections providing unit, wherein the measured projections have been measured at different times while a radiation source, which emits radiation for traversing the head, has been moved around the head and wherein the measured projections have been generated based on the radiation after having traversed the head, reconstructing a three-dimensional first computed tomography image of the head based on the provided measured projections by a reconstruction unit, determining three-dimensional transformations of the first computed tomography image of the head for different measured projection groups by a transformation determination unit, wherein a measured projection group comprises one or several measured projections, wherein the transformation determination unit determines for a certain measured projection group a transformation such that a degree of similarity between the certain measured projection group and a calculated projection group is increased, wherein the calculated projection group corresponds to the certain measured projection group and is calculated by transforming the first computed tomography image in accordance with the transformation to be determined and by forward projecting the transformed first computed tomography image, wherein the reconstruction unit reconstructs a motion corrected three-dimensional second computed tomography image based on the measured projections and the transformations determined for the different measured projection groups, wherein the computed tomography image generation apparatus comprises an examination zone in which the human head is arrangeable, wherein the reconstruction unit reconstructs the first computed tomography image and the second computed tomography image such that they show the examination zone 
and, for generating the second computed tomography image, performs the motion correction for a first part of the examination zone but not for a second part of the examination zone, wherein the first part of the examination zone includes the human head. 15. A computer program for controlling a computed tomography image generation apparatus, the computer program comprising program code means for causing the computed tomography image generation apparatus to carry out the steps of the computed tomography image generation method as defined in claim 14, when the computer program is run on a computer controlling the computed tomography image generation apparatus.
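The motion-estimation step described in the CT claims above (determine, per measured projection group, the transformation that increases the similarity between the measured projection and a forward projection of the transformed first image) can be sketched compactly. This is a minimal illustration rather than the patented method: it assumes a single parallel-beam projection angle, restricts the "transformation" to integer in-plane shifts, and uses negative sum-of-squared-differences as the similarity measure; all names are hypothetical.

```python
import numpy as np

def forward_project(img):
    # parallel-beam forward projection at a fixed angle: sum along rows
    return img.sum(axis=0)

def estimate_shift(measured, ref_img, search=5):
    # determine the transformation (here: an integer column shift) that
    # increases the similarity between the measured projection and the
    # forward projection of the transformed reference image
    best_shift, best_score = 0, -np.inf
    for s in range(-search, search + 1):
        calc = forward_project(np.roll(ref_img, s, axis=1))
        score = -np.sum((measured - calc) ** 2)  # similarity = -SSD
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# "first computed tomography image": a bright block on a dark background
first_img = np.zeros((16, 16))
first_img[4:12, 6:10] = 1.0

# a measured projection acquired after the head moved by 3 columns
measured = forward_project(np.roll(first_img, 3, axis=1))
assert estimate_shift(measured, first_img) == 3
```

A full implementation would repeat this per projection group with 3-D rigid transformations and feed the recovered motion into a motion-compensated reconstruction of the second image, as the claims describe.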
2,600
9,808
9,808
15,158,451
2,626
A display device with an integrated touch screen including a display panel including electrodes divided into a plurality of block type groups and a plurality of data lines; a display driver IC configured to apply a common voltage to the electrodes when a driving mode of the panel is a display driving mode, sequentially apply a touch scan signal to each block type group when the driving mode of the panel is a touch driving mode, and apply a data signal to the data lines associated with a corresponding block type group when the touch scan signal is applied to the corresponding block type group; and a touch IC configured to generate the touch scan signal and apply the touch scan signal to the display driver IC.
1. A driver circuit for driving a display panel with an integrated touch screen, the driver circuit configured to: apply a common voltage to a touch electrode for driving the display panel during a display driving mode, apply a touch scan signal to the touch electrode being sensed for determining a touched position during a touch driving mode; and apply a signal having a same phase as the touch scan signal to a display driving electrode corresponding to the touch electrode to which the touch scan signal is applied, the signal to prevent reduction in touch sensitivity. 2. The driver circuit of claim 1, wherein a display signal is applied to the display driving electrode to drive the display panel. 3. The driver circuit of claim 1, wherein the display driving electrode is a data electrode or a gate electrode. 4. The driver circuit of claim 1, wherein the signal having the same phase as the touch scan signal is the touch scan signal itself. 5. The driver circuit of claim 1, wherein the signal having the same phase as the touch scan signal is applied to only the display driving electrode corresponding to the touch electrode to which the touch scan signal is applied. 6. The driver circuit of claim 1, wherein a change in a capacitance of the touch electrode is sensed for determining the touched position. 7. The driver circuit of claim 1, wherein the touch scan signal is applied to the touch electrode through a connection line and a change in the capacitance of the touch electrode is sensed through the connection line. 8. The driver circuit of claim 1, further comprising a touch driver applying the touch scan signal to the touch electrode. 9. The driver circuit of claim 8, further comprising a display driver applying the signal to the display driving electrode. 10. The driver circuit of claim 9, further comprising a touch IC applying the touch scan signal to the touch driver and the touch scan signal is transferred to the display driver. 11. 
A driver circuit for driving a display panel with an integrated touch screen, the driver circuit comprising: a first circuit for applying a common voltage to a touch electrode for driving the display panel during a display driving mode, and applying a touch scan signal to the touch electrode being sensed for determining a touched position during a touch driving mode; and a second circuit for applying a signal having a same phase as the touch scan signal to a display driving electrode corresponding to the touch electrode to which the touch scan signal is applied, the signal to prevent reduction in touch sensitivity. 12. The driver circuit of claim 11, wherein a display signal is applied to the display driving electrode to drive the display panel. 13. The driver circuit of claim 11, wherein the display driving electrode is a data electrode or a gate electrode. 14. The driver circuit of claim 11, wherein the signal having the same phase as the touch scan signal is the touch scan signal itself. 15. The driver circuit of claim 11, wherein the signal having the same phase as the touch scan signal is applied to only the display driving electrode corresponding to the touch electrode to which the touch scan signal is applied. 16. The driver circuit of claim 11, wherein a change in a capacitance of the touch electrode is sensed for determining the touched position. 17. The driver circuit of claim 11, wherein the touch scan signal is applied to the touch electrode through a connection line and a change in a capacitance of the touch electrode is sensed through the connection line. 18. The driver circuit of claim 11, wherein the first circuit is a touch driver. 19. The driver circuit of claim 18, wherein the second circuit is a display driver. 20. The driver circuit of claim 19, further comprising a touch IC applying the touch scan signal to the touch driver and the touch scan signal is transferred to the display driver. 21. 
The driver circuit of claim 11, wherein the driver circuit is a display driver IC.
A display device with an integrated touch screen including a display panel including electrodes divided into a plurality of block type groups and a plurality of data lines; a display driver IC configured to apply a common voltage to the electrodes when a driving mode of the panel is a display driving mode, sequentially apply a touch scan signal to each block type group when the driving mode of the panel is a touch driving mode, and apply a data signal to the data lines associated with a corresponding block type group when the touch scan signal is applied to the corresponding block type group; and a touch IC configured to generate the touch scan signal and apply the touch scan signal to the display driver IC.1. A driver circuit for driving a display panel with an integrated touch screen, the driver circuit configured to: apply a common voltage to a touch electrode for driving the display panel during a display driving mode, apply a touch scan signal to the touch electrode being sensed for determining a touched position during a touch driving mode; and apply a signal having a same phase as the touch scan signal to a display driving electrode corresponding to the touch electrode to which the touch scan signal is applied, the signal to prevent reduction in touch sensitivity. 2. The driver circuit of claim 1, wherein a display signal is applied to the display driving electrode to drive the display panel. 3. The driver circuit of claim 1, wherein the display driving electrode is a data electrode or a gate electrode. 4. The driver circuit of claim 1, wherein the signal having the same phase as the touch scan signal is the touch scan signal itself. 5. The driver circuit of claim 1, wherein the signal having the same phase as the touch scan signal is applied to only the display driving electrode corresponding to the touch electrode to which the touch scan signal is applied. 6. 
The driver circuit of claim 1, wherein a change in a capacitance of the touch electrode is sensed for determining the touched position. 7. The driver circuit of claim 1, wherein the touch scan signal is applied to the touch electrode through a connection line and a change in the capacitance of the touch electrode is sensed through the connection line. 8. The driver circuit of claim 1, further comprising a touch driver applying the touch scan signal to the touch electrode. 9. The driver circuit of claim 8, further comprising a display driver applying the signal to the display driving electrode. 10. The driver circuit of claim 9, further comprising a touch IC applying the touch scan signal to the touch driver and the touch scan signal is transferred to the display driver. 11. A driver circuit for driving a display panel with an integrated touch screen, the driver circuit comprising: a first circuit for applying a common voltage to a touch electrode for driving the display panel during a display driving mode, and applying a touch scan signal to the touch electrode being sensed for determining a touched position during a touch driving mode; and a second circuit for applying a signal having a same phase as the touch scan signal to a display driving electrode corresponding to the touch electrode to which the touch scan signal is applied, the signal to prevent reduction in touch sensitivity. 12. The driver circuit of claim 11, wherein a display signal is applied to the display driving electrode to drive the display panel. 13. The driver circuit of claim 11, wherein the display driving electrode is a data electrode or a gate electrode. 14. The driver circuit of claim 11, wherein the signal having the same phase as the touch scan signal is the touch scan signal itself. 15. 
The driver circuit of claim 11, wherein the signal having the same phase as the touch scan signal is applied to only the display driving electrode corresponding to the touch electrode to which the touch scan signal is applied. 16. The driver circuit of claim 11, wherein a change in a capacitance of the touch electrode is sensed for determining the touched position. 17. The driver circuit of claim 11, wherein the touch scan signal is applied to the touch electrode through a connection line and a change in a capacitance of the touch electrode is sensed through the connection line. 18. The driver circuit of claim 11, wherein the first circuit is a touch driver. 19. The driver circuit of claim 18, wherein the second circuit is a display driver. 20. The driver circuit of claim 19, further comprising a touch IC applying the touch scan signal to the touch driver and the touch scan signal is transferred to the display driver. 21. The driver circuit of claim 11, wherein the driver circuit is a display driver IC.
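The "same-phase" driving in the touch-screen claims above can be illustrated with a toy charge model: a scan pulse on a touch electrode must also charge the parasitic capacitance to neighboring display electrodes, and driving those electrodes with a same-phase signal keeps the voltage difference, and hence the parasitic load, at zero. The component values and names below are assumptions for illustration, not figures from the patent.

```python
def charge_drawn(c_parasitic, v_touch, v_neighbor):
    # charge the touch driver must supply across the parasitic
    # capacitance to a neighboring display electrode: Q = C * dV
    return c_parasitic * (v_touch - v_neighbor)

C_P = 2e-12   # assumed parasitic capacitance, farads
V_SCAN = 3.3  # assumed touch scan pulse amplitude, volts

# display electrode held at a fixed voltage: the parasitic load is charged
load_static = charge_drawn(C_P, V_SCAN, 0.0)

# display electrode driven in phase with the scan pulse: no voltage
# difference develops, so no charge flows into the parasitic capacitance
load_same_phase = charge_drawn(C_P, V_SCAN, V_SCAN)

assert load_same_phase == 0.0
assert load_static > 0.0
```

Removing this parasitic load is what the claims describe as preventing "reduction in touch sensitivity": the small capacitance change caused by a finger is no longer swamped by charging adjacent display electrodes.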
2,600
9,809
9,809
15,380,747
2,699
Multi-touch touch-sensing devices and methods are described herein. The touch sensing devices can include multiple sense points, each located at a crossing of a drive line and a sense line. In some embodiments, multiple drive lines may be simultaneously or nearly simultaneously stimulated with drive signals having unique characteristics, such as phase or frequency. A sense signal can occur on each sense line that can be related to the drive signals by an amount of touch present at sense points corresponding to the stimulated drive lines and the sense line. By using processing techniques based on the unique drive signals, an amount of touch corresponding to each sense point can be extracted from the sense signal. The touch sensing methods and devices can be incorporated into interfaces for a variety of electronic devices such as a desktop, tablet, notebook, and handheld computers, personal digital assistants, media players, and mobile telephones.
1. A touch sensing device comprising: drive circuitry configured to: apply a first drive signal to a first drive line of a plurality of drive lines during a first time period; and apply a second drive signal to a second drive line of the plurality of drive lines during a second time period, the second time period at least partially overlapping the first time period during an overlapping time period, wherein the first and second drive signals have the same frequency and are in phase during a first portion of the overlapping time period and out of phase during a second portion of the overlapping time period; and sense circuitry configured to: detect a sense signal from at least one sense line, the sense signal being related to the first and second drive signals by touch or proximity of one or more objects to one or more sensing points associated with the at least one sense line, a sensing point being associated with at least one of the plurality of drive lines and at least one of a plurality of sense lines; and derive touch information for the one or more sensing points from the sense signal. 2. The touch sensing device of claim 1, wherein the sense circuitry comprises at least one microcontroller. 3. The touch sensing device of claim 2, wherein the at least one microcontroller is an application specific integrated circuit (ASIC). 4. The touch sensing device of claim 2, wherein the at least one microcontroller is a digital signal processor (DSP). 5. The touch sensing device of claim 1, wherein the sense circuitry derives touch information from the sense signal by deriving a plurality of values from the sense signal and deriving touch information from a mathematical combination of the plurality of values. 6. The touch sensing device of claim 5, wherein the sense circuitry derives a plurality of values from the sense signal by integrating the sense signal over time. 7. 
The touch sensing device of claim 5, wherein the plurality of values correspond to a capacitance measurement at a plurality of sensing points. 8. The touch sensing device of claim 1, the drive circuitry further configured to: apply a third drive signal to a third drive line of the plurality of drive lines during a third time period; and apply a fourth drive signal to a fourth drive line of the plurality of drive lines during a fourth time period, the first, second, third and fourth time periods at least partially overlapping during the overlapping time period; wherein the first, second, third and fourth drive signals have the same frequency and two of the first, second, third and fourth drive signals are out of phase with another two of the first, second, third and fourth drive signals during a third portion of the overlapping time period. 9. The touch sensing device of claim 1, wherein a phase relationship between the first and second drive lines during the overlapping time period is selected to eliminate a DC component of the sense signal. 10. The touch sensing device of claim 1, wherein the plurality of drive lines and the plurality of sense lines are part of a touch screen. 11. 
A method of stimulating a touch sensitive surface, the touch sensitive surface comprising a plurality of sensing points, a sensing point being associated with at least one of a plurality of drive lines and at least one of a plurality of sense lines, the method comprising: stimulating a first drive line of the plurality of drive lines with a first drive signal during a first time period; and stimulating a second drive line of the plurality of drive lines with a second drive signal during a second time period, the second time period at least partially overlapping the first time period during an overlapping time period; wherein the first and second drive signals have the same frequency and are in phase during a first portion of the overlapping time period and out of phase during a second portion of the overlapping time period. 12. The method of claim 11, the method further comprising: sensing a sense signal on at least one sense line, wherein the sense signal is related to the first and second drive signals by touch or proximity of one or more objects to one or more sensing points associated with at least one of the first and second drive lines and the at least one sense line; and deriving touch information from the sense signal; wherein deriving touch information from the sense signal comprises: deriving a plurality of values from the sense signal; and deriving touch information from a mathematical combination of the plurality of values. 13. The method of claim 12, wherein deriving touch information from the sense signal comprises integrating the sense signal over time. 14. The method of claim 12, wherein the plurality of values from the sense signal include a capacitance measurement. 15. 
The method of claim 11, further comprising: stimulating a third drive line of the plurality of drive lines with a third drive signal during a third time period; and stimulating a fourth drive line of the plurality of drive lines with a fourth drive signal during a fourth time period, the first, second, third and fourth time periods at least partially overlapping during the overlapping time period; wherein the first, second, third and fourth drive signals have the same frequency and two of the first, second, third and fourth drive signals are out of phase with another two of the first, second, third and fourth drive signals during a third portion of the overlapping time period. 16. The method of claim 15, wherein the first, second, third and fourth drive signals are in phase during a fourth portion of the overlapping time period. 17. The method of claim 11, wherein a predetermined phase relationship of the first and second drive signals is selected to eliminate a DC component of a sense signal.
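The simultaneous-stimulation scheme in the multi-touch claims above (drive lines in phase during one portion of the overlapping period and out of phase during another, then extract per-sense-point touch from the combined sense signal) behaves like code-division multiplexing with orthogonal ±1 phase codes. The sketch below is an assumed simplification with two drive lines and noiseless, linear coupling; it is not the device's actual signal chain.

```python
import numpy as np

# ±1 phase codes over two time portions: line 0 and line 1 are in phase
# during the first portion and out of phase during the second
codes = np.array([[1.0,  1.0],
                  [1.0, -1.0]])

# touch-dependent coupling at the two sensing points on one sense line
coupling = np.array([0.8, 0.2])

# sense signal per time portion: superposition of the coupled drive signals
sense = codes.T @ coupling

# decode: correlate the sense signal with each line's code; the codes are
# orthogonal, so each sensing point's coupling separates cleanly
recovered = codes @ sense / 2.0
assert np.allclose(recovered, coupling)
```

With N drive lines this generalizes to N orthogonal codes (e.g. rows of a Hadamard matrix), which is what lets all lines be stimulated at once instead of one per time slot.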
2,600
9,810
9,810
15,694,826
2,667
Methods and systems for detecting keypoints in image data may include an image sensor interface receiving pixel data from an image sensor. A front-end pixel data processing circuit may receive pixel data and convert the pixel data to a different color space format. A back-end pixel data processing circuit may perform one or more operations on the pixel data. An output circuit may receive pixel data and output the pixel data to a system memory. A keypoint detection circuit may receive pixel data from the image sensor interface in the image sensor pixel data format or receive pixel data after processing by the front-end or the back-end pixel data processing circuits. The keypoint detection circuit may perform a keypoint detection operation on the pixel data to detect one or more keypoints in the image frame and output to the system memory a description of the one or more keypoints.
1.-20. (canceled) 21. An image signal processor, comprising: a keypoint control parameter storage structure configured to store a plurality of keypoint sensitivity threshold values corresponding to a first set of respective image frame regions; a keypoint detection circuit connected to the keypoint control parameter storage structure and configured to: receive pixel data for an image frame; perform a keypoint detection operation on the received pixel data to detect one or more keypoints in the image frame; and selectively output to a system memory a description of the one or more keypoints detected in the first set of respective image frame regions of the image frame in response to respective magnitude values of the one or more keypoints exceeding one of the plurality of keypoint sensitivity threshold values corresponding to the first set of respective image frame regions. 22. The image signal processor of claim 21, wherein: the keypoint detection circuit is configured to output a respective count of keypoints detected in each region of the first set of respective image frame regions of the image frame; and the respective count of keypoints detected in each region of the first set of respective image frame regions of the image frame is usable by program instructions to dynamically adjust one of the plurality of keypoint sensitivity threshold values. 23. 
The image signal processor of claim 21, wherein the keypoint control parameter storage structure is configured to store: a programmable maximum limit of allowable keypoints for each of a second set of respective image frame regions usable by the keypoint detection circuit to output the description of a total number of keypoints for each of the second set of respective image frame regions of the image frame, wherein the total number of keypoints for each of the second set of respective image frame regions does not exceed the programmable maximum limit of allowable keypoints; and a programmable size of the second set of respective image frame regions, wherein the second set of respective image frame regions corresponding to the programmable maximum limit of allowable keypoints are smaller image frame regions than the first set of respective image frame regions corresponding to the plurality of adjustable keypoint sensitivity threshold values. 24. The image signal processor of claim 23, wherein to selectively output to the system memory the description of the one or more keypoints detected in a particular region of the second set of respective image frame regions of the image frame, the keypoint detection circuit is further configured to selectively output the description of the one or more keypoints with the highest respective magnitude values of the one or more keypoints detected in the particular region. 25. 
The image signal processor of claim 21, further comprising one or more image processing stages in addition to the keypoint detection circuit, and wherein the image signal processor is configured to operate in a low power mode in which: the one or more image processing stages enter an inactive state; the keypoint detection circuit remains in an active state configured to continue to output the description of the one or more keypoints to the system memory while the one or more image processing stages enter the inactive state; and the one or more image processing stages enter an active state in response to the keypoint detection circuit detecting one or more keypoints. 26. The image signal processor of claim 21, wherein the pixel data for the image frame comprises luminance channel image data. 27. The image signal processor of claim 21, further comprising a pre-processing module configured to: receive full-color image data encoded in a first color space format for the image frame; convert the full-color image data to the pixel data for the image frame, wherein the pixel data for the image frame is encoded in a format other than the first color space format; and output the pixel data to the keypoint detection circuit. 28. 
A method for an image signal processor, comprising: storing, in a keypoint control parameter storage structure connected to a keypoint detection circuit, a plurality of keypoint sensitivity threshold values corresponding to a first set of respective image frame regions; receiving, by the keypoint detection circuit, pixel data for an image frame; performing, by the keypoint detection circuit, a keypoint detection operation on the received pixel data to detect one or more keypoints in the image frame; and selectively outputting, by the keypoint detection circuit, to a system memory a description of the one or more keypoints detected in the first set of respective image frame regions in response to respective magnitude values of the one or more keypoints exceeding one of the plurality of keypoint sensitivity threshold values corresponding to the first set of respective image frame regions. 29. The method of claim 28, further comprising outputting, by the keypoint detection circuit, a count of keypoints detected in the first set of respective regions of the image frame, wherein the count of keypoints detected in the first set of respective image frame regions of the image frame is usable by program instructions to dynamically adjust one of the plurality of keypoint sensitivity threshold values. 30. 
The method of claim 28, further comprising: storing, by the keypoint control parameter storage structure, a programmable maximum limit of allowable keypoints for each of a second set of respective regions of the image frame usable by the keypoint detection circuit to output the description of a total number of keypoints for each of the second set of respective regions of the image frame, wherein the total number of keypoints for each of the second set of respective regions does not exceed the programmable maximum limit of allowable keypoints; and storing, by the keypoint control parameter storage structure, a programmable size of the second set of respective regions of the image frame, wherein the second set of respective regions corresponding to the programmable maximum limit of allowable keypoints are smaller regions of the image frame than the first set of respective image frame regions corresponding to the plurality of adjustable keypoint sensitivity threshold values. 31. The method of claim 30, wherein selectively outputting to the system memory the description of the one or more keypoints detected in a particular region of the second set of respective image frame regions of the image frame comprises selectively outputting the description of the one or more keypoints with the highest respective magnitude values of the one or more keypoints detected in the particular region. 32. The method of claim 28, further comprising: placing one or more image processing stages in an inactive state in response to the keypoint detection circuit not detecting one or more keypoints for a pre-defined time period, wherein the keypoint detection circuit remains in an active state; and placing the one or more image processing stages in an active state in response to the keypoint detection circuit detecting one or more keypoints. 33. The method of claim 28, wherein the pixel data for the image frame comprises luminance channel image data. 34. 
The method of claim 28, further comprising receiving, by a pre-processing module, full-color image data encoded in a first color space format for the image frame; converting the full-color image data to the pixel data for the image frame, wherein the pixel data for the image frame is encoded in a format other than the first color space format; and outputting the pixel data to the keypoint detection circuit. 35. A device, comprising: a central processing unit; a system memory connected to the central processing unit; and an image signal processor connected to the central processing unit, wherein the image signal processor comprises: a keypoint control parameter storage structure configured to store a plurality of keypoint sensitivity threshold values corresponding to a first set of respective image frame regions; a keypoint detection circuit connected to the keypoint control parameter storage structure and configured to: receive pixel data for an image frame; perform a keypoint detection operation on the received pixel data to detect one or more keypoints in the image frame; and selectively output to the system memory a description of the one or more keypoints detected in the first set of respective image frame regions of the image frame in response to respective magnitude values of the one or more keypoints exceeding one of the plurality of keypoint sensitivity threshold values corresponding to the first set of respective image frame regions. 36. The device of claim 35, wherein: the keypoint detection circuit outputs a count of keypoints detected in the first set of respective image frame regions; and the count of keypoints detected in the first set of respective image frame regions is usable by program instructions to dynamically adjust one of the plurality of keypoint sensitivity threshold values. 37. 
The device of claim 35, wherein the keypoint control parameter storage structure is further configured to store: a programmable maximum limit of allowable keypoints for each of a second set of respective regions of the image frame usable by the keypoint detection circuit to output the description of a total number of keypoints for each of the second set of respective regions of the image frame, wherein the total number of keypoints for each of the second set of respective regions does not exceed the programmable maximum limit of allowable keypoints; and a programmable size of the second set of respective regions of the image frame, wherein the second set of respective regions corresponding to the programmable maximum limit of allowable keypoints are smaller regions of the image frame than the first set of respective image frame regions corresponding to the plurality of adjustable keypoint sensitivity threshold values. 38. The device of claim 35, further comprising one or more image processing stages other than the keypoint detection circuit, wherein: the image signal processor is configured to operate in a low power mode; the one or more image processing stages enter an inactive state in response to the keypoint detection circuit not detecting one or more keypoints for a pre-defined time period; the keypoint detection circuit remains in an active state configured to continue to output the description of the one or more keypoints to the system memory while the one or more image processing stages enter the inactive state; and the one or more image processing stages enter an active state in response to the keypoint detection circuit detecting one or more keypoints. 39. The device of claim 35, wherein the pixel data for the image frame comprises luminance channel image data. 40. 
The device of claim 35, wherein the image signal processor further comprises a pre-processing module configured to: receive full-color image data encoded in a first color space format for the image frame; convert the full-color image data to the pixel data for the image frame, wherein the pixel data for the image frame is encoded in a format other than the first color space format; and output the pixel data to the keypoint detection circuit.
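The region-based keypoint filtering described in claims 21, 23, and 24 (per-region sensitivity thresholds, a per-region maximum count, and preference for the highest-magnitude keypoints) can be sketched in a few lines. The region geometry, field layout, and function name below are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict

def filter_keypoints(keypoints, thresholds, region_size, max_per_region):
    """keypoints: list of (x, y, magnitude); thresholds: dict region -> min magnitude."""
    by_region = defaultdict(list)
    for x, y, mag in keypoints:
        region = (x // region_size, y // region_size)
        # Output only keypoints whose magnitude exceeds the region's
        # sensitivity threshold (claim 21).
        if mag > thresholds.get(region, 0.0):
            by_region[region].append((x, y, mag))
    out = {}
    for region, kps in by_region.items():
        # Keep at most max_per_region keypoints, preferring the highest
        # magnitudes (claims 23-24).
        kps.sort(key=lambda kp: kp[2], reverse=True)
        out[region] = kps[:max_per_region]
    return out

kps = [(3, 4, 0.9), (5, 2, 0.4), (20, 21, 0.7), (22, 23, 0.95), (25, 20, 0.8)]
result = filter_keypoints(kps, {(0, 0): 0.5, (1, 1): 0.6},
                          region_size=16, max_per_region=2)
print(result[(0, 0)])  # → [(3, 4, 0.9)]
print(result[(1, 1)])  # → [(22, 23, 0.95), (25, 20, 0.8)]
```

Keeping both knobs programmable, as the claims require, lets firmware trade keypoint density against memory bandwidth per region rather than globally.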
2,600
9,811
9,811
14,584,008
2,626
A display device with integrated touch screen is provided. The display device includes a panel configured to include a plurality of electrodes and to be division-driven in a display driving mode and a touch driving mode during one frame period, a display driver IC configured to apply a common voltage to the plurality of electrodes during the display driving mode, and an ROIC configured to apply, to the plurality of electrodes, a touch scan signal for sensing a touch when the touch driving mode is a first touch driving mode, and apply, to the plurality of electrodes, a touch scan signal for detecting a touch input position when the touch driving mode is a second touch driving mode.
1. A display device with integrated touch screen, the display device comprising: a panel configured to include a plurality of electrodes, wherein the panel is division-driven in a display driving mode and a touch driving mode during one frame period; a display driver IC configured to apply a common voltage to the plurality of electrodes during the display driving mode; and an ROIC configured to apply, to the plurality of electrodes, a touch scan signal for sensing a touch when the touch driving mode is a first touch driving mode, and apply, to the plurality of electrodes, a touch scan signal for detecting a touch input position when the touch driving mode is a second touch driving mode, wherein the number of times the touch scan signal is applied to the plurality of electrodes during one frame period when the touch driving mode is the first touch driving mode is smaller than the number of times the touch scan signal is applied to the plurality of electrodes during one frame period when the touch driving mode is the second touch driving mode. 2. The display device of claim 1, wherein the first touch driving mode is an idle driving mode, and the second touch driving mode is an active driving mode. 3. The display device of claim 1, wherein, when the touch driving mode is the first touch driving mode, the touch scan signal is applied to the plurality of electrodes once during one frame period. 4. The display device of claim 1, wherein, when the touch driving mode is the second touch driving mode, the touch scan signal is applied to the plurality of electrodes twice or more during one frame period. 5. The display device of claim 1, wherein, when the touch driving mode is the first touch driving mode, the ROIC is turned on during a period where the touch scan signal is applied to the plurality of electrodes in the one frame period. 6. 
The display device of claim 1, wherein, when the touch driving mode is the second touch driving mode, the ROIC is turned off during the display driving mode in the one frame period. 7. The display device of claim 1, wherein the display driving mode and the second touch driving mode are alternately division-driven during the one frame period. 8. The display device of claim 1, wherein the ROIC receives, from the plurality of electrodes, a feedback signal based on the touch scan signal for sensing the touch and a feedback signal based on the touch scan signal for detecting the touch input position, and determines whether the panel is touched or detects the touch input position.
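The mode-dependent scan scheduling in claims 1, 3, 4, and 7 (one touch scan per frame in the idle mode, several alternating display/touch slots per frame in the active mode) can be sketched as a simple frame scheduler. The slot names and the active-mode scan count are illustrative assumptions, not values from the patent.

```python
def frame_schedule(mode, active_scans=4):
    """Return the ordered driving slots for one frame period."""
    if mode == "idle":
        # First touch driving mode: the touch scan signal is applied
        # once during one frame period (claim 3).
        return ["display", "touch_presence_scan"]
    if mode == "active":
        # Second touch driving mode: display driving and position-detecting
        # touch scans alternate within the frame (claims 4 and 7).
        slots = []
        for _ in range(active_scans):
            slots += ["display", "touch_position_scan"]
        return slots
    raise ValueError(f"unknown mode: {mode}")

idle = frame_schedule("idle")
active = frame_schedule("active")
# Claim 1: fewer scans per frame in the first mode than in the second.
assert idle.count("touch_presence_scan") < active.count("touch_position_scan")
print(len(idle), len(active))  # → 2 8
```

Scanning only once per frame in the idle mode is what lets the ROIC stay powered down for most of the frame, which is the power-saving point of the two-mode split.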
2,600
9,812
9,812
14,740,862
2,611
Several embodiments of scalable image processing systems and methods are disclosed herein whereby color management processing of source image data to be displayed on a target display is changed according to varying levels of metadata.
1. A method for processing and rendering image data on a target display through a set of levels of metadata, said metadata being associated with said image data, the steps of said method comprising: inputting the image data; ascertaining the set of levels of metadata associated with the image data; if no metadata is associated with the image data, performing at least one of a group of image processing steps, said group comprising: switching to default values and adaptively calculating parameter values; if metadata is associated with the image data, calculating color management algorithm parameters according to the set of levels of metadata associated with the image data.
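The branching in this claim — default parameters when no metadata is present, metadata-derived parameters otherwise — can be sketched as below. The function name, the `gamma`/`white_point` parameter choices, and the metadata keys are hypothetical placeholders; the claim does not specify which color-management parameters are computed.

```python
def cm_parameters(metadata=None):
    """Pick color-management algorithm parameters from available metadata.

    With no metadata (claim's first branch), fall back to default values.
    With metadata (second branch), derive parameters from the supplied level.
    """
    if not metadata:
        # No metadata associated with the image data: switch to defaults.
        return {"gamma": 2.2, "white_point": "D65", "source": "default"}
    # Metadata present: calculate parameters according to its level.
    return {
        "gamma": metadata.get("gamma", 2.2),
        "white_point": metadata.get("white_point", "D65"),
        "source": f"level-{metadata.get('level', 0)}",
    }

print(cm_parameters())                              # default path
print(cm_parameters({"level": 2, "gamma": 2.4}))    # metadata-driven path
```

A real implementation would also cover the claim's other fallback, adaptively estimating parameters from the image content itself rather than using fixed defaults.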
2,600
9,813
9,813
13,529,088
2,645
A method for providing a user agent (UA) with service identification data. The method includes an application server (AS) transmitting Session Initiation Protocol (SIP) data. The SIP data comprises an identifier of the AS and a service identifier for at least one service supported by the AS. The method further includes the UA receiving the SIP data.
1. A method for providing service identification data, comprising: receiving Session Initiation Protocol (SIP) data from a first user agent (UA); adding an identifier of an application server (AS) and a service identifier for at least one service supported by the AS to the SIP data; and transmitting the SIP data to a second UA, the at least one service invoked during communication between the first UA and the second UA, wherein the second UA is not the AS. 2. The method of claim 1 wherein the identifier of the AS is a uniform resource identifier (URI). 3. The method of claim 2 wherein the URI is Globally Routable UA URI (GRUU). 4. The method of claim 2 wherein the URI contains a ‘gr’ parameter. 5. The method of claim 2 wherein the service identifier is an IP (Internet Protocol) Multimedia Subsystem (IMS) Communication Service Identifier (ICSI). 6. The method of claim 5 wherein a URI parameter in the URI is set equal to the ICSI. 7. The method of claim 6 wherein the URI parameter is a g.3gpp.app_ref tag. 8. The method of claim 2 wherein the URI is included in one of: a Record-Route header; a Record header; a Via header; and a Contact address. 9. The method of claim 8 wherein the URI is included in a SIP INVITE message. 10. The method of claim 9 wherein a user portion of the URI identifies at least one of: a UA that originated the SIP INVITE message; and a UA that terminates the SIP INVITE message. 11. The method of claim 1 wherein the service identifier is associated with the at least one service by one of: an agreement between the UA and the AS; a hard coding of the association; and execution of an algorithm that performs the association. 12. 
A user agent (UA) comprising: a processor configured to receive Session Initiation Protocol (SIP) data comprising: an identifier of an application server (AS); and a service identifier for at least one service supported by the AS, the at least one service invoked during communication between the UA and a second UA, wherein the second UA is not the AS. 13. The UA of claim 12 wherein the identifier of the AS is a uniform resource identifier (URI). 14. The UA of claim 13 wherein the URI is Globally Routable UA URI (GRUU). 15. The UA of claim 13 wherein the URI contains a ‘gr’ parameter. 16. The UA of claim 13 wherein the service identifier is an IP (Internet Protocol) Multimedia Subsystem (IMS) Communication Service Identifier (ICSI). 17. The UA of claim 16 wherein a URI parameter in the URI is set equal to the ICSI. 18. The UA of claim 17 wherein the URI parameter is a g.3gpp.app_ref tag. 19. The UA of claim 13 wherein the URI is included in one of: a Record-Route header; a Record header; a Via header; and a Contact address. 20. The UA of claim 19 wherein the URI is included in a SIP INVITE message. 21. The UA of claim 20 wherein a user portion of the URI identifies at least one of: a UA that originated the SIP INVITE message; and a UA that terminates the SIP INVITE message. 22. The UA of claim 12 wherein the service identifier is associated with the at least one service by one of: an agreement between the UA and the AS; a hard coding of the association; and execution of an algorithm that performs the association. 23. A network component comprising: a processor configured to: receive Session Initiation Protocol (SIP) data; add an identifier of the network component and a service identifier for at least one service supported by the network component to the SIP data; and transmit the SIP data to a UA, the at least one service invoked during communication between the UA and a second UA, wherein the second UA is not the network component. 24. 
The network component of claim 23 wherein the identifier of the network component is a uniform resource identifier (URI). 25. The network component of claim 24 wherein the URI is Globally Routable UA URI (GRUU). 26. The network component of claim 24 wherein the URI contains a ‘gr’ parameter. 27. The network component of claim 24 wherein the service identifier is an IP (Internet Protocol) Multimedia Subsystem (IMS) Communication Service Identifier (ICSI). 28. The network component of claim 27 wherein a URI parameter in the URI is set equal to the ICSI. 29. The network component of claim 28 wherein the URI parameter is a g.3gpp.app_ref tag. 30. The network component of claim 24 wherein the URI is included in one of: a Record-Route header; a Record header; a Via header; and a Contact address. 31. The network component of claim 30 wherein the URI is included in a SIP INVITE message. 32. The network component of claim 31 wherein a user portion of the URI identifies at least one of: a user agent that originated the SIP INVITE message; and a user agent that terminates the SIP INVITE message. 33. The network component of claim 23 wherein the service identifier is associated with the at least one service by one of: an agreement between a user agent and the network component; a hard coding of the association; and execution of an algorithm that performs the association. 34. The network component of claim 23 wherein the network component is one of: an application server; and a telephony application server.
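Claims 2-7 describe the concrete wire format: the AS identifier is a GRUU-style URI carrying a ‘gr’ parameter, and the service identifier (an IMS ICSI) rides on it as a g.3gpp.app_ref URI parameter, the whole thing placed in a header such as Record-Route. A minimal sketch of assembling such a header line follows; the GRUU and ICSI values are illustrative examples, not values from the claims.

```python
def as_record_route(gruu: str, icsi: str) -> str:
    """Build a Record-Route header carrying the AS identifier and service id.

    The AS identifier is a GRUU (a URI with a 'gr' parameter), and the ICSI
    service identifier is attached as a g.3gpp.app_ref URI parameter.
    """
    return f'Record-Route: <{gruu};g.3gpp.app_ref="{icsi}">'

header = as_record_route(
    "sip:as1@example.com;gr=urn:uuid:f81d4fae",   # hypothetical GRUU for the AS
    "urn:urn-7:3gpp-service.ims.icsi.mmtel",      # example IMS ICSI (MMTel)
)
print(header)
```

A UA receiving an INVITE with this header can read off both which application server inserted itself into the path (the GRUU, routable back to that specific AS instance) and which service it supports (the ICSI), without any out-of-band configuration.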
2,600
9,814
9,814
14,605,717
2,643
A mobile apparatus is described. When the mobile apparatus is in a locked state and a user is detected, the user is authenticated automatically on background using a primary biometric authentication method. In response to authenticating the user automatically on background, the locked state of the mobile apparatus is opened, and a screen relating to the locked state is provided on a display even though the locked state has been opened.
1. A method comprising: detecting a user with a mobile apparatus, the mobile apparatus being in a locked state; authenticating the user with the mobile apparatus automatically on background using a primary biometric authentication method; unlocking the locked state of the mobile apparatus in response to authenticating the user automatically on background, the unlocking placing the mobile apparatus in an unlocked state; and providing a screen that indicates the mobile apparatus is in the unlocked state. 2. A method according to claim 1, wherein the screen relating to the unlocked state is a glance screen. 3. A method according to claim 1, comprising: receiving an input from the user; and providing a normal operating system view relating to the unlocked state on the display. 4. A method according to claim 1, comprising: providing user specific information on the screen in response to authenticating the user. 5. A method according to claim 1, comprising: automatically locking the mobile apparatus if receiving no further user input within a predetermined period of time after the user has been authenticated. 6. A method according to claim 1, comprising providing an indication on the screen to the user when the user has been authenticated. 7. A method according to claim 1, wherein the primary biometric authentication method comprises at least one of the following: iris authentication; authentication based on facial recognition; and fingerprint authentication. 8. A method according to claim 1, comprising: authenticating the user with the mobile apparatus using a secondary authentication method after the user has been authenticated using the primary biometric authentication method; and providing user-specific information on the screen in response to authenticating the user using the secondary authentication method. 9. 
A method according to claim 8, comprising: enabling at least two secondary authentication methods; linking specific sets of user-specific information to each of the at least two secondary authentication methods; and wherein providing user-specific information comprises providing the user-specific information on the screen linked to the secondary authentication method in response to authenticating the user using the secondary authentication method. 10. A mobile apparatus comprising: a display; at least one processor, and at least one memory storing program instructions that, when executed by the at least one processor, cause the mobile apparatus to: detect a user, the mobile apparatus being in a locked state; authenticate the user automatically on background using a primary biometric authentication method; unlock the locked state in response to authenticating the user automatically on background, the unlocking placing the mobile apparatus in an unlocked state; and provide a screen that indicates the mobile apparatus is in the unlocked state. 11. A mobile apparatus according to claim 10, wherein the screen relating to the unlocked state is a glance screen. 12. A mobile apparatus according to claim 10, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: receive an input from the user; and provide a normal operating system view relating to the unlocked state on the display. 13. A mobile apparatus according to claim 10, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: provide user specific information on the screen in response to authenticating the user. 14. 
A mobile apparatus according to claim 10, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: automatically lock the mobile apparatus if receiving no further user input within a predetermined period of time after the user has been authenticated. 15. A mobile apparatus according to claim 10, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: provide a lock screen, a blank screen or a glance screen on the display prior to authenticating the user. 16. A mobile apparatus according to claim 10, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: provide an indication on the screen to the user when the user has been authenticated. 17. A mobile apparatus according to claim 10, wherein the primary biometric authentication method comprises at least one of the following: iris authentication; authentication based on facial recognition; and fingerprint authentication. 18. A mobile apparatus according to claim 10, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: authenticate the user with the mobile apparatus using a secondary authentication method after the user has been authenticated using the primary biometric authentication method; and provide user specific information on the screen in response to authenticating the user using the secondary authentication method. 19. 
A mobile apparatus according to claim 18, wherein the at least one memory stores program instructions that, when executed by the at least one processor, cause the mobile apparatus to: enable at least two secondary authentication methods; link specific sets of user-specific information to each of the at least two secondary authentication methods; and wherein providing user-specific information comprises providing the user-specific information on the screen linked to the secondary authentication method in response to authenticating the user using the secondary authentication method. 20. A mobile apparatus comprising: a display; a detector; an authentication device; at least one processor, and at least one memory storing program instructions that, when executed by the at least one processor, cause the mobile apparatus to: detect a user with the detector while the mobile apparatus is in a locked state; authenticate the user automatically on background using a primary biometric authentication method with the authentication device; unlock the locked state in response to authenticating the user automatically on background; and provide a screen that indicates the mobile apparatus is in the unlocked state.
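The unlock flow of claims 1 and 5 is essentially a small state machine: detect the user, authenticate in the background, show an unlocked screen, and relock automatically if no further input arrives within a timeout. The sketch below assumes simplified inputs (a boolean biometric result, explicit timestamps) and an invented 30-second default; none of these specifics come from the claims.

```python
class LockState:
    """Minimal background-unlock state machine (claims 1 and 5)."""

    def __init__(self, relock_after: float = 30.0):
        self.locked = True
        self.relock_after = relock_after  # assumed timeout; claims say "predetermined period"
        self._unlocked_at = None

    def on_user_detected(self, now: float, biometric_ok: bool) -> str:
        # Background authentication: unlock without any explicit user action,
        # then show a screen indicating the unlocked state.
        if self.locked and biometric_ok:
            self.locked = False
            self._unlocked_at = now
            return "unlocked-screen"
        return "lock-screen"

    def tick(self, now: float) -> None:
        # Claim 5: relock automatically when no input arrives within the window.
        if not self.locked and now - self._unlocked_at >= self.relock_after:
            self.locked = True
            self._unlocked_at = None

s = LockState(relock_after=10.0)
print(s.on_user_detected(0.0, biometric_ok=True))  # unlocks in background
s.tick(11.0)                                        # timeout elapsed: relocks
print(s.locked)
```

The two-stage design in claims 8-9 would layer on top of this: the primary biometric check opens the device, while a secondary method gates which set of user-specific information the screen reveals.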
2,600
9,815
9,815
15,208,738
2,649
A base station within a network for providing air-to-ground (ATG) wireless communication in various cells may include an antenna array defining a plurality of wedge shaped sectors having respective widths defined in azimuth, and a beamforming control module. The beamforming control module may be configured to communicate with the antenna array via a first RF chain to perform beamforming defining traffic channel beams having a first width, and a second RF chain to perform beamforming defining control channel beams having a second width. The second width may be greater than the first width.
1. A base station within a network for providing air-to-ground (ATG) wireless communication in various cells, the base station comprising: an antenna array defining a plurality of wedge shaped sectors having respective widths defined in azimuth; and a beamforming control module configured to communicate with the antenna array via a first RF chain to perform beamforming defining traffic channel beams having a first width, and a second RF chain to perform beamforming defining control channel beams having a second width, the second width being greater than the first width. 2. The base station of claim 1, wherein the beamforming control module is configured to generate the traffic channel beams and the control channel beams simultaneously via the first and second RF chains, respectively. 3. The base station of claim 1, wherein the second width is substantially equal to a width of each sector. 4. The base station of claim 3, wherein the first width is about 2 degrees to about 5 degrees in azimuth. 5. The base station of claim 1, wherein the second width is about two to ten times larger than the first width. 6. The base station of claim 1, wherein each sector is about 30 degrees to about 60 degrees wide in azimuth. 7. The base station of claim 1, further comprising a remote radio head disposed proximate to the antenna array, and wherein the remote radio head receives location information to enable the remote radio head to employ the first RF chain to provide the traffic channel beams to an aircraft while tracking the aircraft. 8. The base station of claim 1, wherein all sectors of the base station employ separate transmit and receive channels. 9. The base station of claim 8, wherein the separate transmit and receive channels of the base station are different than transmit and receive channels of each adjacent base station of the network. 10. 
The base station of claim 1, wherein the antenna array is fed by both the first and second RF chains to enable the antenna array to have one physical structure but functionally act as two different antennas. 11. A network for providing air-to-ground (ATG) wireless communication in various cells, comprising: a first base station having a first antenna array defining a plurality of first sectors having respective widths defined in azimuth; and a second base station having a second antenna array defining a plurality of second sectors having respective widths defined in azimuth, wherein the first base station and the second base station are disposed offset from each other along a first direction, and wherein each of the first and second base stations includes a beamforming control module configured to communicate with the first and second antenna arrays, respectively, via a first RF chain to perform beamforming defining traffic channel beams having a first width, and a second RF chain to perform beamforming defining control channel beams having a second width, the second width being greater than the first width. 12. The network of claim 11, wherein the beamforming control module is configured to generate the traffic channel beams and the control channel beams simultaneously via the first and second RF chains, respectively. 13. The network of claim 11, wherein the second width is substantially equal to a width of each sector. 14. The network of claim 13, wherein the first width is about 2 degrees to about 5 degrees in azimuth. 15. The network of claim 11, wherein the second width is about two to ten times larger than the first width. 16. The network of claim 11, wherein each sector is about 30 degrees to about 60 degrees wide in azimuth. 17. 
The network of claim 11, wherein each of the first base station and the second base station comprises a remote radio head disposed proximate to the first and second antenna arrays, and wherein the remote radio head receives location information to enable the remote radio head to employ the first RF chain to provide the traffic channel beams to an aircraft while tracking the aircraft. 18. The network of claim 11, wherein all sectors of the first and second base stations employ separate transmit and receive channels. 19. The network of claim 18, wherein the separate transmit and receive channels of the first base station are different than transmit and receive channels of the second base station. 20. The network of claim 11, wherein each of the first and second antenna arrays is fed by both the first and second RF chains to enable the first and second antenna arrays to each have one physical structure but functionally act as two different antennas.
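The relationship the claims describe (narrow traffic beams of about 2 to 5 degrees, control beams two to ten times wider) can be illustrated with the standard broadside beamwidth approximation for a uniform linear array, HPBW ≈ 0.886·λ/(N·d): fewer effective elements per RF chain yield a wider beam. This is an illustrative sketch only; the 0.886 approximation, the element counts, and the half-wavelength spacing are assumptions, not values taken from the claims.

```python
import math

def hpbw_deg(n_elements, spacing_wavelengths=0.5):
    # Approximate half-power beamwidth (degrees) of a uniform linear
    # array at broadside: HPBW ~ 0.886 * lambda / (N * d) radians.
    return math.degrees(0.886 / (n_elements * spacing_wavelengths))

# Hypothetical element counts: a first RF chain driving many elements
# forms a narrow traffic beam; a second RF chain driving few elements
# forms a wide control beam, as in the two-chain claim structure.
traffic_beam = hpbw_deg(32)   # ~3.2 degrees, inside the 2-5 degree range
control_beam = hpbw_deg(4)    # ~25 degrees, several times wider
print(round(traffic_beam, 1), round(control_beam, 1))
```

With these assumed counts the control beam comes out roughly eight times wider than the traffic beam, consistent with the claimed two-to-ten-times relationship.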
2,600
9,816
9,816
15,304,825
2,663
Examples disclosed herein relate to authentication based on data content and data partitions. In one implementation, a processor may execute instructions to determine the likelihood of authenticity based on partitions of the authentication data and content of the authentication data. The processor then outputs information related to the likelihood of authenticity.
1. A computer system, comprising: a processor to: determine partition authentication information related to partitions of authentication data; determine content authentication information related to the content of the authentication data; determine a likelihood of authenticity of the authentication data based on the partition authentication information and the content authentication information; and output information related to the likelihood of authenticity. 2. The computing system of claim 1, wherein determining partition authentication information comprises determining authentication information based on at least one of: the number of partitions of the authentication data; and the amount of data in the partitions of the authentication data. 3. The computing system of claim 1, further comprising a storage to store partition information including at least one of: a key for the number of partitions; a key for the amount of data in each of a number of partitions; and partition information related to previously authenticated authentication data; wherein determining partition authentication information comprises determining a Hamming distance between the stored partition information and partitions of the authentication data. 4. The computing system of claim 1, wherein the authentication data is in the form of at least one of: a two-dimensional color barcode, progressive barcode, steganographic halftone, grid code, and digital document. 5. The computing system of claim 1, wherein determining the content authentication information comprises determining authentication information related to the content of data within partitions determined to be likely to be authentic based on the partition authentication information. 6. 
A method, comprising: determining, by a processor, authentication information related to partitions of authentication data; determining authentication information related to the content of the authentication data; determining a likelihood of authenticity of the authentication data based on the authentication information related to the partitions and the authentication information related to the content; and outputting information related to the likelihood of authenticity. 7. The method of claim 6, wherein determining authentication information related to partitions of the data comprises determining authentication information based on at least one of: the number of data partitions and the amount of data within the partitions. 8. The method of claim 6, wherein determining authentication information related to partitions of the authentication data comprises determining a Hamming distance between stored partition information and partition information related to the authentication data. 9. The method of claim 6, wherein the authentication information related to partitions of the authentication data is determined based on a Hamming distance between partition information related to the authentication data and stored partition information related to previously authenticated data. 10. The method of claim 6, wherein determining authentication information related to partitions of the authentication data comprises selecting partitions of the authentication data with an amount of data determined to be likely to be authentic, and wherein determining authentication information related to the content of the authentication data comprises determining the likelihood of authenticity of data within the selected partitions. 11. 
The method of claim 6, wherein determining authentication information related to partitions of authentication data comprises determining authentication information based on at least one of: the position of data bearing cells in a steganographic halftone; and the amount of data within the data bearing steganographic halftone cells. 12. A machine-readable non-transitory storage medium comprising instructions executable by a processor to: determine the likelihood of authenticity of authentication data based on partitions of the authentication data and content of the authentication data; and output information related to the likelihood of authenticity. 13. The machine-readable non-transitory storage medium of claim 12, wherein instructions to determine the likelihood of authenticity comprise determining the likelihood of authenticity based on at least one of: the number of partitions within the authentication data and the amount of data in the partitions. 14. The machine-readable non-transitory storage medium of claim 12, wherein instructions to determine the likelihood of authenticity comprise determining the likelihood of authenticity based on at least one of: a comparison of partition information related to the authentication data to stored authentication partition information; and a comparison of partition information related to the authentication data to stored information related to partitions of previously authenticated data. 15. The machine-readable non-transitory storage medium of claim 12, wherein the authentication data is in the form of a data bearing image.
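The Hamming-distance comparison recited in claims 3, 8, and 9 can be sketched as follows. The byte-level encoding of the partition key and the distance threshold are illustrative assumptions; the claims do not specify either.

```python
def hamming_distance(a, b):
    # Bit-level Hamming distance between two equal-length byte strings.
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def partition_likelihood(stored_partitions, observed_partitions, max_distance=2):
    # Compare stored partition information (e.g. number of partitions
    # and amount of data per partition, encoded here as one byte each,
    # a hypothetical encoding) against what was decoded from the
    # authentication data; a small Hamming distance suggests the
    # partitioning is likely authentic.
    if len(stored_partitions) != len(observed_partitions):
        return False
    stored = bytes(stored_partitions)
    observed = bytes(observed_partitions)
    return hamming_distance(stored, observed) <= max_distance
```

Per claim 5, content-level checks would then be run only within the partitions that pass this partition-level screen.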
2,600
9,817
9,817
14,080,179
2,641
A method and apparatus are disclosed to provide proximity service discovery in a wireless communication system. The method includes receiving, from a first user equipment, a discovery signal by a second user equipment for discovering or being discovered. The method further includes transmitting, from the second user equipment, a discovery check signal to a network in response to the discovery signal to check a discovery result.
1. A method for proximity service discovery in a wireless communication system, the method comprising: receiving, from a first user equipment (UE), a discovery signal by a second UE for discovering or being discovered; and transmitting, from the second UE, a discovery check signal to a network in response to the discovery signal to check a discovery result. 2. The method of claim 1 further comprises: receiving a response signal, by the second UE, in response to the discovery check signal sent from the network, wherein the response signal includes the discovery result. 3. The method of claim 1, wherein the discovery result comprises: at least one information, wherein the information is identity of UEs discovered, location information, information related to proximity service communication, or any combination thereof. 4. The method of claim 1, wherein the discovery check signal comprises: at least one information, wherein the information is a cell identity received from the first UE, a Public Land Mobile Network (PLMN) identity received from the first UE, an indication whether the first UE has network coverage, an indication whether the first UE needs a proximity service relay, an indication whether the first UE can be a proximity service relay, an indication about whether the second UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, an indication about radio condition between the first UE and the second UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 5. The method of claim 1, wherein transmission of the discovery check signal is restricted by a timer. 6. 
A second user equipment (UE) for proximity service discovery, the second UE comprising: a control circuit; a processor installed in the control circuit; a memory installed in the control circuit and operatively coupled to the processor; wherein the processor is configured to execute a program code stored in memory to: receive, from a first UE, a discovery signal for discovering or being discovered; and transmit a discovery check signal to a network in response to the discovery signal to check a discovery result. 7. The second UE of claim 6 further comprises: receiving a response signal in response to the discovery check signal, from the network; wherein the response signal includes the discovery result. 8. The second UE of claim 6, wherein the discovery result comprises: at least one information, wherein the information is identity of UEs discovered, location information, or information related to proximity service communication. 9. The second UE of claim 6, wherein the discovery check signal comprises: at least one information, wherein the information is a cell identity received from the first UE, a Public Land Mobile Network (PLMN) identity received from the first UE, an indication whether the first UE has network coverage, an indication whether the first UE needs a proximity service relay or not, an indication whether the first UE can be a proximity service relay or not, an indication whether the second UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, an indication about radio condition between the first UE and the second UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 10. 
The second UE of claim 6, wherein transmission of the discovery check signal is restricted by a timer. 11. A method for proximity service discovery in a wireless communication system, the method comprising: transmitting, from a first user equipment (UE), a discovery signal to a second UE for discovering or being discovered; wherein the discovery signal comprises at least one information, wherein the information is a cell identity indicating a cell that the first UE is camping on or connecting to, a Public Land Mobile Network (PLMN) identity indicating a PLMN of a cell that the first UE is camping on or connecting to, an indication about whether the first UE has network coverage, an indication about whether the first UE needs a proximity service relay, an indication about whether the first UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 12. The method of claim 11, wherein transmission of the discovery signal is restricted by a timer. 13. The method of claim 11, wherein the discovery signal is sent periodically, upon enabling proximity service discovery for an application or service, upon opening an application or service with enabled proximity service discovery, upon changing a content of the discovery signal, upon handover completion, upon entering RRC_CONNECTED, or upon changing discoverable permission. 14. 
A first user equipment (UE) for proximity service discovery, the first UE comprising: a control circuit; a processor installed in the control circuit; a memory installed in the control circuit and operatively coupled to the processor; wherein the processor is configured to execute a program code stored in memory to: transmit a discovery signal to a second UE for discovering or being discovered; wherein the discovery signal comprises at least one information, wherein the information is a cell identity indicating a cell that the first UE is camping on or connecting to, a Public Land Mobile Network (PLMN) identity indicating a PLMN of a cell that the first UE is camping on or connecting to, an indication whether the first UE has network coverage, an indication whether the first UE needs a proximity service relay, an indication about whether the first UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 15. The first UE of claim 14, wherein transmission of the discovery signal is restricted by a timer. 16. The first UE of claim 14, wherein the discovery signal is sent periodically, upon enabling proximity service discovery for an application or service, upon opening an application or service with enabled proximity service discovery, upon changing a content of the discovery signal, upon handover completion, upon entering RRC_CONNECTED, or upon changing discoverable permission.
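Several dependent claims restrict transmission of the discovery or discovery check signal by a timer. A minimal sketch of such a gate is shown below; the interval value and the class name are illustrative assumptions, since the claims leave the timer's duration and placement unspecified.

```python
import time

class DiscoveryTimer:
    """Gates discovery (or discovery check) signal transmission so that
    at most one signal is sent per minimum interval."""

    def __init__(self, min_interval_s):
        self.min_interval = min_interval_s
        self._last_tx = None

    def may_transmit(self, now=None):
        # Returns True (and records the transmission time) only if the
        # timer has expired since the last transmission.
        now = time.monotonic() if now is None else now
        if self._last_tx is None or now - self._last_tx >= self.min_interval:
            self._last_tx = now
            return True
        return False
```

A UE would consult `may_transmit()` before each of the triggering events listed in claims 13 and 16 (periodic expiry, handover completion, entering RRC_CONNECTED, and so on), suppressing transmissions that arrive before the interval elapses.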
A method and apparatus are disclosed to provide proximity service discovery in a wireless communication system. The method includes receiving, from a first user equipment, a discovery signal by a second user equipment for discovering or being discovered. The method further includes transmitting, from the second user equipment, a discovery check signal to a network in response to the discovery signal to check a discovery result.1. A method for proximity service discovery in a wireless communication system, the method comprising: receiving, from a first user equipment (UE), a discovery signal by a second UE for discovering or being discovered; and transmitting, from the second UE, a discovery check signal to a network in response to the discovery signal to check a discovery result. 2. The method of claim 1 further comprises: receiving a response signal, by the second UE, in response to the discovery check signal sent from the network, wherein the response signal includes the discovery result. 3. The method of claim 1, wherein the discovery result comprises: at least one information, wherein the information is identity of UEs discovered, location information, information related to proximity service communication, or any combination thereof. 4. 
The method of claim 1, wherein the discovery check signal comprises: at least one information, wherein the information is a cell identity received from the first UE, a Public Land Mobile Network (PLMN) identity received from the first UE, an indication whether the first UE has network coverage, an indication whether the first UE needs a proximity service relay, an indication whether the first UE can be a proximity service relay, an indication about whether the second UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, an indication about radio condition between the first UE and the second UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 5. The method of claim 1, wherein transmission of the discovery check signal is restricted by a timer. 6. A second user equipment (UE) for proximity service discovery, the second UE comprising: a control circuit; a processor installed in the control circuit; a memory installed in the control circuit and operatively coupled to the processor; wherein the processor is configured to execute a program code stored in memory to: receive, from a first UE, a discovery signal for discovering or being discovered; and transmit a discovery check signal to a network in response to the discovery signal to check a discovery result. 7. The second UE of claim 6 further comprises: receiving a response signal in response to the discovery check signal, from the network; wherein the response signal includes the discovery result. 8. 
The second UE of claim 6, wherein the discovery result comprises: at least one information, wherein the information is identity of UEs discovered, location information, or information related to proximity service communication. 9. The second UE of claim 6, wherein the discovery check signal comprises: at least one information, wherein the information is a cell identity received from the first UE, a Public Land Mobile Network (PLMN) identity received from the first UE, an indication whether the first UE has network coverage, an indication whether the first UE needs a proximity service relay or not, an indication whether the first UE can be a proximity service relay or not, an indication whether the second UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, an indication about radio condition between the first UE and the second UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 10. The second UE of claim 6, wherein transmission of the discovery check signal is restricted by a timer. 11. 
A method for proximity service discovery in a wireless communication system, the method comprising: transmitting, from a first user equipment (UE), a discovery signal to a second UE for discovering or being discovered; wherein the discovery signal comprises at least one information, wherein the information is a cell identity indicating a cell that the first UE is camping on or connecting to, a Public Land Mobile Network (PLMN) identity indicating a PLMN of a cell that the first UE is camping on or connecting to, an indication about whether the first UE has network coverage, an indication about whether the first UE needs a proximity service relay, an indication about whether the first UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 12. The method of claim 11, wherein transmission of the discovery signal is restricted by a timer. 13. The method of claim 11, wherein the discovery signal is sent periodically, upon enabling proximity service discovery for an application or service, upon opening an application or service with enabled proximity service discovery, upon changing a content of the discovery signal, upon handover completion, upon entering RRC_CONNECTED, or upon changing discoverable permission. 14. 
A first user equipment (UE) for proximity service discovery, the first UE comprising: a control circuit; a processor installed in the control circuit; a memory installed in the control circuit and operatively coupled to the processor; wherein the processor is configured to execute a program code stored in memory to: transmit a discovery signal to a second UE for discovering or being discovered; wherein the discovery signal comprises at least one information, wherein the information is a cell identity indicating a cell that the first UE is camping on or connecting to, a Public Land Mobile Network (PLMN) identity indicating a PLMN of a cell that the first UE is camping on or connecting to, an indication whether the first UE has network coverage, an indication whether the first UE needs a proximity service relay, an indication about whether the first UE can be a proximity service relay, an indication about range class for proximity service discovery or proximity service communication used by the first UE, an indication about transmission power for proximity service discovery used by the first UE, a Temporary Mobile Subscriber Identity (TMSI), Packet TMSI (P-TMSI), System Architecture Evolution TMSI (S-TMSI), or Mobile Management Entity TMSI (M-TMSI) of the first UE, or any combination thereof. 15. The first UE of claim 14, wherein transmission of the discovery signal is restricted by a timer. 16. The first UE of claim 14, wherein the discovery signal is sent periodically, upon enabling proximity service discovery for an application or service, upon opening an application or service with enabled proximity service discovery, upon changing a content of the discovery signal, upon handover completion, upon entering RRC_CONNECTED, or upon changing discoverable permission.
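The UE claims above enumerate optional information elements carried by the discovery signal ("at least one information ... or any combination thereof") and gate its transmission with a timer (claims 10, 12, and 15). A minimal sketch of that structure in Python; the field names, types, and timer interface are illustrative assumptions, not taken from the patent:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscoverySignal:
    # Information elements the claims enumerate; all optional, since the
    # claim only requires "at least one" of them.  Names are illustrative.
    cell_identity: Optional[int] = None
    plmn_identity: Optional[str] = None
    has_network_coverage: Optional[bool] = None
    needs_prose_relay: Optional[bool] = None
    can_be_prose_relay: Optional[bool] = None
    range_class: Optional[str] = None
    tx_power_dbm: Optional[float] = None
    tmsi: Optional[int] = None

class DiscoverySender:
    """Timer-restricted transmission: a new discovery signal is not sent
    while the minimum-interval timer from the previous one is running."""
    def __init__(self, min_interval_s: float):
        self.min_interval_s = min_interval_s
        self._last_tx: Optional[float] = None

    def try_send(self, signal: DiscoverySignal, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if self._last_tx is not None and now - self._last_tx < self.min_interval_s:
            return False  # timer still running: transmission blocked
        self._last_tx = now
        return True       # would hand the signal to the radio layer here
```

The `now` parameter makes the timer logic testable without waiting on real time; a real implementation would drive it from the protocol stack's clock.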
2,600
9,818
9,818
14,905,158
2,659
An autocorrelation calculation unit 21 calculates an autocorrelation R O (i) from an input signal. A prediction coefficient calculation unit 23 performs linear prediction analysis by using a modified autocorrelation R′ O (i) obtained by multiplying a coefficient w O (i) by the autocorrelation R O (i). It is assumed here, for each order i of some orders i at least, that the coefficient w O (i) corresponding to the order i is in a monotonically increasing relationship with an increase in a value that is negatively correlated with a fundamental frequency of the input signal of the current frame or a past frame.
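The abstract describes multiplying the autocorrelation R_O(i) by a coefficient w_O(i) that grows with the pitch period (i.e., with a value negatively correlated with the fundamental frequency) before linear prediction analysis. A minimal sketch in Python, assuming a Gaussian-shaped lag window and the standard Levinson-Durbin recursion; the window shape and `alpha` are illustrative assumptions, since the patent only requires the monotonic relationship between w_O(i) and the period:

```python
import math

def autocorrelation(x, p_max):
    # R_O(i) = sum_n x(n) * x(n - i), for i = 0..p_max
    n = len(x)
    return [sum(x[j] * x[j - i] for j in range(i, n)) for i in range(p_max + 1)]

def lag_window(p_max, pitch_period, alpha=0.02):
    # Illustrative coefficient w_O(i): a Gaussian lag window whose width
    # grows with the pitch period, so a longer period (lower fundamental
    # frequency) yields a larger w_O(i) for each i >= 1 -- the
    # monotonically increasing relationship the abstract describes.
    return [math.exp(-0.5 * (alpha * i / pitch_period) ** 2)
            for i in range(p_max + 1)]

def levinson_durbin(r, p_max):
    # Standard Levinson-Durbin recursion on the (modified) autocorrelation,
    # returning predictor polynomial a[0..p_max] (a[0] = 1) and the
    # residual prediction-error energy.
    a = [0.0] * (p_max + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, p_max + 1):
        acc = r[m] + sum(a[j] * r[m - j] for j in range(1, m))
        k = -acc / err
        a[1:m] = [a[j] + k * a[m - j] for j in range(1, m)]
        a[m] = k
        err *= 1.0 - k * k
    return a, err

def lpc_with_lag_window(x, p_max, pitch_period):
    r = autocorrelation(x, p_max)
    w = lag_window(p_max, pitch_period)
    r_mod = [wi * ri for wi, ri in zip(w, r)]  # R'_O(i) = w_O(i) * R_O(i)
    return levinson_durbin(r_mod, p_max)
```

Since w_O(0) = 1, the zero-lag energy is unchanged and only the higher-lag terms are attenuated, which is what makes the windowing act as a spectral smoothing whose strength adapts to the pitch.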
1. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i, for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically increasing relationship with an increase in a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame. 2. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . 
, Pmax at least; a coefficient determination step of obtaining a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a period, a quantized value of the period, or a value that is negatively correlated with the fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . , Pmax) is obtained in the coefficient determination step when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . , Pmax) is obtained in the coefficient determination step when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a second value larger than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 3. 
A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination step of obtaining a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . , Pmax), and the coefficient table t2 storing a coefficient wt2(i)(i=0, 1, . . . , Pmax); and a prediction coefficient calculation step of obtaining coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; depending on the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency, the period being classified into one of a case where the period is short, a case where the period is intermediate, and a case where the period is long; the coefficient table t0 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the period is short, the coefficient table t1 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the period is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the period is long; and wt0(i)<wt1(i)≦wt2(i) being satisfied for at least some orders i, wt0(i)≦wt1(i)<wt2(i) being satisfied for at least some orders i of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 4. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . 
. . , Pmax) for each i; for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically decreasing relationship with an increase in a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current or a past frame. 5. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination step of obtaining a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . 
, Pmax) is obtained in the coefficient determination step when the value that is positively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . , Pmax) is obtained in the coefficient determination step when the value that is positively correlated with the fundamental frequency is a second value smaller than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 6. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination step of obtaining a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . , Pmax), and the coefficient table t2 storing a coefficient wt2(i) (i=0, 1, . . .
, Pmax); and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; depending on the value that is positively correlated with the fundamental frequency, the fundamental frequency being classified into one of a case where the fundamental frequency is high, a case where the fundamental frequency is intermediate, and a case where the fundamental frequency is low; the coefficient table t0 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the fundamental frequency is high, the coefficient table t1 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the fundamental frequency is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the fundamental frequency is low; and wt0(i)<wt1(i)≦wt2(i) being satisfied for some orders i at least, wt0(i)≦wt1(i)<wt2(i) being satisfied for some orders i at least of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 7. 
A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically increasing relationship with an increase in a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame. 8. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . , Pmax) when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . 
, Pmax) when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a second value larger than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 9. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . , Pmax), and the coefficient table t2 storing a coefficient wt2(i) (i=0, 1, . . . , Pmax); and a prediction coefficient calculation unit adapted to obtain coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; depending on the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency, the period being classified into one of a case where the period is short, a case where the period is intermediate, and a case where the period is long; the coefficient table t0 being a coefficient table from which the coefficient determination unit obtains the coefficient when the period is short, the coefficient table t1 being a coefficient table from which the coefficient determination unit obtains the coefficient when the period is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient determination unit obtains the coefficient when the period is long; and wt0(i)<wt1(i)≦wt2(i) being satisfied for some orders i at least, wt0(i)≦wt1(i)<wt2(i) being satisfied for some orders i at least of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 10. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically decreasing relationship with an increase in a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame. 11. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) and the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . , Pmax) when the value that is positively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . , Pmax) when the value that is positively correlated with the fundamental frequency is a second value smaller than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 12. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . 
, Pmax), and the coefficient table t2 storing a coefficient wt2(i) (i=0, 1, . . . , Pmax); and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; depending on the value that is positively correlated with the fundamental frequency, the fundamental frequency being classified into one of a case where the fundamental frequency is high, a case where the fundamental frequency is intermediate, and a case where the fundamental frequency is low; the coefficient table t0 being a coefficient table from which the coefficient determination unit obtains the coefficient when the fundamental frequency is high, the coefficient table t1 being a coefficient table from which the coefficient determination unit obtains the coefficient when the fundamental frequency is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient determination unit obtains the coefficient when the fundamental frequency is low; and wt0(i)<wt1(i)≦wt2(i) being satisfied for some orders i at least, wt0(i)≦wt1(i)<wt2(i) being satisfied for some orders i at least of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 13. A program for causing a computer to execute the steps of the linear prediction analysis method according to one of claims 1 to 6. 14. A non-transitory computer-readable recording medium on which a program for causing a computer to execute the steps of the linear prediction analysis method according to one of claims 1 to 6 is recorded.
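Claims 3 and 9 above select the coefficient from one of three tables t0, t1, t2 according to whether the period is short, intermediate, or long, with the ordering wt0(i) ≦ wt1(i) ≦ wt2(i) and strict inequality for at least some orders i. A minimal sketch of that coefficient determination step; the classification thresholds and table values are illustrative assumptions, not values from the patent:

```python
def classify_period(period, short_max=40, long_min=120):
    # Thresholds are illustrative; the claims only require the period to
    # be classified as short, intermediate, or long.
    if period <= short_max:
        return "short"
    if period >= long_min:
        return "long"
    return "intermediate"

# Illustrative tables for Pmax = 4 satisfying the claimed ordering
# wt0(i) <= wt1(i) <= wt2(i), strict for at least some orders i.
TABLES = {
    "short":        [1.0, 0.96, 0.92, 0.88, 0.84],    # t0
    "intermediate": [1.0, 0.98, 0.96, 0.94, 0.92],    # t1
    "long":         [1.0, 0.995, 0.99, 0.985, 0.98],  # t2
}

def select_coefficients(period):
    # Coefficient determination step: pick a single table by period class.
    return TABLES[classify_period(period)]

def modified_autocorrelation(r, period):
    # R'_O(i) = w_O(i) * R_O(i) with the selected table's coefficients;
    # the result would feed the prediction coefficient calculation step.
    w = select_coefficients(period)
    return [wi * ri for wi, ri in zip(w, r)]
```

A longer period selects coefficients closer to 1, i.e. weaker attenuation of the high-lag autocorrelation terms, matching the monotonic relationship required across the method and device claims.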
An autocorrelation calculation unit 21 calculates an autocorrelation R O (i) from an input signal. A prediction coefficient calculation unit 23 performs linear prediction analysis by using a modified autocorrelation R′ O (i) obtained by multiplying a coefficient w O (i) by the autocorrelation R O (i). It is assumed here, for each order i of some orders i at least, that the coefficient w O (i) corresponding to the order i is in a monotonically increasing relationship with an increase in a value that is negatively correlated with a fundamental frequency of the input signal of the current frame or a past frame.1. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i, for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically increasing relationship with an increase in a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame. 2. 
A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination step of obtaining a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a period, a quantized value of the period, or a value that is negatively correlated with the fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . 
, Pmax) is obtained in the coefficient determination step when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . , Pmax) is obtained in the coefficient determination step when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a second value larger than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 3. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination step of obtaining a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . 
, Pmax), and the coefficient table t2 storing a coefficient wt2(i)(i=0, 1, . . . , Pmax); and a prediction coefficient calculation step of obtaining coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; depending on the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency, the period being classified into one of a case where the period is short, a case where the period is intermediate, and a case where the period is long; the coefficient table t0 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the period is short, the coefficient table t1 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the period is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the period is long; and wt0(i)<wt1(i)≦wt2(i) being satisfied for at least some orders i, wt0(i)≦wt1(i)<wt2(i) being satisfied for at least some orders i of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 4. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically decreasing relationship with an increase in a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current or a past frame. 5. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination step of obtaining a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . 
, Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . , Pmax) is obtained in the coefficient determination step when the value that is positively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient wO(i) (i=0, 1, . . . , Pmax) is obtained in the coefficient determination step when the value that is positively correlated with the fundamental frequency is a second value smaller than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 6. A linear prediction analysis method of obtaining, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis method comprising: an autocorrelation calculation step of calculating an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . 
, Pmax at least; a coefficient determination step of obtaining a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . , Pmax), and the coefficient table t2 storing a coefficient wt2(i) (i=0, 1, . . . , Pmax); and a prediction coefficient calculation step of calculating coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; depending on the value that is positively correlated with the fundamental frequency, the fundamental frequency being classified into one of a case where the fundamental frequency is high, a case where the fundamental frequency is intermediate, and a case where the fundamental frequency is low; the coefficient table t0 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the fundamental frequency is high, the coefficient table t1 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the fundamental frequency is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient is obtained in the coefficient determination step when the fundamental frequency is low; and wt0(i)<wt1(i)≦wt2(i) being satisfied for some orders i at least, wt0(i)≦wt1(i)<wt2(i) being satisfied for some orders i at least of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 7. 
A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically increasing relationship with an increase in a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame. 8. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . , Pmax) when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . 
, Pmax) when the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency is a second value larger than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 9. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a period, a quantized value of the period, or a value that is negatively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . , Pmax), and the coefficient table t2 storing a coefficient wt2(i) (i=0, 1, . . . , Pmax); and a prediction coefficient calculation unit adapted to obtain coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; depending on the period, the quantized value of the period, or the value that is negatively correlated with the fundamental frequency, the period being classified into one of a case where the period is short, a case where the period is intermediate, and a case where the period is long; the coefficient table t0 being a coefficient table from which the coefficient determination unit obtains the coefficient when the period is short, the coefficient table t1 being a coefficient table from which the coefficient determination unit obtains the coefficient when the period is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient determination unit obtains the coefficient when the period is long; and wt0(i)<wt1(i)≦wt2(i) being satisfied for some orders i at least, wt0(i)≦wt1(i)<wt2(i) being satisfied for some orders i at least of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 10. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying a coefficient wO(i) (i=0, 1, . . . , Pmax) by the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; for each order i of some orders i at least, the coefficient wO(i) corresponding to the order i being in a monotonically decreasing relationship with an increase in a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame. 11. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient wO(i) (i=0, 1, . . . , Pmax) from a single coefficient table of two or more coefficient tables by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the two or more coefficient tables each storing orders i of i=0, 1, . . . , Pmax in association with coefficients wO(i) corresponding to the orders i; and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient wO(i) (i=0, 1, . . . , Pmax) and the autocorrelation RO(i) (i=0, 1, . . . 
, Pmax) for each i; a first coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . , Pmax) when the value that is positively correlated with the fundamental frequency is a first value; a second coefficient table of the two or more coefficient tables being a coefficient table from which the coefficient determination unit obtains the coefficient wO(i) (i=0, 1, . . . , Pmax) when the value that is positively correlated with the fundamental frequency is a second value smaller than the first value; and for each order i of some orders i at least, the coefficient corresponding to the order i in the second coefficient table being larger than the coefficient corresponding to the order i in the first coefficient table. 12. A linear prediction analysis device that obtains, in each frame, which is a predetermined time interval, coefficients that can be transformed to linear prediction coefficients corresponding to an input time-series signal, the linear prediction analysis device comprising: an autocorrelation calculation unit adapted to calculate an autocorrelation RO(i) (i=0, 1, . . . , Pmax) between an input time-series signal XO(n) of a current frame and an input time-series signal XO(n−i) i samples before the input time-series signal XO(n) or an input time-series signal XO(n+i) i samples after the input time-series signal XO(n), for each i of i=0, 1, . . . , Pmax at least; a coefficient determination unit adapted to obtain a coefficient from a single coefficient table of coefficient tables t0, t1, and t2 by using a value that is positively correlated with a fundamental frequency based on the input time-series signal of the current frame or a past frame, the coefficient table t0 storing a coefficient wt0(i) (i=0, 1, . . . , Pmax), the coefficient table t1 storing a coefficient wt1(i) (i=0, 1, . . . 
, Pmax), and the coefficient table t2 storing a coefficient wt2(i) (i=0, 1, . . . , Pmax); and a prediction coefficient calculation unit adapted to calculate coefficients that can be transformed to first-order to Pmax-order linear prediction coefficients, by using a modified autocorrelation R′O(i) (i=0, 1, . . . , Pmax) obtained by multiplying the obtained coefficient by the autocorrelation RO(i) (i=0, 1, . . . , Pmax) for each i; depending on the value that is positively correlated with the fundamental frequency, the fundamental frequency being classified into one of a case where the fundamental frequency is high, a case where the fundamental frequency is intermediate, and a case where the fundamental frequency is low; the coefficient table t0 being a coefficient table from which the coefficient determination unit obtains the coefficient when the fundamental frequency is high, the coefficient table t1 being a coefficient table from which the coefficient determination unit obtains the coefficient when the fundamental frequency is intermediate, and the coefficient table t2 being a coefficient table from which the coefficient determination unit obtains the coefficient when the fundamental frequency is low; and wt0(i)<wt1(i)≦wt2(i) being satisfied for some orders i at least, wt0(i)≦wt1(i)<wt2(i) being satisfied for some orders i at least of the other orders i, and wt0(i)≦wt1(i)≦wt2(i) being satisfied for the remaining orders i. 13. A program for causing a computer to execute the steps of the linear prediction analysis method according to one of claims 1 to 6. 14. A non-transitory computer-readable recording medium on which a program for causing a computer to execute the steps of the linear prediction analysis method according to one of claims 1 to 6 is recorded.
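The method and device claims above share one computational core: calculate the autocorrelation RO(i), multiply it element-wise by a coefficient table wO(i) selected according to the pitch period (or fundamental frequency), and compute coefficients transformable to linear prediction coefficients from the modified autocorrelation R′O(i) = wO(i)·RO(i). The sketch below is illustrative only, not the patented implementation: the Gaussian lag-window tables t0/t1/t2 and the period thresholds are hypothetical assumptions; only the table-ordering property wt0(i) ≦ wt1(i) ≦ wt2(i) and the R′O(i) multiplication step come from the claims, and Levinson-Durbin recursion is one standard way to realize the "prediction coefficient calculation" step.

```python
import numpy as np

P_MAX = 10  # highest prediction order Pmax (illustrative choice)

def autocorrelation(x, p_max=P_MAX):
    """R_O(i): correlation of x(n) with x(n - i), for i = 0..p_max."""
    n = len(x)
    return np.array([np.dot(x[i:], x[:n - i]) for i in range(p_max + 1)])

def gaussian_lag_window(c, p_max=P_MAX):
    """Hypothetical lag-window table; smaller c gives coefficients closer to 1."""
    i = np.arange(p_max + 1)
    return np.exp(-0.5 * (c * i) ** 2)

# Hypothetical tables: a longer period gets larger coefficients, so that
# wt0(i) <= wt1(i) <= wt2(i) for every order i, as claims 3 and 9 require.
TABLES = {
    "t0": gaussian_lag_window(0.10),  # short period
    "t1": gaussian_lag_window(0.05),  # intermediate period
    "t2": gaussian_lag_window(0.02),  # long period
}

def select_table(period, short=40, long_=120):
    """Pick one table from the period class; boundaries are assumptions."""
    if period < short:
        return TABLES["t0"]
    if period < long_:
        return TABLES["t1"]
    return TABLES["t2"]

def levinson_durbin(r):
    """Solve the Toeplitz normal equations for LPC coefficients a[0..p]."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, p + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a_new = a.copy()
        a_new[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a_new[i] = k
        a, err = a_new, err * (1.0 - k * k)  # update prediction error
    return a

def lpc_with_period_dependent_window(x, period):
    r = autocorrelation(x)
    w = select_table(period)
    r_mod = w * r                            # R'_O(i) = w_O(i) * R_O(i)
    return levinson_durbin(r_mod)
```

The rationale for the ordering constraint is that a longer period (lower fundamental frequency) selects coefficients closer to 1, attenuating the higher-lag autocorrelation terms less, which corresponds to less smoothing of the spectral envelope.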
2,600
9,819
9,819
14,323,241
2,655
A wireless receiver ( 10 ) includes a down converter module ( 210 ) operable to deliver a signal having a signal bandwidth that changes over time, a dynamically controllable filter module ( 200 ) having a filter bandwidth and fed by said down converter module ( 210 ), and a measurement module ( 295 ) operable to at least approximately measure the signal bandwidth, said dynamically controllable filter module ( 200 ) responsive to said measurement module ( 295 ) to dynamically adjust the filter bandwidth to more nearly match the signal bandwidth as it changes over time, whereby output from said filter module ( 200 ) is noise-reduced. Other wireless receivers, electronic circuits, and processes for their operation are disclosed.
1-23. (canceled) 24. A process of operating a wireless receiver, the process comprising altering a filter characteristic including a bandwidth of filter passband dynamically depending on a modulated wireless signal condition involving at least one signal frequency. 25. The process claimed in claim 24 wherein the signal condition includes frequency deviation. 26. The process claimed in claim 24 wherein the signal condition is related to demodulated signal envelope of a frequency modulated wireless signal. 27. The process claimed in claim 24 further comprising demodulating the signal and electronically measuring at least one such signal condition from the demodulated signal for use in such altering of the filter characteristic. 28. The process claimed in claim 24 wherein the filter characteristic includes a low pass filter cutoff frequency. 29. The process claimed in claim 24 further comprising down converting and filtering the signal to supply a baseband modulated signal, demodulating the baseband modulated signal to produce a demodulated signal, electronically generating a first measure of the signal condition based on the baseband modulated signal and a second measure of the signal condition based on the demodulated signal, and electronically deriving the signal condition as a joint logic function of the first measure and the second measure. 30. The process claimed in claim 29 wherein the first measure includes substantially a difference of signal strengths determined under two different filter bandwidths. 31. The process claimed in claim 29 wherein the second measure is based on a demodulated audio envelope. 32. The process claimed in claim 24 further comprising down converting, filtering the signal using the alterable filter characteristic after the down converting, and demodulating the filtered signal. 33. 
The process claimed in claim 24 further comprising down converting the signal to supply a baseband modulated signal, and electronically generating at least a first measure of the signal condition that includes substantially a difference of signal strengths determined under at least two different filter bandwidths, to control the altering of the filter characteristic. 46. The process as claimed in claim 29, wherein the first measure is a signal level of the baseband modulated signal and the second measure is a noise level derived from the demodulated signal. 47. An apparatus for operating a wireless receiver, the apparatus comprising circuitry for altering a filter characteristic including a bandwidth of filter passband dynamically depending on a modulated wireless signal condition involving at least one signal frequency. 48. The apparatus claimed in claim 47 wherein the signal condition includes frequency deviation. 49. The apparatus claimed in claim 47 wherein the signal condition is related to demodulated signal envelope of a frequency modulated wireless signal. 50. The apparatus claimed in claim 47 further comprising circuitry for demodulating the signal and electronically measuring at least one such signal condition from the demodulated signal for use in such altering of the filter characteristic. 51. The apparatus claimed in claim 47 wherein the filter characteristic includes a low pass filter cutoff frequency. 52. The apparatus claimed in claim 47 further comprising circuitry for down converting and filtering the signal to supply a baseband modulated signal, demodulating the baseband modulated signal to produce a demodulated signal, electronically generating a first measure of the signal condition based on the baseband modulated signal and a second measure of the signal condition based on the demodulated signal, and electronically deriving the signal condition as a joint logic function of the first measure and the second measure. 53. 
The apparatus claimed in claim 52 wherein the first measure includes substantially a difference of signal strengths determined under two different filter bandwidths. 54. The apparatus claimed in claim 52 wherein the second measure is based on a demodulated audio envelope. 55. The apparatus as claimed in claim 52, wherein the first measure is a signal level of the baseband modulated signal and the second measure is a noise level derived from the demodulated signal. 56. The apparatus claimed in claim 47 further comprising circuitry for down converting, filtering the signal using the alterable filter characteristic after the down converting, and demodulating the filtered signal. 57. The apparatus claimed in claim 47 further comprising circuitry for down converting the signal to supply a baseband modulated signal, and electronically generating at least a first measure of the signal condition that includes substantially a difference of signal strengths determined under at least two different filter bandwidths, to control the altering of the filter characteristic.
A wireless receiver ( 10 ) includes a down converter module ( 210 ) operable to deliver a signal having a signal bandwidth that changes over time, a dynamically controllable filter module ( 200 ) having a filter bandwidth and fed by said down converter module ( 210 ), and a measurement module ( 295 ) operable to at least approximately measure the signal bandwidth, said dynamically controllable filter module ( 200 ) responsive to said measurement module ( 295 ) to dynamically adjust the filter bandwidth to more nearly match the signal bandwidth as it changes over time, whereby output from said filter module ( 200 ) is noise-reduced. Other wireless receivers, electronic circuits, and processes for their operation are disclosed.1-23. (canceled) 24. A process of operating a wireless receiver, the process comprising altering a filter characteristic including a bandwidth of filter passband dynamically depending on a modulated wireless signal condition involving at least one signal frequency. 25. The process claimed in claim 24 wherein the signal condition includes frequency deviation. 26. The process claimed in claim 24 wherein the signal condition is related to demodulated signal envelope of a frequency modulated wireless signal. 27. The process claimed in claim 24 further comprising demodulating the signal and electronically measuring at least one such signal condition from the demodulated signal for use in such altering of the filter characteristic. 28. The process claimed in claim 24 wherein the filter characteristic includes a low pass filter cutoff frequency. 29. 
The process claimed in claim 24 further comprising down converting and filtering the signal to supply a baseband modulated signal, demodulating the baseband modulated signal to produce a demodulated signal, electronically generating a first measure of the signal condition based on the baseband modulated signal and a second measure of the signal condition based on the demodulated signal, and electronically deriving the signal condition as a joint logic function of the first measure and the second measure. 30. The process claimed in claim 29 wherein the first measure includes substantially a difference of signal strengths determined under two different filter bandwidths. 31. The process claimed in claim 29 wherein the second measure is based on a demodulated audio envelope. 32. The process claimed in claim 24 further comprising down converting, filtering the signal using the alterable filter characteristic after the down converting, and demodulating the filtered signal. 33. The process claimed in claim 24 further comprising down converting the signal to supply a baseband modulated signal, and electronically generating at least a first measure of the signal condition that includes substantially a difference of signal strengths determined under at least two different filter bandwidths, to control the altering of the filter characteristic. 46. The process as claimed in claim 29, wherein the first measure is a signal level of the baseband modulated signal and the second measure is a noise level derived from the demodulated signal. 47. An apparatus for operating a wireless receiver, the apparatus comprising circuitry for altering a filter characteristic including a bandwidth of filter passband dynamically depending on a modulated wireless signal condition involving at least one signal frequency. 48. The apparatus claimed in claim 47 wherein the signal condition includes frequency deviation. 49. 
The apparatus claimed in claim 47 wherein the signal condition is related to demodulated signal envelope of a frequency modulated wireless signal. 50. The apparatus claimed in claim 47 further comprising circuitry for demodulating the signal and electronically measuring at least one such signal condition from the demodulated signal for use in such altering of the filter characteristic. 51. The apparatus claimed in claim 47 wherein the filter characteristic includes a low pass filter cutoff frequency. 52. The apparatus claimed in claim 47 further comprising circuitry for down converting and filtering the signal to supply a baseband modulated signal, demodulating the baseband modulated signal to produce a demodulated signal, electronically generating a first measure of the signal condition based on the baseband modulated signal and a second measure of the signal condition based on the demodulated signal, and electronically deriving the signal condition as a joint logic function of the first measure and the second measure. 53. The apparatus claimed in claim 52 wherein the first measure includes substantially a difference of signal strengths determined under two different filter bandwidths. 54. The apparatus claimed in claim 52 wherein the second measure is based on a demodulated audio envelope. 55. The apparatus as claimed in claim 52, wherein the first measure is a signal level of the baseband modulated signal and the second measure is a noise level derived from the demodulated signal. 56. The apparatus claimed in claim 47 further comprising circuitry for down converting, filtering the signal using the alterable filter characteristic after the down converting, and demodulating the filtered signal. 57. 
The apparatus claimed in claim 47 further comprising circuitry for down converting the signal to supply a baseband modulated signal, and electronically generating at least a first measure of the signal condition that includes substantially a difference of signal strengths determined under at least two different filter bandwidths, to control the altering of the filter characteristic.
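Claims 29–31 (and apparatus counterparts 52–54) describe deriving the bandwidth decision as a joint logic function of two measures: a first measure that is substantially a difference of signal strengths determined under two different filter bandwidths, and a second measure that is a noise level derived from the demodulated signal. A minimal Python sketch of that joint-logic step, with moving-average filters standing in for the receiver's controllable filter; the tap counts, margin, and noise threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def signal_strength(x):
    """RMS level of a block of baseband samples."""
    return float(np.sqrt(np.mean(np.square(x))))

def moving_average(x, taps):
    """Crude low-pass stand-in: more taps ~ narrower filter bandwidth."""
    kernel = np.ones(taps) / taps
    return np.convolve(x, kernel, mode="same")

def bandwidth_decision(baseband, demod_noise_level,
                       narrow_taps=16, wide_taps=4,
                       strength_margin=0.05, noise_threshold=0.05):
    """Joint logic function of the two measures described in the claims.

    first measure: difference of signal strengths under two filter bandwidths;
    second measure: noise level derived from the demodulated signal.
    Thresholds here are hypothetical, chosen only for the example."""
    wide = signal_strength(moving_average(baseband, wide_taps))
    narrow = signal_strength(moving_average(baseband, narrow_taps))
    first_measure = wide - narrow
    widen = first_measure > strength_margin      # energy lies outside the narrow band
    quiet = demod_noise_level < noise_threshold  # demodulated output is clean
    return "wide" if (widen and quiet) else "narrow"
```

In this sketch the passband widens only when both measures agree: the wide filter captures noticeably more signal energy than the narrow one, and the demodulated output is not noisy; otherwise the narrower filter is kept for noise reduction, matching the abstract's goal of tracking the signal bandwidth as it changes over time.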
2,600
9,820
9,820
15,003,106
2,651
Cables are utilized to both charge components of an auditory prosthesis and transmit data signals between components of the auditory prosthesis. The cable is configured so as to enable a recipient to charge her device without having to lose the hearing function of the auditory prosthesis. Connectors can be utilized to connect to the various components of the auditory prosthesis, as well as to a discrete power source. The cable can be connected directly to each component of the auditory prosthesis or can be connected via cables that are already a part of the auditory prosthesis.
1. An apparatus comprising: a first wearable component; a second wearable component adapted to be worn discrete from the first wearable component; and a cable comprising: a power connector adapted to be connected to a discrete power source; a first connector adapted to be connected to the first wearable component; a second connector adapted to be connected to the second wearable component; and a wire operatively connected to the first connector, the second connector, and the power connector. 2. The apparatus of claim 1, wherein the first wearable component comprises a sound processor and the second wearable component comprises a transmission element. 3. The apparatus of claim 2, wherein the transmission element comprises at least one of an induction coil and a vibration element. 4. The apparatus of claim 1, wherein the discrete power source comprises at least one of a battery, a building power supply, and an energy scavenging unit. 5. The apparatus of claim 1, wherein the wire comprises a wire bundle comprising a power wire and a data wire. 6. The apparatus of claim 5, wherein when the cable is connected to each of a power source, the first wearable component, and the second wearable component, a data signal is sent on the data wire substantially simultaneously with a power signal being sent on the power wire. 7. The apparatus of claim 5, further comprising a voltage transformer disposed on the power wire, wherein the voltage transformer alters a voltage of the power signal sent to at least one of the first connector and the second connector. 8. The apparatus of claim 1, wherein the wire comprises a wire bundle comprising: a first power wire connecting the power connector to the first connector; and a second power wire connecting the power connector to the second connector. 9. 
The apparatus of claim 2, wherein the first wearable component comprises a battery and a charging circuit connected to the battery, and wherein the wire is connected to the charging circuit. 10. An apparatus comprising: a cable jacket; a power connector secured to the cable jacket; a first connector secured to the cable jacket and adapted to be connected to a first wearable component; a second connector secured to the cable jacket and adapted to be connected to a second wearable component; and a wire disposed in the cable jacket, wherein the wire connects the first connector and the second connector. 11. The apparatus of claim 10, wherein the wire connects the power connector to both of the first connector and the second connector. 12. The apparatus of claim 11, further comprising a voltage transformer disposed on the wire between the power connector and at least one of the first connector and the second connector. 13. The apparatus of claim 10, wherein the wire comprises a wire bundle comprising: a first power wire connected to the power connector and the first connector, and a second power wire connected to the power connector and the second connector. 14. The apparatus of claim 13, wherein the wire bundle further comprises a data wire connected to the first connector and the second connector. 15. The apparatus of claim 10, further comprising at least one of: a vehicle power adapter; a battery housing; and a power plug, wherein the at least one of the vehicle power adapter, the battery housing, and the power plug is adapted to be connected to the power connector. 16. The apparatus of claim 10, wherein the first connector comprises a male connector and the second connector comprises a female connector. 17. The apparatus of claim 16, wherein the male connector and the female connector are disposed within an integral housing. 18. The apparatus of claim 10, wherein both of the first connector and the second connector comprise a male connector. 19. 
A method comprising: receiving, at a first component of an auditory prosthesis, a data signal sent from a second component of the auditory prosthesis; and substantially simultaneously receiving, at the first component, a power signal sent from a power source discrete from both the first component and the second component. 20. The method of claim 19, wherein substantially simultaneously receiving comprises automatically alternatingly receiving the data signal and the power signal.
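A minimal sketch of the claimed cable behavior, assuming illustrative connector names, source voltage, transformer ratios, and a simple two-slot time-division scheme (none of which appear in the claims):

```python
class CableModel:
    """Toy model (not the patented circuit) of the claimed cable: one power
    connector feeding two wearable-component connectors, with an optional
    per-branch voltage transformer altering the voltage delivered to each
    connector (claims 7-8 and 12).  Names and ratios are assumptions."""

    def __init__(self, source_voltage, transformer_ratios=None):
        self.source_voltage = source_voltage
        self.ratios = transformer_ratios or {}  # connector name -> ratio

    def voltage_at(self, connector):
        """Voltage delivered to a branch; unity ratio if no transformer."""
        return self.source_voltage * self.ratios.get(connector, 1.0)

def shared_wire_slots(n_slots):
    """Claim 20 sketch: the first component receives data and power
    'substantially simultaneously' by alternating them in fine time
    slots on the shared wire."""
    return ["data" if i % 2 == 0 else "power" for i in range(n_slots)]
```

For example, a 5 V discrete source stepped down for a 3.3 V sound-processor branch would be modeled as `CableModel(5.0, {"first": 0.66})`, while the alternating slot pattern shows how data and power interleave at a timescale the components experience as simultaneous.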
2,600
9,821
9,821
14,734,085
2,652
Conferencing applications run on a variety of devices, including portable devices such as smart phones, laptop computers, and tablet computers. A source device may be capturing a high-resolution video to provide the video as a conference portion of a conference being viewed by a number of devices. The devices displaying the conference may only be displaying the conference portion as a thumbnail image or other low-data image, or may not be currently displaying the conference portion at all. The devices viewing the conference may then signal back to the source device to downgrade/terminate the capturing and/or transmission of the video. The source device then provides the video or other conference portion in a data-thrifty format more closely matching the conference portion as it is being displayed on those devices.
1. A communication device, comprising: a network interface; a video sensor configured to capture a video image; and a video controller configured to receive a throttling signal via the network interface from an endpoint receiving the video image, wherein the throttling signal comprises a conference mode of the endpoint, selected by a user of the endpoint displaying the video image, and, in response to the throttling signal, adjusts an attribute of the video image, the attribute affecting the encoding of the video image uploaded to a network via the network interface. 2. The communication device of claim 1, wherein the conference mode comprises indicia of the endpoint providing conference content. 3. The communication device of claim 1, wherein the video controller, upon receiving the throttling signal from the network interface, causes the video controller to adjust the attribute of the video image captured by the video sensor. 4. The communication device of claim 1, wherein the throttling signal comprises indicia of battery charge for a battery powering the endpoint. 5. The communication device of claim 1, wherein: the throttling signal is generated by a conference application executing on the endpoint and presenting, on the endpoint, a conference content comprising the video image and a conference component. 6. The communication device of claim 5, wherein the conference mode comprises indicia of a communication type of the network interface being utilized by the endpoint to receive a conference comprising the video image. 7. The communication device of claim 5, wherein the conference mode comprises indicia of a display mode of the conference component. 8. The communication device of claim 7, wherein the indicia of the display mode of the conference component further comprises indicia of the conference component relative to the video image. 9. 
The communication device of claim 5, wherein: the communication device uploads, via the network interface, a conference comprising the conference content for display by the endpoint and, in response to receiving the conference mode, the conference content is modified in accord with the conference mode indicia. 10. The communication device of claim 1, wherein the attribute comprises at least one of frame rate, frame size, resolution, color, encoding method, and compression type. 11. A method, comprising: receiving a conference content from a first conference device, the conference content comprising a video portion and a conference component; broadcasting, via a network connection, a conference comprising the conference content to a second conference device; receiving a conference mode indicia from the second conference device, wherein the conference mode indicia is associated with the conference content as displayed on the second conference device; and signaling the first conference device to alter the conference content in accord with the conference mode indicia. 12. The method of claim 11, further comprising: determining that the conference mode indicia indicates the second conference device has been placed in a state by a user of the second conference device wherein the second conference device is displaying the conference content in a format requiring less bandwidth than the format of the conference content received; and wherein the step of signaling the first conference device to alter the conference content in accord with the conference mode indicia further comprises signaling the first conference device to throttle bandwidth of the conference content. 13. The method of claim 12, wherein the step of broadcasting the conference further comprises broadcasting the conference with a placeholder content in lieu of at least a portion of the conference content. 14. 
The method of claim 11, further comprising: determining that the conference mode indicia indicates the second conference device is not displaying the conference content; and wherein the step of signaling the first conference device to alter the conference content in accord with the conference mode indicia further comprises signaling the first conference device to discontinue providing the conference content. 15. The method of claim 11, wherein the conference mode indicia further comprises a battery capacity indicia associated with the second conference device. 16. The method of claim 11, wherein the first conference device performs the step of broadcasting the conference. 17. A server, comprising: a network interface configured to receive a conference portion comprising a video from a first device and transmit a conference comprising the conference portion to a second device; and a processor configured to receive, from the second device and via the network interface, a first indicia of a presentation mode of the conference portion selected by a user of the second device and, in response to the first indicia, transmit to the first device, via the network interface, a throttling signal. 18. The server of claim 17, wherein: the processor is further configured to transmit the conference to a third device; and the processor is further configured to receive, from the third device and via the network interface, a second indicia of a presentation mode of the conference portion selected by a user of the third device and, in response to the first indicia and the second indicia, transmit to the first device, via the network interface, the throttling signal. 19. The server of claim 17, wherein the first indicia of the presentation mode indicates an absence of presentation of the conference portion by the second device and the throttling signal indicates termination of the conference portion. 20. 
The server of claim 17, wherein the first indicia of the presentation mode comprises a battery state indicia of the first device.
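One plausible server-side policy for claims 17-19, combining each receiving endpoint's presentation-mode indicia into a single throttling signal for the source device, might be sketched as follows (the mode names and the three-level policy are assumptions for illustration, not the patent's text):

```python
def throttling_decision(presentation_modes):
    """Combine the presentation modes reported by all receiving endpoints
    into one throttling signal for the source device.

    - no endpoint displays the video -> terminate the conference portion
      (claim 19: 'absence of presentation')
    - at most thumbnails displayed   -> request a data-thrifty low resolution
    - otherwise                      -> keep full quality
    """
    if not presentation_modes or all(m == "hidden" for m in presentation_modes):
        return "terminate"
    if all(m in ("hidden", "thumbnail") for m in presentation_modes):
        return "low_resolution"
    return "full"
```

The source device would react to the returned signal by adjusting attributes such as frame rate, frame size, or resolution (claim 10) before uploading further video.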
2,600
9,822
9,822
15,377,632
2,685
A method for operating a driver assistance system for a motor vehicle, a viewing direction of a driver of the motor vehicle being monitored and compared to at least one setpoint value. The at least one setpoint value is determined as a function of the course of a route, and an attention value is determined for the driver as a function of the comparison.
1-12. (canceled) 13. A method for operating a driver assistance system for a motor vehicle, comprising: monitoring a viewing direction of a driver of the motor vehicle; comparing the viewing direction to at least one setpoint value, wherein the at least one setpoint value is determined as a function of a course of a route, and an attention value is determined for the driver as a function of the comparison. 14. The method as recited in claim 13, wherein the attention value is reduced if the viewing direction deviates from the respective setpoint value at least beyond a specifiable period of time. 15. The method as recited in claim 14, wherein the attention value is reduced as a function of the magnitude of the deviation. 16. The method as recited in claim 13, wherein in each case an attention value is ascertained for predetermined route sections or locations on the route. 17. The method as recited in claim 13, wherein the course of the route is ascertained with the aid of a driving environment sensor system of the motor vehicle. 18. The method as recited in claim 13, wherein the course of the route is ascertained with the aid of data of a satellite-based navigation system. 19. The method as recited in claim 13, wherein the attention value is compared to a limiting value, and if the attention value drops below the limiting value, at least one safety measure is initiated. 20. The method as recited in claim 19, wherein, as the at least one safety measure, a warning message is output to the driver. 21. The method as recited in claim 13, wherein at least one of an instantaneous position of the head and an alignment of the head is ascertained in order to monitor the viewing direction. 22. The method as recited in claim 13, wherein, as setpoint values, at least one of an ideal head position, an ideal head pitch angle, an ideal head yaw angle, and an ideal head roll angle is determined for the head of the driver as a function of an instantaneous position of the vehicle on the route. 23. 
An apparatus for operating a driver assistance system for a motor vehicle, comprising: a sensor device to detect a viewing direction of a driver of the vehicle; and a specially adapted control unit designed to monitor the viewing direction of a driver of the motor vehicle, and compare the viewing direction to at least one setpoint value, wherein the at least one setpoint value is determined as a function of a course of a route, and an attention value is determined for the driver as a function of the comparison. 24. A driver assistance system for a motor vehicle, comprising: an apparatus for operating a driver assistance system for a motor vehicle, the apparatus including a sensor device to detect a viewing direction of a driver of the vehicle, and a specially adapted control unit designed to monitor the viewing direction of a driver of the motor vehicle, and compare the viewing direction to at least one setpoint value, wherein the at least one setpoint value is determined as a function of a course of a route, and an attention value is determined for the driver as a function of the comparison.
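A minimal sketch of the attention bookkeeping described in claims 14-15 and 19: the attention value is reduced once the gaze dwells away from the setpoint beyond a specifiable period, scaled by the magnitude of the deviation, and a safety measure is flagged when it drops below the limiting value. The dwell threshold, gain, and limiting value are invented parameters for illustration only:

```python
def update_attention(attention, deviation_deg, dwell_s,
                     max_dwell_s=2.0, gain=0.01, limit=0.5):
    """Reduce the driver's attention value when the viewing direction has
    deviated from the route-dependent setpoint (deviation_deg) for longer
    than the specifiable period (max_dwell_s); the reduction scales with
    both the deviation magnitude and the excess dwell time.  Returns the
    updated attention value and whether a warning should be issued."""
    if dwell_s > max_dwell_s:
        attention -= gain * deviation_deg * (dwell_s - max_dwell_s)
    attention = max(0.0, attention)  # clamp at zero
    return attention, attention < limit
```

In a running system this update would be evaluated per route section (claim 16), with the setpoint head pose recomputed from the course of the route ahead.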
2,600
9,823
9,823
14,802,439
2,633
A receiver according to an embodiment includes a receiver circuit to receive a transition in a first direction, a second transition after the first transition in a second direction, a third transition after the second transition in the first direction, and a fourth transition in the second direction of a signal. The receiver circuit is adapted to determine a first time period between the first and third transitions and to determine a second time period between the second and fourth transitions. The receiver circuit is adapted to determine a datum based on at least one of the first time period and the second time period. Furthermore, the receiver is adapted to indicate an error if the determined first and second time periods do not fulfill a predetermined verification relationship.
1. A receiver, comprising: a receiver circuit to receive a pulse width encoded signal having a first transition in a first direction, a second transition after the first transition in a second direction, a third transition after the second transition in the first direction and a fourth transition in the second direction of the signal, wherein the receiver circuit is adapted to determine a first time period between the first and third transitions and to determine a second time period between the second and fourth transitions, and wherein the receiver circuit is adapted to determine a datum based on at least one of the first time period and the second time period; and wherein the receiver is adapted to indicate an error, if the determined first and second time periods do not fulfill a predetermined verification relationship. 2. The receiver according to claim 1, wherein the receiver is adapted such that the predetermined verification relationship is fulfilled, when a ratio of the determined first time period with respect to the determined second time period assumes a predetermined ratio value or falls within a predetermined range of ratios. 3. The receiver according to claim 1, wherein the receiver is adapted such that the predetermined verification relationship is fulfilled, when the first and second time periods are essentially equal. 4. The receiver according to claim 1, wherein the receiver circuit is adapted to determine the first time period based on the first and third transitions as transitions from a common predefined first signal level to a common predefined second signal level. 5. The receiver according to claim 1, wherein the receiver circuit is adapted to determine the second time period based on the second and fourth transitions as transitions from a common predefined second signal level to a common predefined first signal level. 6. 
The receiver according to claim 1, wherein the receiver circuit is adapted to determine the datum by processing the at least one respective time period, which is variable and depends on the datum. 7. The receiver according to claim 1, wherein the receiver is adapted to not indicate an error, if the first and second time periods fulfill the predetermined verification relationship. 8. The receiver according to claim 1, wherein the receiver circuit is adapted to determine a time basis for determining at least one of the first and second time periods based on a synchronization frame received before receiving the first, second, third and fourth transitions. 9. A receiver, comprising: a receiver circuit to receive a pulse width encoded signal having a first transition in a first direction and a second transition after the first transition in a second direction in a signal, and wherein the receiver circuit is adapted to determine a duration between the first and second transitions; and wherein the receiver is adapted to indicate an error, if a predetermined value and the determined duration essentially deviate from one another. 10. The receiver according to claim 9, wherein the receiver is adapted to obtain the predetermined value by reading the value from a storage location. 11. The receiver according to claim 9, wherein the predetermined value is based on a calibration. 12. The receiver according to claim 11, wherein the receiver is adapted to at least one of calibrate and re-calibrate the predetermined value during operation of the receiver. 13. The receiver according to claim 12, wherein the receiver is adapted to at least one of calibrate and re-calibrate the predetermined value by a low pass filter. 14. 
The receiver according to claim 9, wherein the receiver circuit is adapted to receive a third transition in the first direction, to determine a first time period between the first and third transitions and to determine a datum to be received based on the determined first time period. 15. The receiver according to claim 9, wherein the receiver circuit is adapted to receive a third transition in the first direction and a fourth transition in the second direction after the third transition and to determine a further duration (300) between the third and fourth transitions, and wherein the receiver is adapted to indicate the error, if the determined further duration and the predetermined value or a further predetermined value essentially deviate from one another. 16. The receiver according to claim 15, wherein the receiver circuit is adapted to receive at least one of the third transition after the second transition (220), the third transition directly after the second transition (220), the fourth transition after the third transition and the fourth transition directly after the third transition (230). 17. The receiver according to claim 9, wherein the receiver circuit is adapted to determine a time basis for determining the duration between the first and second transitions based on a synchronization frame received before receiving the first and second transitions. 18. 
A method for detecting an error in a signal comprising a datum, the method comprising: receiving a first transition in a first direction, a second transition after the first transition in a second direction, a third transition after the second transition in the first direction and a fourth transition in the second direction of the signal; determining a first time period between the first and third transitions; determining a second time period between the second and fourth transitions; determining the datum to be received based on at least one of the first time period and the second time period; and indicating an error, if the determined first and second time periods do not fulfill a predetermined verification relationship. 19. A method for detecting an error in a signal, the method comprising: receiving a first transition in a first direction and a second transition after the first transition in a second direction in the signal; determining a duration between the first and second transitions; and indicating an error, if a predetermined value and the determined duration essentially deviate from one another. 20. A computer program having a program code for performing the method of claim 18, when the computer program is executed on a computer, a processor or another programmable hardware.
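The error-detection scheme of claims 1 and 18 can be illustrated with a short sketch: compute the two time periods from the four transition timestamps, derive the datum from a period, and flag an error when the periods fail the verification relationship. This is a minimal sketch, assuming the claim-3 variant in which the relationship is fulfilled when the periods are essentially equal; the function name, threshold, and tolerance parameters are illustrative, not from the patent.

```python
def decode_with_check(t1, t2, t3, t4, bit_threshold, tolerance):
    """Decode one pulse-width encoded datum and verify signal integrity.

    t1..t4 are timestamps of the four transitions (t1/t3 in the first
    direction, t2/t4 in the second). All parameter names are assumptions.
    """
    period_a = t3 - t1  # first time period (first and third transitions)
    period_b = t4 - t2  # second time period (second and fourth transitions)
    # Datum derived from a variable time period (claim 6 analogue).
    datum = 1 if period_a > bit_threshold else 0
    # Claim 3 variant: relationship fulfilled when periods are essentially equal.
    error = abs(period_a - period_b) > tolerance
    return datum, error
```

With an undistorted pulse, `decode_with_check(0.0, 2.0, 5.0, 7.0, 4.0, 0.5)` yields a datum of 1 and no error; shortening the third transition's arrival distorts the periods and raises the error flag.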
2,600
9,824
9,824
14,923,293
2,611
A method of displaying virtual information in a view of a real environment comprising the following steps: determining a current pose of the system relative to at least one part of the real environment and providing accuracy information of the current pose, providing multiple pieces of virtual information, assigning a respective one of the pieces of virtual information to one of different parameters indicative of different pose accuracy information, and displaying at least one of the pieces of virtual information in the view of the real environment according to the accuracy information of the current pose in relation to the assigned parameter.
1. (canceled) 2. A method of displaying virtual information in a view of a real environment, comprising: determining a current pose of a system that displays virtual information in a real environment, wherein the current pose is determined relative to at least one part of the real environment; determining an uncertainty value for the current pose; obtaining, based on the current pose, a plurality of points of interest and an uncertainty parameter for each of the plurality of points of interest; and determining a display method for each of the plurality of points of interest based on the uncertainty value and each point of interest's corresponding uncertainty parameter. 3. The method of claim 2, wherein at least one of the determined display methods results in a point of interest not being displayed. 4. The method of claim 2, wherein the display method for each of the plurality of points of interest is selected from a group including a 3D model and an image overlay. 5. The method of claim 2, wherein determining a display method comprises: determining, for a particular point of interest, whether the uncertainty value satisfies an uncertainty threshold associated with the particular point of interest, and displaying the point of interest in a view of the real environment when the uncertainty value is determined to satisfy the uncertainty threshold for the particular point of interest. 6. The method of claim 2, wherein determining a display method comprises: determining a current distance from the system to a particular point of interest, and displaying the particular point of interest when the current distance satisfies a distance parameter associated with the particular point of interest. 7. The method of claim 2, wherein the uncertainty value is based on whether the current pose is determined using an optical pose determination method. 8. 
The method of claim 2, wherein the uncertainty parameter for each of the plurality of points of interest is one of a plurality of uncertainty parameters for each of the plurality of points of interest, and wherein determining a display method comprises determining a display method for each of the plurality of points of interest based on the uncertainty value and each point of interest's corresponding plurality of uncertainty parameters. 9. A computer readable medium for displaying virtual information in a view of a real environment, comprising computer readable code executable by a processor to: determine a current pose of a system that displays virtual information in a real environment, wherein the current pose is determined relative to at least one part of the real environment; determine an uncertainty value for the current pose; obtain, based on the current pose, a plurality of points of interest and an uncertainty parameter for each of the plurality of points of interest; and determine a display method for each of the plurality of points of interest based on the uncertainty value and each point of interest's corresponding uncertainty parameter. 10. The computer readable medium of claim 9, wherein at least one of the determined display methods results in a point of interest not being displayed. 11. The computer readable medium of claim 9, wherein the display method for each of the plurality of points of interest is selected from a group including a 3D model and an image overlay. 12. 
The computer readable medium of claim 9, wherein the computer readable code to determine a display method comprises computer readable code to: determine, for a particular point of interest, whether the uncertainty value satisfies an uncertainty threshold associated with the particular point of interest, and display the point of interest in a view of the real environment when the uncertainty value is determined to satisfy the uncertainty threshold for the particular point of interest. 13. The computer readable medium of claim 9, wherein the computer readable code to determine a display method comprises computer readable code to: determine a current distance from the system to a particular point of interest, and display the particular point of interest when the current distance satisfies a distance parameter associated with the particular point of interest. 14. The computer readable medium of claim 9, wherein the uncertainty value is based on whether the current pose is determined using an optical pose determination method. 15. The computer readable medium of claim 9, wherein the uncertainty parameter for each of the plurality of points of interest is one of a plurality of uncertainty parameters for each of the plurality of points of interest, and wherein determining a display method comprises determining a display method for each of the plurality of points of interest based on the uncertainty value and each point of interest's corresponding plurality of uncertainty parameters. 16. 
A system for displaying virtual information in a view of a real environment, the system comprising: one or more processors; and a memory comprising computer readable code executable by the one or more processors to: determine a current pose of a system that displays virtual information in a real environment, wherein the current pose is determined relative to at least one part of the real environment; determine an uncertainty value for the current pose; obtain, based on the current pose, a plurality of points of interest and an uncertainty parameter for each of the plurality of points of interest; and determine a display method for each of the plurality of points of interest based on the uncertainty value and each point of interest's corresponding uncertainty parameter. 17. The system of claim 16, wherein at least one of the determined display methods results in a point of interest not being displayed. 18. The system of claim 16, wherein the display method for each of the plurality of points of interest is selected from a group including a 3D model and an image overlay. 19. The system of claim 16, wherein the computer readable code to determine a display method comprises computer readable code to: determine, for a particular point of interest, whether the uncertainty value satisfies an uncertainty threshold associated with the particular point of interest, and display the point of interest in a view of the real environment when the uncertainty value is determined to satisfy the uncertainty threshold for the particular point of interest. 20. The system of claim 16, wherein the computer readable code to determine a display method comprises computer readable code to: determine a current distance from the system to a particular point of interest, and display the particular point of interest when the current distance satisfies a distance parameter associated with the particular point of interest. 21. 
The system of claim 16, wherein the uncertainty value is based on whether the current pose is determined using an optical pose determination method. 22. The system of claim 16, wherein the uncertainty parameter for each of the plurality of points of interest is one of a plurality of uncertainty parameters for each of the plurality of points of interest, and wherein determining a display method comprises determining a display method for each of the plurality of points of interest based on the uncertainty value and each point of interest's corresponding plurality of uncertainty parameters.
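The gating logic of claims 2 and 5 — display a point of interest only when the current pose uncertainty satisfies that point's uncertainty parameter — can be sketched as below. This is a minimal sketch, assuming a single per-POI `max_uncertainty` threshold; the dictionary keys and function name are illustrative, not from the patent.

```python
def select_displayable_pois(pose_uncertainty, pois):
    """Return names of points of interest whose uncertainty parameter is
    satisfied by the current pose uncertainty (claims 2 and 5 analogue).

    Each POI is a dict with illustrative 'name' and 'max_uncertainty'
    fields; a POI with a higher-uncertainty display method could be
    substituted instead of being hidden entirely (claim 3).
    """
    visible = []
    for poi in pois:
        # Threshold satisfied when the pose uncertainty does not exceed it.
        if pose_uncertainty <= poi["max_uncertainty"]:
            visible.append(poi["name"])
    return visible
```

With a pose uncertainty of 2.0, a POI tolerating uncertainty up to 5.0 is shown while one requiring uncertainty below 1.0 is suppressed.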
2,600
9,825
9,825
14,796,949
2,642
Digital multimedia is broadcast to wireless receivers on a unidirectional wireless broadcast channel, while control data necessary for presentation of the multimedia is provided on a bidirectional, point-to-point wireless link.
1-18. (canceled) 19. A wireless client station capable of communicating using at least two communication links, comprising: at least one processor receiving a digital multimedia stream received on a broadcast channel and control data received on a bidirectional wireless link; wherein the processor uses the control data to enable presentation of the multimedia stream on a display and the control data includes at least one key useful in decrypting the multimedia stream and further includes data useful for decompressing the digital multimedia stream. 20. The client station of claim 19, wherein the bidirectional wireless link is not associated with the broadcast channel. 21. The client station of claim 19, wherein the broadcast channel is unidirectional. 22. The client station of claim 19, wherein the bidirectional wireless link is at least one of: a CDMA link, an 802.11 link, a GSM link, a satellite link, and a Bluetooth link. 23. The client station of claim 19, wherein the bidirectional wireless link is a point-to-point wireless communication link. 24. The client station of claim 19, wherein the control data includes data associated with a subscription to a multimedia broadcast service and/or data associated with a registration on a multimedia broadcast network. 25. The client station of claim 19, wherein the control data includes at least one application useful in playing the multimedia data. 26. The client station of claim 19, wherein the control data includes billing information data related to user preferences and/or data related to levels of service related to providing the multimedia stream. 27. The client station of claim 19, wherein the multimedia stream is digital and is encrypted. 28. The client station of claim 19, wherein the control data includes data useful for de-interleaving, decompressing, and decoding the multimedia stream. 29. 
The client station of claim 19, wherein the control data includes data useful for indexing into the multimedia stream for channel selection and tracking. 30. The client station of claim 19, wherein at least one of: services, and products, can be ordered over the bidirectional link. 31. A method of receiving a multimedia stream at a wireless client station capable of communicating using at least two communication links, comprising: receiving the digital multimedia stream received on a wireless broadcast channel and control data on a bidirectional wireless link at the wireless client station; wherein the control data received over the bidirectional wireless link includes at least one key useful in decrypting the multimedia stream and further includes data useful for decompressing the digital multimedia stream. 32. The method of claim 31, wherein the bidirectional wireless link is not associated with the broadcast channel. 33. The method of claim 31, wherein the broadcast channel is unidirectional. 34. The method of claim 31, wherein the bidirectional wireless link is at least one of: a CDMA link, an 802.11 link, a GSM link, a satellite link, and a Bluetooth link. 35. The method of claim 31, wherein the bidirectional wireless link is a point-to-point wireless communication link. 36. The method of claim 31, wherein the control data includes data associated with a subscription to a multimedia broadcast service and/or data associated with a registration on a multimedia broadcast network. 37. The method of claim 31, wherein the control data includes at least one application useful in playing the multimedia data. 38. The method of claim 31, wherein the control data includes billing information data related to user preferences and/or data related to levels of service related to providing the multimedia stream. 39. The method of claim 31, wherein the multimedia stream is digital and is encrypted. 40. 
The method of claim 31, wherein the control data includes data useful for de-interleaving, decompressing, and decoding the multimedia stream. 41. The method of claim 31, wherein the control data includes data useful for indexing into the multimedia stream for channel selection and tracking. 42. The method of claim 31, wherein at least one of: services, and products, can be ordered over the bidirectional link. 43. An apparatus for receiving a multimedia stream at a wireless client station capable of communicating using at least two communication links, comprising: means for receiving the digital multimedia stream received on a wireless broadcast channel and control data on a bidirectional wireless link at the wireless client station; wherein the control data received over the bidirectional wireless link includes at least one key useful in decrypting the multimedia stream and further includes data useful for decompressing the digital multimedia stream. 44. The apparatus of claim 43, wherein the bidirectional wireless link is not associated with the broadcast channel. 45. The apparatus of claim 43, wherein the broadcast channel is unidirectional. 46. The apparatus of claim 43, wherein the bidirectional wireless link is at least one of: a CDMA link, an 802.11 link, a GSM link, a satellite link, and a Bluetooth link. 47. The apparatus of claim 43, wherein the bidirectional wireless link is a point-to-point wireless communication link. 48. The apparatus of claim 43, wherein the control data includes data associated with a subscription to a multimedia broadcast service and/or data associated with a registration on a multimedia broadcast network. 49. The apparatus of claim 43, wherein the control data includes at least one application useful in playing the multimedia data. 50. The apparatus of claim 43, wherein the control data includes billing information data related to user preferences and/or data related to levels of service related to providing the multimedia stream. 51. 
The apparatus of claim 43, wherein the multimedia stream is digital and is encrypted. 52. The apparatus of claim 43, wherein the control data includes data useful for de-interleaving, decompressing, and decoding the multimedia stream. 53. The apparatus of claim 43, wherein the control data includes data useful for indexing into the multimedia stream for channel selection and tracking. 54. The apparatus of claim 43, wherein at least one of services and products can be ordered over the bidirectional link.
Digital multimedia is broadcast to wireless receivers on a unidirectional wireless broadcast channel, while control data necessary for presentation of the multimedia is provided on a bidirectional, point-to-point wireless link. 1-18. (canceled) 19. A wireless client station capable of communicating using at least two communication links, comprising: at least one processor receiving a digital multimedia stream received on a broadcast channel and control data received on a bidirectional wireless link; wherein the processor uses the control data to enable presentation of the multimedia stream on a display and the control data includes at least one key useful in decrypting the multimedia stream and further includes data useful in decompressing the digital multimedia stream. 20. The client station of claim 19, wherein the bidirectional wireless link is not associated with the broadcast channel. 21. The client station of claim 19, wherein the broadcast channel is unidirectional. 22. The client station of claim 19, wherein the bidirectional wireless link is at least one of: a CDMA link, an 802.11 link, a GSM link, a satellite link, and a Bluetooth link. 23. The client station of claim 19, wherein the bidirectional wireless link is a point-to-point wireless communication link. 24. The client station of claim 19, wherein the control data includes data associated with a subscription to a multimedia broadcast service and/or data associated with a registration on a multimedia broadcast network. 25. The client station of claim 19, wherein the control data includes at least one application useful in playing the multimedia data. 26. The client station of claim 19, wherein the control data includes billing information data related to user preferences and/or data related to levels of service related to providing the multimedia stream. 27. The client station of claim 19, wherein the multimedia stream is digital and is encrypted. 28. 
The client station of claim 19, wherein the control data includes data useful for de-interleaving, decompressing, and decoding the multimedia stream.
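Claims 19, 31, and 43 all turn on the same split: the compressed, encrypted multimedia arrives on the broadcast channel, while the decryption key (and decompression parameters) arrive separately over the bidirectional link. A minimal sketch of that split, using `zlib` and an illustrative counter-mode XOR keystream as stand-ins for the codec and cipher, which the claims leave unspecified:

```python
import hashlib
import zlib


def keystream(key: bytes, length: int) -> bytes:
    # Illustrative XOR keystream derived from the control-data key;
    # a stand-in for whatever real cipher the head end would use.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])


def head_end_encode(media: bytes, key: bytes) -> bytes:
    # Broadcast side: compress, then encrypt the multimedia payload.
    compressed = zlib.compress(media)
    return bytes(a ^ b for a, b in zip(compressed, keystream(key, len(compressed))))


def client_decode(broadcast_payload: bytes, control_key: bytes) -> bytes:
    # Client side: the key delivered over the bidirectional link decrypts
    # the broadcast payload, which is then decompressed for presentation.
    ks = keystream(control_key, len(broadcast_payload))
    compressed = bytes(a ^ b for a, b in zip(broadcast_payload, ks))
    return zlib.decompress(compressed)
```

Without the control-data key the broadcast payload is opaque, which is what lets a purely unidirectional channel carry the content while access control rides the point-to-point link.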
TechCenter = 2,600
Unnamed: 0 = 9,826
level_0 = 9,826
ApplicationNumber = 14,479,412
ArtUnit = 2,625
Various embodiments associated with a heads-up display capable of functioning while underwater are described. An underwater mask can have a segment that a diver sees through, and this segment can be augmented with various portions that disclose information to the diver. These portions can relate to the diver herself or to other information, such as the location of a source transmitting a signal. With these portions the diver can quickly learn important information and act on it.
1. An underwater mask, comprising: an eyewear element configured to be substantially transparent; and a disclosure component configured to cause disclosure of a heads-up display upon the eyewear element while the eyewear element is submerged underwater. 2. The underwater mask of claim 1, where the heads-up display comprises a directional portion configured to indicate a location of a signal relative to a direction a wearer of the eyewear element faces. 3. The underwater mask of claim 1, where the heads-up display comprises a directional portion configured to indicate a location of a signal relative to a direction of an antenna that receives the signal. 4. The underwater mask of claim 1, where the heads-up display comprises a distance portion configured to indicate a distance of a signal source relative to a location of a wearer of the eyewear element. 5. The underwater mask of claim 1, where the heads-up display comprises a level portion configured to disclose an oxygen level for a tank set of a wearer of the eyewear element. 6. The underwater mask of claim 1, where the heads-up display comprises a warning portion configured to indicate an equipment error for equipment employed by the wearer of the eyewear element. 7. The underwater mask of claim 1, where the heads-up display comprises an identification portion configured to identify a specific transmitter associated with a specific signal received by a reception component. 8. The underwater mask of claim 7, where the heads-up display comprises a physical vital portion configured to disclose physical vital information about a person associated with the specific transmitter. 9. The underwater mask of claim 1, where the heads-up display comprises a positional portion configured to disclose positional information of the wearer of the eyewear element. 10. 
The underwater mask of claim 1, where the heads-up display comprises a physical vital portion configured to disclose physical vital information about the wearer of the eyewear element. 11. The underwater mask of claim 1, comprising: a housing configured to retain the disclosure component such that the disclosure component functions without substantial adverse impact when submerged about 350 meters or less underwater. 12. A system, comprising: an access component that accesses a data set that pertains to an underwater diver; a configuration component that configures a heads-up display in accordance with the data set, where the heads-up display, as configured, is disclosed upon an eyewear element of the underwater diver; and a non-transitory computer-readable medium that retains at least one instruction associated with the access component, the configuration component, or a combination thereof. 13. The system of claim 12, comprising: a disclosure component that causes disclosure of the heads-up display, as configured, upon the eyewear element while the eyewear element is submerged underwater. 14. The system of claim 13, comprising: a housing configured to retain the disclosure component, the access component, the configuration component, and the non-transitory computer-readable medium such that the disclosure component, the access component, the configuration component, and the non-transitory computer-readable medium function without substantial adverse impact when submerged about 350 meters or less underwater. 15. The system of claim 12, where: the heads-up display comprises a directional portion configured to indicate a location of a signal relative to a direction of an antenna that receives the signal and the heads-up display comprises a distance portion configured to indicate a distance of a signal source relative to a location of a wearer of the eyewear element. 16. 
The system of claim 12, where the heads-up display comprises a warning portion configured to indicate an equipment error for equipment employed by the wearer of the eyewear element. 17. The system of claim 12, where the heads-up display comprises an identification portion configured to identify a specific transmitter associated with a specific signal received by a reception component. 18. The system of claim 12, where the heads-up display comprises a directional portion configured to indicate a location of a signal relative to a direction a wearer of the eyewear element faces. 19. A system, comprising: an access component that accesses a data set that pertains to an underwater diver; a configuration component that configures a heads-up display in accordance with the data set; where the heads-up display, as configured, is disclosed upon an eyewear element of the underwater diver; a disclosure component that causes disclosure of the heads-up display, as configured, upon the eyewear element while the eyewear element is submerged underwater; and a housing that retains the access component, the configuration component, and the disclosure component such that the access component, the configuration component, and the disclosure component function without substantial adverse impact when submerged underwater at a distance of about 50 meters. 20. 
The system of claim 19, where: the heads-up display comprises a directional portion configured to indicate a location of a signal relative to a direction a wearer of the eyewear element faces, the heads-up display comprises a depth portion configured to indicate a depth level of the underwater diver, the heads-up display comprises a distance portion configured to indicate a distance of a signal source relative to a location of a wearer of the eyewear element, the heads-up display comprises a warning portion configured to indicate an equipment error for equipment employed by the wearer of the eyewear element, the heads-up display comprises an identification portion configured to identify a specific transmitter associated with a specific signal received by a reception component, the heads-up display comprises a first physical vital portion configured to disclose physical vital information about a person associated with the specific transmitter, and the heads-up display comprises a second physical vital portion configured to disclose physical vital information about the underwater diver.
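Claims 2-4 and 15 describe the directional and distance portions: the HUD indicates where a transmitting source is relative to the direction the diver faces, and how far away it is. A sketch of that computation, assuming 2-D coordinates and a compass-style heading in degrees (both conventions are my own, not the patent's):

```python
import math


def hud_directional_portion(diver_xy, diver_heading_deg, source_xy):
    # Returns (relative bearing in degrees, in -180..180; distance) from
    # the diver to a signal source. A relative bearing of 0 means the
    # source lies dead ahead of the direction the diver faces.
    dx = source_xy[0] - diver_xy[0]
    dy = source_xy[1] - diver_xy[1]
    absolute_bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 = +y ("north")
    relative = (absolute_bearing - diver_heading_deg + 180.0) % 360.0 - 180.0
    return relative, math.hypot(dx, dy)
```

Rendering `relative` as an arrow and the distance as a number is one plausible reading of how the directional and distance portions would combine on the eyewear element.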
TechCenter = 2,600
Unnamed: 0 = 9,827
level_0 = 9,827
ApplicationNumber = 12,947,321
ArtUnit = 2,622
Systems and methods for increasing the haptic bandwidth of an electronic device are disclosed. One disclosed embodiment of a system is an apparatus having a first actuator; a second actuator; and a processor coupled to the first and second actuators, the processor configured to apply a first command signal to the first actuator to output a first haptic effect from a first start time to a first stop time, the processor configured to apply a second command signal to the second actuator to output a second haptic effect from a second start time to a second stop time.
1. An apparatus comprising: a first actuator; a second actuator; a processor coupled to the first and second actuators, the processor configured to apply a first command signal to the first actuator to output a first haptic effect from a first start time to a first stop time, the processor configured to apply a second command signal to the second actuator to output a second haptic effect from a second start time to a second stop time. 2. The apparatus of claim 1, wherein the first haptic effect and the second haptic effect are vibrations. 3. The apparatus of claim 1, wherein the first actuator is an eccentric rotating mass. 4. The apparatus of claim 1, wherein the second actuator is an eccentric rotating mass. 5. The apparatus of claim 1, wherein the first actuator is a linear resonating actuator. 6. The apparatus of claim 1, wherein the second actuator is a linear resonating actuator. 7. The apparatus of claim 1, further comprising: a touch sensitive component coupled to the processor, the touch sensitive component configured to display a graphical object thereon; and a sensor coupled to the touch sensitive component and the processor, the sensor configured to detect a position of a user input on the touch sensitive component, wherein the processor is configured to activate the first and second actuators upon the sensor indicating the user's input on a haptic enabled area in the graphical object. 8. The apparatus of claim 1, wherein the processor is configured to activate the first and second actuators upon a software program indicating a haptic event has occurred. 9. The apparatus of claim 1, further comprising a third actuator coupled to the processor, wherein the third actuator outputs a third haptic effect different than the first and second haptic effects. 10. 
The apparatus of claim 9, wherein the third actuator is coupled to a touch sensitive component, wherein the third haptic effect is a high-frequency vibration applied to the touch sensitive component to provide a texture effect or to reduce a friction force between the touch sensitive component and a user's input. 11. The apparatus of claim 1, wherein the processor applies an AC voltage to at least a portion of the first command signal to achieve a desired change in velocity from the first actuator. 12. The apparatus of claim 1, wherein the processor applies an AC voltage to at least a portion of the second command signal to achieve a desired change in velocity from the second actuator. 13. A method comprising: receiving, at a processor, an interaction signal for an interaction occurring within a graphical environment, the interaction corresponding to a haptic effect; applying a first input signal to a first actuator to output a first haptic effect, wherein the first actuator outputs the first haptic effect beginning at a first time and terminating at a second time; and applying a second input signal to a second actuator to output a second haptic effect, wherein the second actuator outputs the second haptic effect beginning at a third time, wherein the third time occurs after the second time. 14. The method of claim 13, wherein the second haptic effect terminates at a fourth time, the method further comprising: applying the first input signal to the first actuator to output the first haptic effect beginning at a fifth time, wherein the fifth time occurs after the fourth time. 15. The method of claim 13, further comprising: displaying the graphical environment via a touch sensitive component coupled to the processor; detecting a selection of a haptic area in the graphical environment; and sending the interaction signal corresponding to the selection of the haptic area to the processor. 16. 
The method of claim 13, further comprising outputting a third haptic effect via a third actuator upon receiving a corresponding input command signal from the processor, wherein the third haptic effect is different from the first and second haptic effects. 17. An electronic device comprising: a body; a processor within the body; and a plurality of actuators within the body and coupled to the processor, each actuator configured to output a corresponding haptic effect upon receiving a respective input signal from the processor, wherein the processor is configured to: receive an interaction signal indicating an interaction, the interaction corresponding to a haptic effect; apply a first input signal to a first actuator of the plurality of actuators to output a first haptic effect, wherein the first actuator outputs the first haptic effect beginning at a first time and terminating at a second time; and apply a second input signal to a second actuator of the plurality of actuators to output a second haptic effect, wherein the second actuator outputs the second haptic effect beginning at a third time, wherein the third time occurs after the second time. 18. The device of claim 17, further comprising: a touch sensitive component coupled to the processor and the body, the touch sensitive component configured to display a graphical object thereon; and a sensor coupled to the touch sensitive component and the processor, the sensor configured to detect a position of a user input on the touch sensitive component, wherein the processor is configured to activate at least the first and second actuators upon the sensor indicating the user's input on a haptic enabled area in the graphical object. 19. The device of claim 17, wherein the processor is configured to activate the first and second actuators upon a software program indicating a haptic event has occurred. 20. 
The device of claim 17, further comprising a third actuator coupled to the processor, wherein the third actuator outputs a third haptic effect different from the first and second haptic effects. 21. A system comprising: a piezoelectric actuator; a second actuator; and a processor in communication with the piezoelectric actuator and the second actuator, the processor configured to: generate a first actuator signal, the first actuator signal configured to cause a vibration at a frequency of greater than approximately 20 kHz; generate a second actuator signal, the second actuator signal configured to cause a vibration between approximately 100-300 Hz; transmit the first actuator signal to the piezoelectric actuator; and transmit the second actuator signal to the second actuator. 22. The system of claim 21, further comprising a computer-readable medium, the computer-readable medium configured to store first and second actuator information, the first actuator information comprising at least one parameter describing a characteristic of the first actuator, and the second actuator information comprising at least one parameter describing a characteristic of the second actuator. 23. The system of claim 22, wherein the processor is configured to: receive a command; determine a haptic effect based on the command; select one of the piezoelectric actuator or the second actuator based at least in part on the haptic effect, the first actuator information, and the second actuator information; if the piezoelectric actuator is selected, generate the first actuator signal and transmit the first actuator signal to the piezoelectric actuator, if the second actuator is selected, generate the second actuator signal and transmit the second actuator signal to the second actuator. 24. 
The system of claim 21, wherein the processor is further configured to: receive a command, determine a haptic effect based at least in part on the command, transmit the first actuator signal to the piezoelectric actuator if the haptic effect comprises a friction haptic effect, and transmit the second actuator signal to the second actuator if the haptic effect comprises a vibrational haptic effect. 25. The system of claim 21, further comprising a touch-sensitive input device, and wherein the piezoelectric actuator is coupled to the touch-sensitive input device. 26. The system of claim 21, wherein the second actuator comprises one of an eccentric rotating mass, a linear resonant actuator, or a piezoelectric actuator. 27. The system of claim 21, further comprising a third actuator, the third actuator comprising a second piezoelectric actuator, wherein the piezoelectric actuator is a first piezoelectric actuator and is configured to output haptic effects in a first direction, and wherein the second piezoelectric actuator is configured to output haptic effects in a second direction, the second direction different from the first direction.
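Claims 13-14 hinge on ordering: the second effect begins (the third time) only after the first terminates (the second time), and the first may repeat after the second ends (fifth time after fourth time). A minimal scheduler sketch of that interleaving; the millisecond timeline and actuator labels are illustrative assumptions, not from the patent:

```python
def schedule_sequential_effects(effects):
    # effects: list of (actuator_id, duration_ms) pairs. Each haptic
    # effect is scheduled to start only at the previous effect's stop
    # time, so two actuators can interleave into one continuous effect.
    timeline = []
    t = 0
    for actuator_id, duration_ms in effects:
        timeline.append((actuator_id, t, t + duration_ms))
        t += duration_ms
    return timeline
```

Alternating an ERM and an LRA on such a timeline is one reading of how two individually slow actuators raise the device's effective haptic bandwidth.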
Systems and methods for increasing the haptic bandwidth of an electronic device are disclosed. One disclosed embodiment of a system is an apparatus having a first actuator; a second actuator; and a processor coupled to the first and second actuators, the processor configured to apply a first command signal to the first actuator to output a first haptic effect from a first start time to a first stop time, the processor configured to apply a second command signal to the second actuator to output a second haptic effect from a second start time to a second stop time.1. An apparatus comprising: a first actuator; a second actuator; a processor coupled to the first and second actuators, the processor configured to apply a first command signal to the first actuator to output a first haptic effect from a first start time to a first stop time, the processor configured to apply a second command signal to the second actuator to output a second haptic effect from a second start time to a second stop time. 2. The apparatus of claim 1, wherein the first haptic effect and the second haptic effect are vibrations. 3. The apparatus of claim 1, wherein the first actuator is an eccentric rotating mass. 4. The apparatus of claim 1, wherein the second actuator is an eccentric rotating mass. 5. The apparatus of claim 1, wherein the first actuator is a linear resonating actuator. 6. The apparatus of claim 1, wherein the second actuator is a linear resonating actuator. 7. The apparatus of claim 1, further comprising: a touch sensitive component coupled to the processor, the touch sensitive component configured to display a graphical object thereon; and a sensor coupled to the touch sensitive component and the processor, the sensor configured to detect a position of a user input on the touch sensitive component, wherein the processor is configured to activate the first and second actuators upon the sensor indicating the user's input on a haptic enabled area in the graphical object. 8. 
The apparatus of claim 1, wherein the processor is configured to activate the first and second actuators upon a software program indicating a haptic event has occurred. 9. The apparatus of claim 1, further comprising a third actuator coupled to the processor, wherein the third actuator outputs a third haptic effect different than the first and second haptic effects. 10. The apparatus of claim 9, wherein the third actuator is coupled to a touch sensitive component, wherein the third haptic effect is a high-frequency vibration applied to the touch sensitive component to provide a texture effect or to reduce a friction force between the touch sensitive component and a user's input. 11. The apparatus of claim 1, wherein the processor applies a AC voltage to at least a portion of the first command signal to achieve a desired change in velocity from the first actuator. 12. The apparatus of claim 1, wherein the processor applies a AC voltage to at least a portion of the second command signal to achieve a desired change in velocity from the second actuator. 13. A method comprising: receiving an interaction signal at a processor of an interaction occurring within a graphical environment, the interaction corresponding to a haptic effect; applying a first input signal to a first actuator to output a first haptic effect, wherein the first actuator outputs the first haptic effect beginning at a first time and terminating at a second time; and applying a second input signal to a second actuator to output a second haptic effect, wherein the second actuator outputs the second haptic effect beginning at a third time, wherein the third time occurs after the second time. 14. The method of claim 13, wherein the second haptic effect terminates at a fourth time, the method further comprising: applying the first input signal to the first actuator to output the first haptic effect beginning at a fifth time, wherein the fifth time occurs after the fourth time. 15. 
The method of claim 13, further comprising: displaying the graphical environment via a touch sensitive component coupled to the processor; detecting a selection of a haptic area in the graphical environment; and sending the interaction signal corresponding to the selection of the haptic area to the processor. 16. The method of claim 13, further comprising outputting a third haptic effect via a third actuator upon receiving a corresponding input command signal from the processor, wherein the third haptic effect is different than the first and second haptic effects. 17. An electronic device comprising: a body; a processor within the body; and a plurality of actuators within the body and coupled to the processor, each actuator configured to output a corresponding haptic effect upon receiving a respective input signal from the processor, wherein the processor is configured to: receive an interaction signal indicating an interaction, the interaction corresponding to a haptic effect; apply a first input signal to a first actuator of the plurality of actuators to output a first haptic effect, wherein the first actuator outputs the first haptic effect beginning at a first time and terminating at a second time; and apply a second input signal to a second actuator of the plurality of actuators to output a second haptic effect, wherein the second actuator outputs the second haptic effect beginning at a third time, wherein the third time occurs after the second time. 18. 
The device of claim 17, further comprising: a touch sensitive component coupled to the processor and the body, the touch sensitive component configured to display a graphical object thereon; and a sensor coupled to the touch sensitive component and the processor, the sensor configured to detect a position of a user input on the touch sensitive component, wherein the processor is configured to activate at least the first and second actuators upon the sensor indicating the user's input on a haptic enabled area in the graphical object. 19. The device of claim 17, wherein the processor is configured to activate the first and second actuators upon a software program indicating a haptic event has occurred. 20. The device of claim 17, further comprising a third actuator coupled to the processor, wherein the third actuator outputs a third haptic effect different than the first and second haptic effects. 21. A system comprising: a piezoelectric actuator; a second actuator; and a processor in communication with the piezoelectric actuator and the second actuator, the processor configured to: generate a first actuator signal, the first actuator signal configured to cause a vibration at a frequency of greater than approximately 20 kHz; generate a second actuator signal, the second actuator signal configured to cause a vibration between approximately 100-300 Hz; transmit the first actuator signal to the piezoelectric actuator; and transmit the second actuator signal to the second actuator. 22. The system of claim 21, further comprising a computer-readable medium, the computer-readable medium configured to store first and second actuator information, the first actuator information comprising at least one parameter describing a characteristic of the first actuator, and the second actuator information comprising at least one parameter describing a characteristic of the second actuator. 23. 
The system of claim 22, wherein the processor is configured to: receive a command; determine a haptic effect based on the command; select one of the piezoelectric actuator or the second actuator based at least in part on the haptic effect, the first actuator information, and the second actuator information; if the piezoelectric actuator is selected, generate the first actuator signal and transmit the first actuator signal to the piezoelectric actuator; and if the second actuator is selected, generate the second actuator signal and transmit the second actuator signal to the second actuator. 24. The system of claim 21, wherein the processor is further configured to: receive a command, determine a haptic effect based at least in part on the command, transmit the first actuator signal to the piezoelectric actuator if the haptic effect comprises a friction haptic effect, and transmit the second actuator signal to the second actuator if the haptic effect comprises a vibrational haptic effect. 25. The system of claim 21, further comprising a touch-sensitive input device, and wherein the piezoelectric actuator is coupled to the touch-sensitive input device. 26. The system of claim 21, wherein the second actuator comprises one of an eccentric rotating mass, a linear resonant actuator, or a piezoelectric actuator. 27. The system of claim 21, further comprising a third actuator, the third actuator comprising a second piezoelectric actuator, wherein the piezoelectric actuator is a first piezoelectric actuator and is configured to output haptic effects in a first direction, and wherein the second piezoelectric actuator is configured to output haptic effects in a second direction, the second direction different from the first direction.
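The timing relation in claims 13-14 (a first effect running from a first to a second time, and a second effect that begins only after the first terminates) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `Actuator` class, `schedule_effects` function, and all parameter names are assumptions introduced here.

```python
# Minimal sketch of the dual-actuator timing scheme of claims 13-14:
# the first haptic effect runs on [t1, t2], the second begins at t3 > t2.
# All names are illustrative, not from the patent.

class Actuator:
    def __init__(self, name):
        self.name = name
        self.events = []          # recorded (start, stop) output windows

    def output(self, start, stop):
        self.events.append((start, stop))


def schedule_effects(first, second, t1, t2, t3, t4):
    """Apply the first effect on [t1, t2] and the second on [t3, t4],
    enforcing the claim-13 ordering: the second effect starts only
    after the first one terminates (t3 > t2)."""
    if not (t1 < t2 < t3 < t4):
        raise ValueError("times must satisfy t1 < t2 < t3 < t4")
    first.output(t1, t2)
    second.output(t3, t4)


erm = Actuator("eccentric-rotating-mass")
lra = Actuator("linear-resonant-actuator")
schedule_effects(erm, lra, 0.0, 0.05, 0.06, 0.10)
print(erm.events, lra.events)   # [(0.0, 0.05)] [(0.06, 0.1)]
```

Claim 14's repetition (the first effect restarting at a fifth time after the second terminates) would simply be another `schedule_effects`-style call with later times.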
2,600
9,828
9,828
14,381,398
2,683
A wireless location system for energizing and/or determining location information indicative of a location of at least one transceiver tag ( 2 ) of a plurality of transceiver tags distributed within a tracking area, the transceiver tags being operable to wirelessly transmit an identification signal when an energizing electromagnetic signal is present at the location of the transceiver tag, comprises at least one mobile energizer node ( 4 ) comprising a transmitter ( 24 ) operable to transmit the energizing electromagnetic signal for the at least one transceiver tag ( 2 ). The mobile energizer node ( 4 ) is mounted on a movable device ( 6 ). At least one receiver ( 10 ) is operable to receive the identification signal from the transceiver tags and at least one location evaluator ( 14 ) is operable to determine the location information using a location of the mobile energizing node ( 4 ) and/or a signal characteristic of the received identification signal of the at least one transceiver tag ( 2 ).
1-22. (canceled) 23. Method for energizing a transceiver tag (2), the transceiver tag (2) being operable to wirelessly transmit an identification signal when an energizing electromagnetic signal is present at a location of the transceiver tag (2), the method comprising: moving a mobile energizer node (4) transmitting the energizing electromagnetic signal within at least an energizing range (8) into a vicinity of the transceiver tag (2), such that the transceiver tag (2) is located within the energizing range (8) of the mobile energizer node (4). 24. The method of claim 23, further comprising: receiving the identification signal from the transceiver tag (2); and determining location information indicative of the location of the transceiver tag (2) using a location of the mobile energizing node (4) and/or a signal characteristic of the received identification signal. 25. The method of claim 23, wherein the transceiver tag (2) is a passive Radio Frequency Identification (RFID) tag. 26. A mobile energizer node (4) comprising a transmitter (24) operable to transmit an energizing electromagnetic signal for a transceiver tag (2), the mobile energizer node (4) further comprising a mobility adaptor (26) for mounting the mobile energizer node (4) on a movable device (6). 27. The mobile energizer node of claim 26, further comprising a movable device (6) coupled with the mobile energizer node (4) via the mobility adaptor (26), the movable device (6) being operable to move the mobile energizer node (4) in space. 28. The mobile energizer node (4) of claim 26, further comprising a wireless receiver interface operable to receive query information, the query information identifying a particular transceiver tag, wherein the transmitter (24) is further operable to transmit the query information to trigger the submission of the identification signal of the particular transceiver tag identified by the query information. 29. 
The mobile energizer node of claim 26, further comprising a further transceiver tag, the further transceiver tag being operable to wirelessly transmit an identification signal in the presence of an energizing electromagnetic signal, wherein the further transceiver tag is coupled to the mobile energizer node. 30. The mobile energizer node of claim 26, further comprising a beam former operable to direct a maximum of a directivity pattern of the energizing electromagnetic signal to a desired solid angle. 31. The mobile energizer node of claim 30, wherein the beam former is operable to rotate a direction of the maximum of the directivity pattern with a constant or with a variable angular velocity. 32. The mobile energizer node of claim 26, wherein the transmitter of the mobile energizer node (4) is operable to transmit an energizing electromagnetic signal having a frequency corresponding to a sending frequency of a Radio Frequency Identification (RFID) system. 33. The mobile energizer node of claim 26, having coupled thereto a positioning device operable to determine data indicating a location of the mobile energizer node. 34. The mobile energizer node of claim 33, wherein the data indicating the location of the mobile energizer node is based on odometric data and/or on information determined by an inertial navigational system. 35. 
Wireless location system for determining location information indicative of a location of at least one transceiver tag (2) of a plurality of transceiver tags distributed within a tracking area, the transceiver tags being operable to wirelessly transmit an identification signal when an energizing electromagnetic signal is present at the location of the transceiver tag, comprising: at least one mobile energizer node (4) comprising a transmitter (24) operable to transmit the energizing electromagnetic signal for the at least one transceiver tag (2), the mobile energizer node (4) being mounted on a movable device (6); at least one receiver (10) operable to receive the identification signal from the transceiver tags; and at least one location evaluator (14) operable to determine the location information using a location of the mobile energizing node (4) and/or a signal characteristic of the received identification signal of the at least one transceiver tag (2). 36. The wireless location system of claim 35, further comprising a movement scheduler (16) operable to determine a movement path for the movable device (6), the movement path indicating a desired movement of the movable device (6) within the tracking area. 37. The wireless location system of claim 36, wherein the movement scheduler (16) is operable to determine the movement path such that the desired movement repeatedly follows a predetermined path. 38. The wireless location system of claim 36, wherein the movement scheduler (16) is operable to determine the movement path such that the movable device (6) stays for a longer time period in an area comprising a higher number of tags than in an area comprising a lower number of tags. 39. The wireless location system of claim 36, wherein the movement scheduler (16) is further operable to communicate with and to control the movable device (6) such that the movable device (6) autonomously moves along the movement path as determined by the movement scheduler (16). 40. 
The wireless location system of claim 35, wherein the movable device (6) is one of the group of a crane, a fork truck, a truck, a carriage, a drone, a vacuum cleaner, an autonomous robot, or a utility machine. 41. The wireless location system of claim 35, further comprising one or more infrastructure transceiver tags (12 a, 12 d), an infrastructure transceiver tag being placed at a known position within the tracking area and operable to wirelessly transmit an identification signal when an energizing electromagnetic signal is present at the location of an infrastructure transceiver tag (2). 42. The wireless location system of claim 35, wherein the mobile energizer node comprises a beam former operable to direct a maximum of a directivity pattern of the energizing electromagnetic signal to a desired solid angle, the wireless location system further comprising: a further mobile energizer node comprising a transmitter operable to transmit the energizing electromagnetic signal for the at least one transceiver tag (2), the further mobile energizer node being mounted on a further movable device and comprising a further beam former operable to direct a maximum of a directivity pattern of the energizing electromagnetic signal of the further mobile energizer node to a further desired solid angle, wherein the beam former and the further beam former are operable to rotate a direction of the maximum of their respective directivity patterns with a different angular velocity or according to a different variation scheme.
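Claim 35's location evaluator combines the mobile energizer node's known location with a signal characteristic of the received identification signal. One plausible realization, sketched below under stated assumptions, uses an RSSI-weighted centroid over the node positions at which the tag answered; the function name, the tuple format, and the centroid method are all assumptions, not taken from the patent.

```python
# Illustrative sketch of a location evaluator (claim 35): fuse the mobile
# energizer node's known positions with a signal characteristic (here RSSI)
# of the tag's identification signal to estimate the tag's location.
# The weighted-centroid approach and all names are assumptions.

def estimate_tag_location(readings):
    """readings: list of (node_x, node_y, rssi) tuples collected while the
    moving energizer node received the tag's identification signal.
    Returns an RSSI-weighted centroid as a coarse location estimate."""
    total = sum(rssi for _, _, rssi in readings)
    x = sum(nx * rssi for nx, _, rssi in readings) / total
    y = sum(ny * rssi for _, ny, rssi in readings) / total
    return x, y

# The tag answered at three node positions; the strongest reply was
# received near (2, 2), so the estimate is pulled toward that point.
print(estimate_tag_location([(0, 0, 1.0), (2, 2, 4.0), (4, 0, 1.0)]))
```

A real evaluator could equally use time-of-arrival or phase information as the "signal characteristic"; the claim language deliberately leaves the choice open.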
2,600
9,829
9,829
11,367,426
2,625
A game apparatus includes an LCD, and a touch panel is provided in relation to the LCD. The LCD displays a game screen, and the player performs touch operations (sliding, click, etc.) on the touch panel with use of a stick to draw, correct and decide at random a moving path of an object. When the movement path of the object is decided and some point on the movement path is clicked, the object moves to the clicked position according to the movement path.
1. An object movement control apparatus, comprising: a display means for displaying a movable object; a pointing device provided in relation to said display means; an input detection means for detecting input coordinates input by said pointing device; a determination means for determining whether or not the input coordinates detected by said input detection means matches a display position of said movable object; a drawn locus creation means for, when result of the determination by said determination means shows that there is a match, creating a first locus according to the input coordinates continuously detected by said input detection means; and an object movement means for, after said drawn locus creation means has created said first locus and then said input detection means has detected no input coordinates temporarily, when some input coordinates detected again by said input detection means indicates a first position on said first locus, moving said movable object from a start point of the first locus to the first position, in accordance with the first locus. 2. An object movement control apparatus according to claim 1, further comprising a locus extension means for, after said drawn locus creation means has created said first locus and said input detection means has detected no input coordinates temporarily, when some input coordinates detected again by said input detection means indicates a second position not on said first locus, extending said first locus from an end point of said first locus to the second position, in accordance with a predetermined rule. 3. An object movement control apparatus according to claim 1, further comprising a locus shortening means for, when said first position is a point other than the end point of said first locus, shortening the first locus from the start point to the first position. 4. 
An object movement control apparatus according to claim 3, wherein after said locus shortening means shortens said first locus down to said first position, said drawn locus creation means extends the first locus based on the input coordinates continuously detected by said input detection means, and after said extended first locus has been created and said input detection means has detected no input coordinates temporarily, when some input coordinates detected again by said input detection means indicates a third position on said extended first locus, said object movement means moves said movable object from the start point of the first locus to the third position, in accordance with the first locus. 5. An object movement control apparatus according to claim 1, wherein a plurality of said movable objects exist, a current position of one of said movable objects selected by said pointing device is the start point of said first locus, and said object movement means moves the selected one movable object in accordance with said first locus. 6. An object movement control apparatus according to claim 1, further comprising: an allowable movement range determination means for determining whether or not said first locus exceeds an allowable movement range of said movable object; and a locus decision means for, when said first locus exceeds the allowable movement range of said movable object, deciding a second locus different from the first locus, which links said start point with a fourth position corresponding to current input coordinates detected by said input detection means, in accordance with a predetermined rule. 7. An object movement control apparatus according to claim 6, further comprising a locus erase means for erasing said first locus when said locus decision means decides the second locus. 8. 
A storage medium for storing an object movement control program for an object movement control apparatus comprising a display means for displaying a movable object and a pointing device provided in relation to said display means, wherein said object movement control program causes a processor of said object movement control apparatus to execute: an input detection step of detecting input coordinates input by said pointing device; a determination step of determining whether or not the input coordinates detected in said input detection step matches a display position of said movable object; a drawn locus creation step of, when result of the determination in said determination step shows that there is a match, creating a first locus according to the input coordinates continuously detected in said input detection step; and an object movement step of, after said first locus has been created in said drawn locus creation step and said input detection step has detected no input coordinates temporarily, when some input coordinates detected again in said input detection step indicates a first position on said first locus, moving said movable object from a start point of the first locus to the first position, in accordance with the first locus. 9. 
An object movement control method for an object movement control apparatus comprising a display means for displaying a movable object, a pointing device provided in relation to said display means and an input detection means for detecting coordinates input by the pointing device, including following steps of: (a) determining whether or not the input coordinates detected in said input detection means matches a display position of said movable object; (b) when result of the determination in said step (a) shows that there is a match, creating a first locus according to the input coordinates continuously detected in said input detection means; and (c) after said first locus has been created in said step (b) and said input detection means has detected no input coordinates temporarily, when some input coordinates detected again in said input detection means indicates a first position on said first locus, moving said movable object from a start point of the first locus to the first position, in accordance with the first locus.
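The drag-then-tap interaction of claims 1 and 3 (draw a locus starting at the object, release, then tap a point on the locus to move the object along it, shortening the locus to the tapped point) can be sketched as below. This is a hedged illustration: the `LocusController` class, its method names, and the coordinate representation are assumptions, not the patent's implementation.

```python
# Minimal sketch of the drawn-locus behaviour of claims 1 and 3: dragging
# from the object's position records a locus; after release, tapping a
# point on the locus moves the object along the locus up to that point,
# and the locus is shortened accordingly. All names are illustrative.

class LocusController:
    def __init__(self, object_pos):
        self.object_pos = object_pos
        self.locus = []

    def drag(self, points):
        """Create the first locus from continuously detected coordinates,
        but only if the drag starts on the object's display position
        (the claim-1 'match' determination)."""
        if points and points[0] == self.object_pos:
            self.locus = list(points)

    def tap(self, pos):
        """If the tapped position lies on the locus, move the object along
        the locus from its start point to that position and shorten the
        locus to the travelled part (claim 3). Returns the path taken."""
        if pos in self.locus:
            idx = self.locus.index(pos)
            path = self.locus[: idx + 1]
            self.object_pos = pos
            self.locus = path
            return path
        return None

ctrl = LocusController((0, 0))
ctrl.drag([(0, 0), (1, 0), (2, 1), (3, 1)])
print(ctrl.tap((2, 1)))   # [(0, 0), (1, 0), (2, 1)]
print(ctrl.object_pos)    # (2, 1)
```

Claim 2's extension (a tap off the locus extends it from the end point by a predetermined rule) would be the `else` branch of `tap`, appending interpolated points instead of returning `None`.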
A game apparatus includes an LCD, and a touch panel is provided in relation to the LCD. The LCD displays a game screen, and the player performs touch operations (sliding, click, etc.) on the touch panel with use of a stick to draw, correct and decide at random a moving path of an object. When the movement path of the object is decided and some point on the movement path is clicked, the object moves to the clicked position according to the movement path.1. An object movement control apparatus, comprising: a display means for displaying a movable object; a pointing device provided in relation to said display means; an input detection means for detecting input coordinates input by said pointing device; a determination means for determining whether or not the input coordinates detected by said input detection means matches a display position of said movable object; a drawn locus creation means for, when result of the determination by said determination means shows that there is a match, creating a first locus according to the input coordinates continuously detected by said input detection means; and an object movement means for, after said drawn locus creation means has created said first locus and then said input detection means has detected no input coordinates temporarily, some input coordinates detected again by said input detection means indicates a first position on said first locus, moving said movable object from a start point of the first locus to the first position, in accordance with the first locus. 2. 
An object movement control apparatus according to claim 1, further comprising a locus extension means for, after said drawn locus creation means has created said first locus and said input detection means has detected no input coordinates temporarily, when some input coordinates detected again by said input detection means indicates a second position not on said first locus, extending said first locus from an end point of said first locus to the second position, in accordance with a predetermined rule. 3. An object movement control apparatus according to claim 1, further comprising a locus shortening means for, when said first position is a point other than the end point of said first locus, shortening the first locus from the start point to the first position. 4. An object movement control apparatus according to claim 3, wherein after said locus shortening means shortens said first locus down to said first position, said drawn locus creation means extends the first locus based on the input coordinates continuously detected by said input detection means, and after said extended first locus has been created and said input detection means has detected no input coordinates temporarily, when some input coordinates detected again by said input detection means indicates a third position on said extended first locus, said object movement means moves said movable object from the start point of the first locus to the third position, in accordance with the first locus. 5. An object movement control apparatus according to claim 1, wherein a plurality of said movable objects exist, a current position of one of said movable objects selected by said pointing device is the start point of said first locus, and said object movement means moves the selected one movable object in accordance with said first locus. 6. 
An object movement control apparatus according to claim 1, further comprising: an allowable movement range determination means for determining whether or not said first locus exceeds an allowable movement range of said movable object; and a locus decision means for, when said first locus exceeds the allowable movement range of said movable object, deciding a second locus different from the first locus, which links said start point with a fourth position corresponding to current input coordinates detected by said input detection means, in accordance with a predetermined rule. 7. An object movement control apparatus according to claim 6, further comprising a locus erase means for erasing said first locus when said locus decision means decides the second locus. 8. A storage medium for storing an object movement control program for an object movement control apparatus comprising a display means for displaying a movable object and a pointing device provided in relation to said display means, wherein said object movement control program causes a processor of said object movement control apparatus to execute: an input detection step of detecting input coordinates input by said pointing device; a determination step of determining whether or not the input coordinates detected in said input detection step matches a display position of said movable object; a drawn locus creation step of, when result of the determination in said determination step shows that there is a match, creating a first locus according to the input coordinates continuously detected in said input detection step; and an object movement step of, after said first locus has been created in said drawn locus creation step and said input detection step has detected no input coordinates temporarily, when some input coordinates detected again in said input detection step indicates a first position on said first locus, moving said movable object from a start point of the first locus to the first position, in 
accordance with the first locus. 9. An object movement control method for an object movement control apparatus comprising a display means for displaying a movable object, a pointing device provided in relation to said display means and an input detection means for detecting coordinates input by the pointing device, including the following steps of: (a) determining whether or not the input coordinates detected in said input detection means matches a display position of said movable object; (b) when result of the determination in said step (a) shows that there is a match, creating a first locus according to the input coordinates continuously detected in said input detection means; and (c) after said first locus has been created in said step (b) and said input detection means has detected no input coordinates temporarily, when some input coordinates detected again in said input detection means indicates a first position on said first locus, moving said movable object from a start point of the first locus to the first position, in accordance with the first locus.
2,600
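As a minimal illustration of the drag-then-tap interaction described in claims 1 through 4 above, the first locus can be modeled as an ordered list of points: a drag that starts on the object draws the locus, and a later tap on a locus point moves the object along it and shortens the locus down to the tapped position. This is a sketch of the claimed behavior only; the class and method names are my own, not from the patent.

```python
# Illustrative sketch of the claimed drag/tap object-movement flow.
# Drag from the object draws a locus (claim 1); a later tap on a locus
# point moves the object to it and shortens the locus from the start
# point to that position (claims 1 and 3).

class LocusController:
    def __init__(self, object_pos):
        self.object_pos = object_pos
        self.locus = []          # first locus: ordered list of (x, y) points

    def begin_drag(self, pos):
        # locus creation starts only if the input matches the object position
        if pos == self.object_pos:
            self.locus = [pos]
            return True
        return False

    def drag_to(self, pos):
        # extend the locus with continuously detected input coordinates
        if self.locus:
            self.locus.append(pos)

    def tap(self, pos):
        # after input stops, a tap on a locus point moves the object there
        if pos in self.locus:
            idx = self.locus.index(pos)
            self.object_pos = pos
            # shorten the locus from the start point to the tapped position;
            # the remainder can later be extended again (claim 4)
            self.locus = self.locus[idx:]
            return True
        return False
```

A tap off the locus is simply ignored, matching the condition that the detected coordinates must indicate a position on the first locus.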
9,830
9,830
14,373,279
2,689
The present invention relates to a tag for identifying an object image, including: a feature changing module, including multiple light sources and changing intensity and a wavelength of the light source according to a feature signal; and a communication module, receiving a feature signal generated from the feature changing module, and sending an output signal related to the feature signal. The present invention further relates to a device for identifying an object image, including: a communication module, where the communication module receives a feature signal, where the feature signal includes a command or related information for controlling multiple light sources; a processing unit, where the processing unit receives the feature signal and an image signal from an image sensor, and generates an image identification result according to the image signal and the feature signal; and a storage module, used for storing the image identification result. The present invention can precisely identify an object and provide features related to the object to be identified without using a complex image identification algorithm or a technology for improving resolution of an image.
1.-16. (canceled) 17. A tag for identifying an object in image, comprising: a feature changing module, comprising one or more light sources, and changing the light source according to a feature signal; a location module, used for acquiring location information of the tag, transmitting the location information to a communication module, and sending through the communication module the location information to a pairing side; and the communication module, receiving or sending a feature signal, receiving or sending a radio signal related to the feature signal, and sending location information of the location module, wherein the pairing side allocates different feature signals to tags that are possibly located within a range of a sensor. 18. The tag for identifying an object in image according to claim 17, wherein the feature signal comprises a command for controlling one or more light sources in a light emitting module, and the feature changing module can control, according to the command comprised in the feature signal, a change and a duration of one or more light sources. 19. The tag for identifying an object in image according to claim 17, wherein the communication module receives a feature signal, and transmits the feature signal to the feature changing module, so as to instruct the feature changing module to generate the feature signal. 20. The tag for identifying an object in image according to claim 17, wherein the light source of the feature changing module is a lamp. 21. The tag for identifying an object in image according to claim 17, wherein the light source of the feature changing module is pixels on a screen. 22. The tag for identifying an object in image according to claim 17, wherein the light source of the feature changing module is an infra-red source. 23. The tag for identifying an object in image according to claim 17, wherein a change of the light source is intensity of light. 24. 
The tag for identifying an object in image according to claim 17, wherein a change of the light source is a wavelength of light. 25. The tag for identifying an object in image according to claim 17, wherein the communication module is a radio frequency communication module. 26. The tag for identifying an object in image according to claim 17, wherein the communication module is an infra-red communication module. 27. A device for identifying an object in image, comprising: a communication module, wherein the communication module receives or sends a feature signal, wherein the feature signal comprises a command or related information for controlling a light source; a processing unit, wherein the processing unit receives, from a pairing side through the communication module, information of tags that are possibly located within a range of an image sensor, feature signals that are allocated to the tags, and an image signal of the image sensor, generates an image identification result, and stores the image identification result in a storage module; the storage module, used for storing the image identification result; and a location module, used for acquiring sensing range information of the image sensor, transmitting the sensing range information to the communication module, and sending through the communication module the sensing range information to the pairing side. 28. The device for identifying an object in image according to claim 27, wherein the pairing side matches collected tag locations with the sensing range of the image sensor, finds out tags that are possibly located within the range of the sensor, and transmits the list of tags to the device for identifying an object in image. 29. The device for identifying an object in image according to claim 28, wherein the pairing side allocates different feature signals to tags that are possibly located within the range of the sensor.
The present invention relates to a tag for identifying an object image, including: a feature changing module, including multiple light sources and changing intensity and a wavelength of the light source according to a feature signal; and a communication module, receiving a feature signal generated from the feature changing module, and sending an output signal related to the feature signal. The present invention further relates to a device for identifying an object image, including: a communication module, where the communication module receives a feature signal, where the feature signal includes a command or related information for controlling multiple light sources; a processing unit, where the processing unit receives the feature signal and an image signal from an image sensor, and generates an image identification result according to the image signal and the feature signal; and a storage module, used for storing the image identification result. The present invention can precisely identify an object and provide features related to the object to be identified without using a complex image identification algorithm or a technology for improving resolution of an image.1.-16. (canceled) 17. A tag for identifying an object in image, comprising: a feature changing module, comprising one or more light sources, and changing the light source according to a feature signal; a location module, used for acquiring location information of the tag, transmitting the location information to a communication module, and sending through the communication module the location information to a pairing side; and the communication module, receiving or sending a feature signal, receiving or sending a radio signal related to the feature signal, and sending location information of the location module, wherein the pairing side allocates different feature signals to tags that are possibly located within a range of a sensor. 18. 
The tag for identifying an object in image according to claim 17, wherein the feature signal comprises a command for controlling one or more light sources in a light emitting module, and the feature changing module can control, according to the command comprised in the feature signal, a change and a duration of one or more light sources. 19. The tag for identifying an object in image according to claim 17, wherein the communication module receives a feature signal, and transmits the feature signal to the feature changing module, so as to instruct the feature changing module to generate the feature signal. 20. The tag for identifying an object in image according to claim 17, wherein the light source of the feature changing module is a lamp. 21. The tag for identifying an object in image according to claim 17, wherein the light source of the feature changing module is pixels on a screen. 22. The tag for identifying an object in image according to claim 17, wherein the light source of the feature changing module is an infra-red source. 23. The tag for identifying an object in image according to claim 17, wherein a change of the light source is intensity of light. 24. The tag for identifying an object in image according to claim 17, wherein a change of the light source is a wavelength of light. 25. The tag for identifying an object in image according to claim 17, wherein the communication module is a radio frequency communication module. 26. The tag for identifying an object in image according to claim 17, wherein the communication module is an infra-red communication module. 27. 
A device for identifying an object in image, comprising: a communication module, wherein the communication module receives or sends a feature signal, wherein the feature signal comprises a command or related information for controlling a light source; a processing unit, wherein the processing unit receives, from a pairing side through the communication module, information of tags that are possibly located within a range of an image sensor, feature signals that are allocated to the tags, and an image signal of the image sensor, generates an image identification result, and stores the image identification result in a storage module; the storage module, used for storing the image identification result; and a location module, used for acquiring sensing range information of the image sensor, transmitting the sensing range information to the communication module, and sending through the communication module the sensing range information to the pairing side. 28. The device for identifying an object in image according to claim 27, wherein the pairing side matches collected tag locations with the sensing range of the image sensor, finds out tags that are possibly located within the range of the sensor, and transmits the list of tags to the device for identifying an object in image. 29. The device for identifying an object in image according to claim 28, wherein the pairing side allocates different feature signals to tags that are possibly located within the range of the sensor.
2,600
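Claims 17 and 27 through 29 describe a pairing side that allocates a distinct feature signal to each tag possibly within the sensor range, and a device that identifies a tag by matching the light changes observed in the image stream against those allocated signals. The sketch below models a feature signal as a blink pattern over a fixed number of frames; the function names and the specific encoding are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the pairing-side allocation (claim 29) and the
# device-side matching (claim 27): each candidate tag gets a distinct
# blink pattern, and the device identifies a tag by comparing the pattern
# observed by the image sensor against the allocated patterns.

def allocate_feature_signals(tags_in_range, frames=8):
    """Pairing side: assign each candidate tag a distinct blink pattern.

    Pattern k toggles the light source with period k+1 frames
    (an illustrative encoding; any mutually distinct signals would do).
    """
    return {tag: tuple((frame // (k + 1)) % 2 for frame in range(frames))
            for k, tag in enumerate(tags_in_range)}

def identify(observed_pattern, allocation):
    """Device side: match an observed light pattern to an allocated tag."""
    for tag, pattern in allocation.items():
        if pattern == observed_pattern:
            return tag
    return None   # no allocated tag matches the observation
```

Because the pairing side only allocates signals to tags that could be in the sensor range, the device never has to disambiguate more patterns than there are nearby tags, which is why no complex image identification algorithm is needed.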
9,831
9,831
13,098,576
2,628
A pixel circuit and a display panel with an IR-drop compensation function are disclosed. The display panel includes multiple pixel circuits and multiple compensation circuits. Each of the pixel circuits includes a detecting switch. After a real work voltage of a pixel circuit is transmitted to a corresponding compensation circuit through a corresponding detecting switch, a data transmitted to the pixel circuit is adjusted by the compensation circuit according to a relationship between the real work voltage and an original work voltage.
1. A pixel circuit electrically coupled to a current-supply line, at least a control line and a data line which is for providing data, the pixel circuit comprising: a current-driven device comprising a first end and a second end, wherein a light is emitted from the current-driven device while a current is flowing through the current-driven device from the first end to the second end; a current-control circuit for receiving data from the data line and storing the received data as driving data according to the voltage level of a first control line, receiving a real work voltage from the current-supply line, and controlling the current intensity flowing to the current-driven device via the current-control circuit from the current-supply line according to the driving data; and a detecting switch comprising a control end, a first channel end and a second channel end, wherein the first channel end of the detecting switch is electrically coupled to the current-control circuit and is used for retrieving the real work voltage, the control end of the detecting switch is electrically coupled to a second control line and is used for determining the electrical conduction between the first and the second channel ends of the detecting switch. 2. 
The pixel circuit according to claim 1, wherein the current-control circuit comprises: a first switch comprising a control end, a first channel end and a second channel end, wherein the control end of the first switch is electrically coupled to the first control line, the first channel end of the first switch is electrically coupled to the data line; a capacitor, wherein one end of the capacitor and the second channel end of the first switch are both electrically coupled to a data-store node, and the other end of the capacitor is electrically coupled to the current-supply line; and a second switch comprising a control end, a first channel end and a second channel end, wherein the control end of the second switch is electrically coupled to the data-store node, the first channel end of the second switch is electrically coupled to the current-supply line and the second channel end of the second switch is electrically coupled to the first end of the current-driven device. 3. The pixel circuit according to claim 1, wherein the first control line and the second control line are a same control line. 4. The pixel circuit according to claim 1, wherein the first control line and the second control line are used for transmitting two different time-sequence signals, the enable period of the time-sequence signal transmitted by the first control line is after the enable period of the time-sequence signal transmitted by the second control line, without overlap therebetween. 5. The pixel circuit according to claim 4, wherein the second channel end of the detecting switch is electrically coupled to the data line. 6. 
A display panel, comprising: a plurality of data lines; a plurality of control lines; a plurality of power-supply lines; a plurality of pixel circuits, each of the pixel circuits is electrically coupled to at least one of the control lines, one of the power-supply lines and one of the data lines, and each of the pixel circuits comprising: a current-driven device comprising a first end and a second end, wherein a light is emitted from the current-driven device while a current is flowing through the current-driven device from the first end to the second end; a current-control circuit for receiving data from the data line and storing the received data as driving data according to the voltage level of a first control line, receiving a real work voltage from the current-supply line, and controlling the current intensity flowing to the current-driven device via the current-control circuit from the current-supply line according to the driving data; and a detecting switch comprising a control end, a first channel end and a second channel end, wherein the first channel end of the detecting switch is electrically coupled to the current-control circuit and is used for retrieving the real work voltage, the control end of the detecting switch is electrically coupled to a second control line and is used for determining the electrical conduction between the first and the second channel ends of the detecting switch; and a plurality of compensation circuits, wherein the second channel end of the detecting switch of each of the pixel circuits is electrically coupled to a corresponding compensation circuit, and the corresponding compensation circuit is for modulating the data received from the data line which is electrically coupled to a corresponding pixel circuit according to a relationship between an original work voltage and the voltage received from the second channel end of the detecting switch. 7. 
The display panel according to claim 6, wherein the current-control circuit comprises: a first switch comprising a control end, a first channel end and a second channel end, wherein the control end of the first switch is electrically coupled to the first control line, the first channel end of the first switch is electrically coupled to its corresponding data line; a capacitor, wherein one end of the capacitor and the second channel end of the first switch are both electrically coupled to a data-store node, and the other end of the capacitor is electrically coupled to its corresponding current-supply line; and a second switch comprising a control end, a first channel end and a second channel end, wherein the control end of the second switch is electrically coupled to the data-store node, the first channel end of the second switch is electrically coupled to its corresponding current-supply line and the second channel end of the second switch is electrically coupled to the first end of the current-driven device. 8. The display panel according to claim 6, wherein the first control line and the second control line are a same control line. 9. The display panel according to claim 6, wherein the first control line and the second control line are used for transmitting two different time-sequence signals, the enable period of the time-sequence signal transmitted by the first control line appears after the enable period of the time-sequence signal transmitted by the second control line, and there is no overlap between the enable periods of the time-sequence signals transmitted by the first and the second control lines. 10. The display panel according to claim 9, wherein the second channel end of the detecting switch is electrically coupled to the data line. 11. 
The display panel according to claim 10, further comprising: a plurality of switching units, wherein each of the switching units corresponds to one of the data lines and one of the compensation circuits, and each of the switching units can be switched to make its corresponding data line electrically coupled to either the output end or the input end of the corresponding compensation circuit. 12. The display panel according to claim 6, wherein each of the compensation circuits comprises: a voltage-reader unit comprising an input end and an output end, wherein the input end of the voltage-reader unit is electrically coupled to its corresponding detecting switch and the voltage-reader unit is used for reading and outputting the voltage at the corresponding detecting switch; and a comparing unit, electrically coupled to the output end of the voltage-reader unit, for computing a difference through comparing the voltage at the output end of the voltage-reader unit and the original work voltage, and modulating the data of the corresponding data line according to the difference.
A pixel circuit and a display panel with an IR-drop compensation function are disclosed. The display panel includes multiple pixel circuits and multiple compensation circuits. Each of the pixel circuits includes a detecting switch. After a real work voltage of a pixel circuit is transmitted to a corresponding compensation circuit through a corresponding detecting switch, a data transmitted to the pixel circuit is adjusted by the compensation circuit according to a relationship between the real work voltage and an original work voltage.1. A pixel circuit electrically coupled to a current-supply line, at least a control line and a data line which is for providing data, the pixel circuit comprising: a current-driven device comprising a first end and a second end, wherein a light is emitted from the current-driven device while a current is flowing through the current-driven device from the first end to the second end; a current-control circuit for receiving data from the data line and storing the received data as driving data according to the voltage level of a first control line, receiving a real work voltage from the current-supply line, and controlling the current intensity flowing to the current-driven device via the current-control circuit from the current-supply line according to the driving data; and a detecting switch comprising a control end, a first channel end and a second channel end, wherein the first channel end of the detecting switch is electrically coupled to the current-control circuit and is used for retrieving the real work voltage, the control end of the detecting switch is electrically coupled to a second control line and is used for determining the electrical conduction between the first and the second channel ends of the detecting switch. 2. 
The pixel circuit according to claim 1, wherein the current-control circuit comprises: a first switch comprising a control end, a first channel end and a second channel end, wherein the control end of the first switch is electrically coupled to the first control line, the first channel end of the first switch is electrically coupled to the data line; a capacitor, wherein one end of the capacitor and the second channel end of the first switch are both electrically coupled to a data-store node, and the other end of the capacitor is electrically coupled to the current-supply line; and a second switch comprising a control end, a first channel end and a second channel end, wherein the control end of the second switch is electrically coupled to the data-store node, the first channel end of the second switch is electrically coupled to the current-supply line and the second channel end of the second switch is electrically coupled to the first end of the current-driven device. 3. The pixel circuit according to claim 1, wherein the first control line and the second control line are a same control line. 4. The pixel circuit according to claim 1, wherein the first control line and the second control line are used for transmitting two different time-sequence signals, the enable period of the time-sequence signal transmitted by the first control line is after the enable period of the time-sequence signal transmitted by the second control line, without overlap therebetween. 5. The pixel circuit according to claim 4, wherein the second channel end of the detecting switch is electrically coupled to the data line. 6. 
A display panel, comprising: a plurality of data lines; a plurality of control lines; a plurality of power-supply lines; a plurality of pixel circuits, each of the pixel circuits is electrically coupled to at least one of the control lines, one of the power-supply lines and one of the data lines, and each of the pixel circuits comprising: a current-driven device comprising a first end and a second end, wherein a light is emitted from the current-driven device while a current is flowing through the current-driven device from the first end to the second end; a current-control circuit for receiving data from the data line and storing the received data as driving data according to the voltage level of a first control line, receiving a real work voltage from the current-supply line, and controlling the current intensity flowing to the current-driven device via the current-control circuit from the current-supply line according to the driving data; and a detecting switch comprising a control end, a first channel end and a second channel end, wherein the first channel end of the detecting switch is electrically coupled to the current-control circuit and is used for retrieving the real work voltage, the control end of the detecting switch is electrically coupled to a second control line and is used for determining the electrical conduction between the first and the second channel ends of the detecting switch; and a plurality of compensation circuits, wherein the second channel end of the detecting switch of each of the pixel circuits is electrically coupled to a corresponding compensation circuit, and the corresponding compensation circuit is for modulating the data received from the data line which is electrically coupled to a corresponding pixel circuit according to a relationship between an original work voltage and the voltage received from the second channel end of the detecting switch. 7. 
The display panel according to claim 6, wherein the current-control circuit comprises: a first switch comprising a control end, a first channel end and a second channel end, wherein the control end of the first switch is electrically coupled to the first control line, the first channel end of the first switch is electrically coupled to its corresponding data line; a capacitor, wherein one end of the capacitor and the second channel end of the first switch are both electrically coupled to a data-store node, and the other end of the capacitor is electrically coupled to its corresponding current-supply line; and a second switch comprising a control end, a first channel end and a second channel end, wherein the control end of the second switch is electrically coupled to the data-store node, the first channel end of the second switch is electrically coupled to its corresponding current-supply line and the second channel end of the second switch is electrically coupled to the first end of the current-driven device. 8. The display panel according to claim 6, wherein the first control line and the second control line are a same control line. 9. The display panel according to claim 6, wherein the first control line and the second control line are used for transmitting two different time-sequence signals, the enable period of the time-sequence signal transmitted by the first control line appears after the enable period of the time-sequence signal transmitted by the second control line, and there is no overlap between the enable periods of the time-sequence signals transmitted by the first and the second control lines. 10. The display panel according to claim 9, wherein the second channel end of the detecting switch is electrically coupled to the data line. 11. 
The display panel according to claim 10, further comprising: a plurality of switching units, wherein each of the switching units corresponds to one of the data lines and one of the compensation circuits, and each of the switching units can be switched to make its corresponding data line electrically coupled to either the output end or the input end of the corresponding compensation circuit. 12. The display panel according to claim 6, wherein each of the compensation circuits comprises: a voltage-reader unit comprising an input end and an output end, wherein the input end of the voltage-reader unit is electrically coupled to its corresponding detecting switch and the voltage-reader unit is used for reading and outputting the voltage at the corresponding detecting switch; and a comparing unit, electrically coupled to the output end of the voltage-reader unit, for computing a difference through comparing the voltage at the output end of the voltage-reader unit and the original work voltage, and modulating the data of the corresponding data line according to the difference.
2,600
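The IR-drop compensation in claim 12 reduces to a simple arithmetic step: the comparing unit takes the difference between the original work voltage and the real work voltage read back through the detecting switch, then offsets the data on the corresponding data line by that difference. The sketch below models only this comparison step; the function name, the linear gain, and the example voltages are assumptions for illustration, not values from the patent.

```python
# Illustrative model of the comparing unit in claim 12: compute the
# difference between the original work voltage and the real (sagged)
# work voltage, then modulate the data by that difference.

def compensate_data(original_data, original_vdd, real_vdd, gain=1.0):
    """Offset the data voltage by the measured IR drop on the supply line.

    A linear gain of 1.0 is assumed here; a real panel would calibrate
    this against the pixel's transfer characteristic.
    """
    ir_drop = original_vdd - real_vdd   # voltage lost along the current-supply line
    return original_data + gain * ir_drop
```

For example, if the supply line sags from 5.0 V to 4.8 V, a 3.0 V data value would be adjusted by the 0.2 V difference, so that the pixel's drive current stays closer to what the uncompensated data intended.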
9,832
9,832
14,042,954
2,652
An auditory prosthesis can be powered by an on-board battery that is placed into a receptacle in the auditory prosthesis. When longer battery life is desired, a recipient can selectively utilize a discrete power supply to power the auditory prosthesis. The discrete power supply provides power to the auditory prosthesis and, in certain embodiments, has a longer life than the on-board battery. The discrete power supply includes an adapter having a form factor that mates with the battery receptacle on the auditory prosthesis.
1. An apparatus comprising: an auditory prosthesis housing defining a receptacle configured to matingly receive a first battery, so as to receive power from the first battery; and a power source unit discrete from the auditory prosthesis housing, wherein the power source unit comprises an adapter configured to be matingly received in the receptacle, so as to receive power from a second power source. 2. The apparatus of claim 1, further comprising a cover configured to cover the receptacle when the adapter is matingly received in the receptacle. 3. The apparatus of claim 1, wherein the power source unit comprises: a battery unit housing adapted to receive the second battery; and a cable connecting the housing to the adapter. 4. The apparatus of claim 1, wherein the adapter comprises a form factor substantially similar to the first battery. 5. The apparatus of claim 1, wherein the first battery comprises a first form factor and the second power source comprises a second form factor. 6. The apparatus of claim 5, wherein the first form factor is different from the second form factor. 7. The apparatus of claim 1, wherein the second power source comprises at least one of an energy scavenging unit and a building power supply. 8. An apparatus comprising: a battery unit defining a battery unit receptacle for receiving a battery comprising a first form factor; a cable extending from the battery unit; and an adapter connected to the cable, the adapter configured to be matingly received in a housing receptacle, wherein the housing receptacle is configured to matingly receive a battery comprising a second form factor. 9. The apparatus of claim 8, further comprising a cover disposed proximate the adapter, wherein the cover is configured to mate with an opening proximate the housing receptacle. 10. The apparatus of claim 9, wherein the cover comprises a gasket to form a substantially water-tight seal when mating with the opening. 11. 
The apparatus of claim 8, wherein the battery unit comprises a substantially water-tight housing. 12. The apparatus of claim 8, wherein the battery unit comprises a voltage modification circuit. 13. The apparatus of claim 8, wherein the battery unit comprises a clip for attaching to an article of clothing. 14. The apparatus of claim 8, wherein the battery unit is disposed in a housing comprising an ear hook for wearing on an ear. 15. An apparatus comprising: a housing comprising a battery receptacle; a first receptacle terminal disposed within the receptacle for receiving power; and a second receptacle terminal disposed within the battery receptacle for receiving a signal. 16. The apparatus of claim 15, further comprising: a discrete power supply unit configured to be connected to the housing, wherein the discrete power supply comprises an adapter configured to be matingly received in the battery receptacle, wherein the adapter comprises: a battery contact configured to mate with the first receptacle terminal; and a signal contact adapted to mate with the second receptacle terminal. 17. The apparatus of claim 15, wherein the power supply unit comprises a receptacle for receiving a first battery. 18. The apparatus of claim 17, wherein the battery compartment is configured to selectively receive the adapter of a second power source unit. 19. The apparatus of claim 18, wherein the first battery comprises a first form factor and the second power source unit comprises a second form factor. 20. The apparatus of claim 18, wherein the first battery comprises a first capacity and the second power source unit comprises a second capacity.
An auditory prosthesis can be powered by an on-board battery that is placed into a receptacle in the auditory prosthesis. When longer battery life is desired, a recipient can selectively utilize a discrete power supply to power the auditory prosthesis. The discrete power supply provides power to the auditory prosthesis and, in certain embodiments, has a longer life than the on-board battery. The discrete power supply includes an adapter having a form factor that mates with the battery receptacle on the auditory prosthesis.1. An apparatus comprising: an auditory prosthesis housing defining a receptacle configured to matingly receive a first battery, so as to receive power from the first battery; and a power source unit discrete from the auditory prosthesis housing, wherein the power source unit comprises an adapter configured to be matingly received in the receptacle, so as to receive power from a second power source. 2. The apparatus of claim 1, further comprising a cover configured to cover the receptacle when the adapter is matingly received in the receptacle. 3. The apparatus of claim 1, wherein the power source unit comprises: a battery unit housing adapted to receive the second battery; and a cable connecting the housing to the adapter. 4. The apparatus of claim 1, wherein the adapter comprises a form factor substantially similar to the first battery. 5. The apparatus of claim 1, wherein the first battery comprises a first form factor and the second power source comprises a second form factor. 6. The apparatus of claim 5, wherein the first form factor is different from the second form factor. 7. The apparatus of claim 1, wherein the second power source comprises at least one of an energy scavenging unit and a building power supply. 8. 
An apparatus comprising: a battery unit defining a battery unit receptacle for receiving a battery comprising a first form factor; a cable extending from the battery unit; and an adapter connected to the cable, the adapter configured to be matingly received in a housing receptacle, wherein the housing receptacle is configured to matingly receive a battery comprising a second form factor. 9. The apparatus of claim 8, further comprising a cover disposed proximate the adapter, wherein the cover is configured to mate with an opening proximate the housing receptacle. 10. The apparatus of claim 9, wherein the cover comprises a gasket to form a substantially water-tight seal when mating with the opening. 11. The apparatus of claim 8, wherein the battery unit comprises a substantially water-tight housing. 12. The apparatus of claim 8, wherein the battery unit comprises a voltage modification circuit. 13. The apparatus of claim 8, wherein the battery unit comprises a clip for attaching to an article of clothing. 14. The apparatus of claim 8, wherein the battery unit is disposed in a housing comprising an ear hook for wearing on an ear. 15. An apparatus comprising: a housing comprising a battery receptacle; a first receptacle terminal disposed within the receptacle for receiving power; and a second receptacle terminal disposed within the battery receptacle for receiving a signal. 16. The apparatus of claim 15, further comprising: a discrete power supply unit configured to be connected to the housing, wherein the discrete power supply comprises an adapter configured to be matingly received in the battery receptacle, wherein the adapter comprises: a battery contact configured to mate with the first receptacle terminal; and a signal contact adapted to mate with the second receptacle terminal. 17. The apparatus of claim 15, wherein the power supply unit comprises a receptacle for receiving a first battery. 18. 
The apparatus of claim 17, wherein the battery compartment is configured to selectively receive the adapter of a second power source unit. 19. The apparatus of claim 18, wherein the first battery comprises a first form factor and the second power source unit comprises a second form factor. 20. The apparatus of claim 18, wherein the first battery comprises a first capacity and the second power source unit comprises a second capacity.
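The claims above turn on whether an adapter's form factor is "matingly received" in a given battery receptacle. As a toy illustration only (every class, field, and dimension below is a hypothetical assumption; the patent specifies no numbers or data model), the compatibility check could be sketched as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FormFactor:
    # Hypothetical dimensions in millimetres; the patent does not specify any.
    diameter_mm: float
    height_mm: float

@dataclass
class Receptacle:
    accepts: FormFactor

    def mates_with(self, form_factor: FormFactor, tol_mm: float = 0.1) -> bool:
        # An adapter mates when its dimensions match the receptacle's
        # expected form factor within a small tolerance.
        return (abs(self.accepts.diameter_mm - form_factor.diameter_mm) <= tol_mm
                and abs(self.accepts.height_mm - form_factor.height_mm) <= tol_mm)

# A receptacle sized for a size-675 hearing-aid cell (approximate figures),
# and an external-supply adapter built to the same form factor.
receptacle = Receptacle(accepts=FormFactor(diameter_mm=11.6, height_mm=5.4))
adapter = FormFactor(diameter_mm=11.6, height_mm=5.4)
larger_battery = FormFactor(diameter_mm=16.0, height_mm=9.0)

print(receptacle.mates_with(adapter))         # True
print(receptacle.mates_with(larger_battery))  # False
```

This mirrors claims 5-6 and 18-19, where the first battery and the second power source unit may have different form factors, so only an adapter matched to the receptacle's form factor is accepted.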
2,600
9,833
9,833
14,920,021
2,657
Recognizing a user's speech is a computationally demanding task. If a user calls a destination server, little may be known about the user or the user's speech profile. The user's source system (device and/or server) may have an extensive profile of the user. As provided herein, a source device may provide translated text and/or speech attributes to a destination server. As a benefit, the recognition algorithm may be well tuned to the user and provide the recognized content to the destination. Additionally, the destination may provide domain attributes to allow the source recognition engine to better recognize the spoken content.
1. A destination server, comprising: a network interface to a communications network; a microprocessor having access to the network interface; and the microprocessor, via the network interface, engages in a call with a source server, the call comprising a voice channel comprising a spoken portion provided by a source user and a data channel comprising a machine-readable cue of the spoken portion; wherein the microprocessor executes a speech recognition algorithm to recognize the spoken portion and wherein the speech recognition algorithm is seeded with the machine-readable cue; and wherein the microprocessor executes instructions in accordance with the microprocessor recognizing speech on the voice channel and the machine-readable cue received on the data channel. 2. The destination server of claim 1, wherein the microprocessor receives indicia of source-side speech recognition. 3. The destination server of claim 2, wherein the microprocessor, in response to receiving the indicia of source-side speech recognition, replies via the data channel with a domain attribute associated with the destination server. 4. The destination server of claim 1, wherein the machine-readable cue further comprises a machine-readable speech attribute of the source user. 5. The destination server of claim 4, wherein the microprocessor executes a speech recognition algorithm utilizing an acoustic model selected in accord with the machine-readable speech attribute of the source user and derives machine-readable content from a waveform portion of the call. 6. The destination server of claim 1, wherein the cue further comprises human-readable text of a machine-readable recognition of the spoken portion. 7. The destination server of claim 1, wherein the data channel comprises a Real-Time Transport Protocol (RTP) text stream. 8. 
A source server, comprising: a network interface to a communications network; a microprocessor having access to the network interface; and the microprocessor, via the network interface, establishes a call comprising a spoken portion and utilizing a voice channel between a source user and a destination endpoint; the microprocessor, via the network interface, establishes a data connection on a data channel between the source server and the destination endpoint; and the microprocessor monitors the call and provides a machine-readable recognition of a spoken portion to the destination endpoint via the data channel. 9. The source server of claim 8, wherein the microprocessor provides a call setup to the destination endpoint comprising indicia of source-side speech recognition. 10. The source server of claim 9, wherein, in response to the call setup, the microprocessor receives a domain attribute indicating an expected vocabulary encountered during the call. 11. The source server of claim 10, wherein the processor provides the machine-readable recognition of the spoken portion comprising executing a speech recognition algorithm seeded with a domain attribute. 12. The source server of claim 9, wherein the domain attribute is a domain lexicon. 13. The source server of claim 8, wherein the microprocessor accesses a user profile associated with prior speech of the source user, derives a machine-readable speech attribute of the source user, and provides the machine-readable speech attribute of the source user to the destination endpoint. 14. The source server of claim 8, further comprising an Internet Protocol Private Branch Exchange (IP PBX). 15. The source server of claim 8, wherein the microprocessor, prior to the call, is operable to monitor speech of the source user during at least one prior call and derive a speech attribute of the source user therefrom. 16. 
The source server of claim 14, further comprising a data storage accessible to the microprocessor and wherein the microprocessor causes the speech attribute of the source user to be stored in a user profile associated with the source user; and wherein the microprocessor accesses the stored speech attribute to seed a speech recognition algorithm with the stored speech attribute. 17. The source server of claim 8, wherein the microprocessor identifies the source user prior to the call. 18. A method, comprising: establishing a call between a source endpoint and a destination server on a voice channel; indicating an ability to perform speech recognition at a processing component associated with the source endpoint; monitoring, by the processing component, the call for speech provided to the source endpoint; analyzing, by the processing component, the speech to provide a machine-readable content of a portion of the speech; and providing, by the processing component, the machine-readable content of the portion of the speech to the destination server on a data channel established between the processing component and the destination server. 19. The method of claim 18, further comprising: receiving a domain attribute from the destination server; and wherein the step of analyzing the speech to provide the machine-readable content further comprises utilizing the domain attribute to improve at least one of the performance and the accuracy of the speech analysis. 20. The method of claim 18, wherein the source endpoint embodies the processing component.
Recognizing a user's speech is a computationally demanding task. If a user calls a destination server, little may be known about the user or the user's speech profile. The user's source system (device and/or server) may have an extensive profile of the user. As provided herein, a source device may provide translated text and/or speech attributes to a destination server. As a benefit, the recognition algorithm may be well tuned to the user and provide the recognized content to the destination. Additionally, the destination may provide domain attributes to allow the source recognition engine to better recognize the spoken content.1. A destination server, comprising: a network interface to a communications network; a microprocessor having access to the network interface; and the microprocessor, via the network interface, engages in a call with a source server, the call comprising a voice channel comprising a spoken portion provided by a source user and a data channel comprising a machine-readable cue of the spoken portion; wherein the microprocessor executes a speech recognition algorithm to recognize the spoken portion and wherein the speech recognition algorithm is seeded with the machine-readable cue; and wherein the microprocessor executes instructions in accordance with the microprocessor recognizing speech on the voice channel and the machine-readable cue received on the data channel. 2. The destination server of claim 1, wherein the microprocessor receives indicia of source-side speech recognition. 3. The destination server of claim 2, wherein the microprocessor, in response to receiving the indicia of source-side speech recognition, replies via the data channel with a domain attribute associated with the destination server. 4. The destination server of claim 1, wherein the machine-readable cue further comprises a machine-readable speech attribute of the source user. 5. 
The destination server of claim 4, wherein the microprocessor executes a speech recognition algorithm utilizing an acoustic model selected in accord with the machine-readable speech attribute of the source user and derives machine-readable content from a waveform portion of the call. 6. The destination server of claim 1, wherein the cue further comprises human-readable text of a machine-readable recognition of the spoken portion. 7. The destination server of claim 1, wherein the data channel comprises a Real-Time Transport Protocol (RTP) text stream. 8. A source server, comprising: a network interface to a communications network; a microprocessor having access to the network interface; and the microprocessor, via the network interface, establishes a call comprising a spoken portion and utilizing a voice channel between a source user and a destination endpoint; the microprocessor, via the network interface, establishes a data connection on a data channel between the source server and the destination endpoint; and the microprocessor monitors the call and provides a machine-readable recognition of a spoken portion to the destination endpoint via the data channel. 9. The source server of claim 8, wherein the microprocessor provides a call setup to the destination endpoint comprising indicia of source-side speech recognition. 10. The source server of claim 9, wherein, in response to the call setup, the microprocessor receives a domain attribute indicating an expected vocabulary encountered during the call. 11. The source server of claim 10, wherein the processor provides the machine-readable recognition of the spoken portion comprising executing a speech recognition algorithm seeded with a domain attribute. 12. The source server of claim 9, wherein the domain attribute is a domain lexicon. 13. 
The source server of claim 8, wherein the microprocessor accesses a user profile associated with prior speech of the source user, derives a machine-readable speech attribute of the source user, and provides the machine-readable speech attribute of the source user to the destination endpoint. 14. The source server of claim 8, further comprising an Internet Protocol Private Branch Exchange (IP PBX). 15. The source server of claim 8, wherein the microprocessor, prior to the call, is operable to monitor speech of the source user during at least one prior call and derive a speech attribute of the source user therefrom. 16. The source server of claim 14, further comprising a data storage accessible to the microprocessor and wherein the microprocessor causes the speech attribute of the source user to be stored in a user profile associated with the source user; and wherein the microprocessor accesses the stored speech attribute to seed a speech recognition algorithm with the stored speech attribute. 17. The source server of claim 8, wherein the microprocessor identifies the source user prior to the call. 18. A method, comprising: establishing a call between a source endpoint and a destination server on a voice channel; indicating an ability to perform speech recognition at a processing component associated with the source endpoint; monitoring, by the processing component, the call for speech provided to the source endpoint; analyzing, by the processing component, the speech to provide a machine-readable content of a portion of the speech; and providing, by the processing component, the machine-readable content of the portion of the speech to the destination server on a data channel established between the processing component and the destination server. 19. 
The method of claim 18, further comprising: receiving a domain attribute from the destination server; and wherein the step of analyzing the speech to provide the machine-readable content further comprises utilizing the domain attribute to improve at least one of the performance and the accuracy of the speech analysis. 20. The method of claim 18, wherein the source endpoint embodies the processing component.
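Claims 10-12 above describe the destination replying with a domain attribute, such as a domain lexicon, that seeds the source-side recognizer. One minimal way to picture that seeding (the rescoring scheme, scores, and example sentences below are illustrative assumptions, not the patent's algorithm) is to re-rank an n-best list so hypotheses containing the destination's expected vocabulary are preferred:

```python
def rescore_with_lexicon(hypotheses, domain_lexicon, boost=0.1):
    """Re-rank (transcript, acoustic_score) pairs, adding `boost` for every
    word in a hypothesis that appears in the destination's domain lexicon."""
    rescored = []
    for text, score in hypotheses:
        hits = sum(1 for word in text.lower().split() if word in domain_lexicon)
        rescored.append((text, score + boost * hits))
    # Highest adjusted score first.
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# An n-best list from a generic acoustic model, ambiguous without context.
hypotheses = [
    ("I want to check my account ballots", 0.52),
    ("I want to check my account balance", 0.50),
]
# A domain lexicon a banking destination server might send on the data channel.
banking_lexicon = {"account", "balance", "transfer", "deposit"}

best, _ = rescore_with_lexicon(hypotheses, banking_lexicon)[0]
print(best)  # "I want to check my account balance"
```

The slightly lower-scoring acoustic hypothesis wins once the lexicon is applied, which is the benefit the abstract claims: the destination's domain knowledge lets the source recognition engine better recognize the spoken content.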
2,600
9,834
9,834
15,262,995
2,663
A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as Facebook® to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.
1. A system for identifying an individual, the system comprising: a server configured to receive a first set of one or more photos from a computing device, the first set of one or more photos including at least one photo containing a facial image of an individual; and a database accessible to the server including a second set of two or more photos corresponding to a plurality of existing feature vectors of images of target individuals depicted in the second set of two or more photos, wherein the server is configured to process the first set of one or more photos to generate a feature vector for the facial image of the individual; wherein the server is configured to compare the generated feature vector to the plurality of existing feature vectors to find an existing feature vector of an image of a target individual that most closely matches the generated feature vector; wherein the server, based on the closest match, is configured to identify each photo among the second set of two or more photos in which the image of the target individual is present; wherein the server is configured to create a third set of two or more photos that includes the at least one photo containing the facial image of the individual and each photo among the second set of two or more photos in which the image of the target individual is present; and wherein the server is configured to send the third set of two or more photos to the computing device. 2. The system according to claim 1, wherein the server is further configured to mark a location of the image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos. 3. 
The system according to claim 2, wherein the server is further configured to mark the location by: determining X and Y coordinates of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; determining a size of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; and determining a tilt of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present. 4. The system according to claim 2, wherein the server is configured to store the location of the target individual within each photo among the second set of two or more photos in a database. 5. The system according to claim 1, wherein the server is configured to store an identifier of the target individual within each photo among the second set of two or more photos, wherein the identifier further comprises at least one of: the name of the individual and meta-data. 6. The system according to claim 5, wherein the meta-data comprises at least one of: internet references, sponsored links, and advertisements. 7. The system according to claim 1, wherein the generated feature vector is based on a plurality of factors comprising at least one of: facial expression, hair style, hair color, facial pose, eye color, texture of the face, face color, and facial hair. 8. The system according to claim 1, wherein the server is further configured to tag each photo among the second set by listing a URL or a URI. 9. 
A method for identifying an individual, the method comprising: receiving, at a server, a first set of one or more photos from a computing device, the first set of one or more photos including at least one photo containing a facial image of an individual; accessing, by the server, a database that includes a second set of two or more photos corresponding to a plurality of existing feature vectors of images of target individuals depicted in the second set of two or more photos; processing, at the server, the first set of one or more photos to generate a feature vector for the facial image of the individual; comparing, at the server, the generated feature vector to the plurality of existing feature vectors to find an existing feature vector of an image of a target individual that most closely matches the generated feature vector; identifying based on the closest match, at the server, each photo among the second set of two or more photos in which the image of the target individual is present; creating, at the server, a third set of two or more photos that includes the at least one photo containing the facial image of the individual and each photo among the second set of two or more photos in which the image of the target individual is present; and sending, at the server, the third set of two or more photos to the computing device. 10. The method according to claim 9, further comprising: marking, at the server, a location of the image of the target individual within each photo among the second set of two or more photos included in the third set of photos. 11. 
The method according to claim 10, wherein marking the location further comprises: determining X and Y coordinates of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; determining a size of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; and determining a tilt of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present. 12. The method according to claim 10, further comprising: storing the location of the target individual within each photo among the second set of two or more photos in a database. 13. The method according to claim 9, further comprising: storing an identifier of the target individual within each photo among the second set of two or more photos, wherein the identifier further comprises at least one of: the name of the individual and meta-data. 14. The method according to claim 9, wherein the generated feature vector is based on a plurality of factors comprising at least one of: facial expression, hair style, hair color, facial pose, eye color, texture of the face, face color, and facial hair. 15. The method according to claim 9, further comprising: tagging each photo among the second set of two or more photos by listing a URL or a URI. 16. 
A computer readable medium having stored thereon instructions that when executed by a processor cause the processor to identify an individual in a photo, the instructions comprising instructions to: receive, at a server, a first set of one or more photos from a computing device, the first set of one or more photos including at least one photo containing a facial image of an individual; access, by the server, a database that includes a second set of two or more photos corresponding to a plurality of existing feature vectors of images of target individuals depicted in the second set of two or more photos; process, at the server, the first set of one or more photos to generate a feature vector for the facial image of the individual; compare, at the server, the generated feature vector to the plurality of existing feature vectors to find an existing feature vector of an image of a target individual that most closely matches the generated feature vector; identify based on the closest match, at the server, each photo among the second set of two or more photos in which the image of the target individual is present; create, at the server, a third set of two or more photos that includes the at least one photo containing the facial image of the individual and each photo among the second set of two or more photos in which the image of the target individual is present; and send, at the server, the third set of two or more photos to the computing device. 17. The computer readable medium according to claim 16 further comprising instructions to: mark, at the server, a location of the image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos. 18. 
The computer readable medium according to claim 17, wherein the instructions to mark further comprise instructions to: determine X and Y coordinates of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; determine a size of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; and determine a tilt of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present. 19. The computer readable medium according to claim 17 further comprising instructions to: store the location of the target individual within each photo among the second set of two or more photos in a database. 20. The computer readable medium according to claim 16 further comprising instructions to: store an identifier of the target individual within each photo among the second set of two or more photos, wherein the identifier further comprises at least one of: the name of the individual and meta-data.
A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as Facebook® to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.1. A system for identifying an individual, the system comprising: a server configured to receive a first set of one or more photos from a computing device, the first set of one or more photos including at least one photo containing a facial image of an individual; and a database accessible to the server including a second set of two or more photos corresponding to a plurality of existing feature vectors of images of target individuals depicted in the second set of two or more photos, wherein the server is configured to process the first set of one or more photos to generate a feature vector for the facial image of the individual; wherein the server is configured to compare the generated feature vector to the plurality of existing feature vectors to find an existing feature vector of an image of a target individual that most closely matches the generated feature vector; wherein the server, based on the closest match, is configured to identify each photo among the second set of two or more photos in which the image of the target individual is present; wherein the server is configured to create a third set of two or more photos that includes the at least one photo containing the facial image of the individual and each photo among the second set of two or more photos in which the image of the target individual is present; and wherein the server is configured to send the third set of two or more photos to the computing device. 2. 
The system according to claim 1, wherein the server is further configured to mark a location of the image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos. 3. The system according to claim 2, wherein the server is further configured to mark the location by: determining X and Y coordinates of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; determining a size of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; and determining a tilt of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present. 4. The system according to claim 2, wherein the server is configured to store the location of the target individual within each photo among the second set of two or more photos in a database. 5. The system according to claim 1, wherein the server is configured to store an identifier of the target individual within each photo among the second set of two or more photos, wherein the identifier further comprises at least one of: the name of the individual and meta-data. 6. The system according to claim 5, wherein the meta-data comprises at least one of: internet references, sponsored links, and advertisements. 7. The system according to claim 1, wherein the generated feature vector is based on a plurality of factors comprising at least one of: facial expression, hair style, hair color, facial pose, eye color, texture of the face, face color, and facial hair. 8. 
The system according to claim 1, wherein the server is further configured to tag each photo among the second set by listing a URL or a URI. 9. A method for identifying an individual, the method comprising: receiving, at a server, a first set of one or more photos from a computing device, the first set of one or more photos including at least one photo containing a facial image of an individual; accessing, by the server, a database that includes a second set of two or more photos corresponding to a plurality of existing feature vectors of images of target individuals depicted in the second set of two or more photos; processing, at the server, the first set of one or more photos to generate a feature vector for the facial image of the individual; comparing, at the server, the generated feature vector to the plurality of existing feature vectors to find an existing feature vector of an image of a target individual that most closely matches the generated feature vector; identifying based on the closest match, at the server, each photo among the second set of two or more photos in which the image of the target individual is present; creating, at the server, a third set of two or more photos that includes the at least one photo containing the facial image of the individual and each photo among the second set of two or more photos in which the image of the target individual is present; and sending, at the server, the third set of two or more photos to the computing device. 10. The method according to claim 9, further comprising: marking, at the server, a location of the image of the target individual within each photo among the second set of two or more photos included in the third set of photos. 11. 
The method according to claim 10, wherein marking the location further comprises: determining X and Y coordinates of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; determining a size of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; and determining a tilt of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present. 12. The method according to claim 10, further comprising: storing the location of the target individual within each photo among the second set of two or more photos in a database. 13. The method according to claim 9, further comprising: storing an identifier of the target individual within each photo among the second set of two or more photos, wherein the identifier further comprises at least one of: the name of the individual and meta-data. 14. The method according to claim 9, wherein the generated feature vector is based on a plurality of factors comprising at least one of: facial expression, hair style, hair color, facial pose, eye color, texture of the face, face color, and facial hair. 15. The method according to claim 9, further comprising: tagging each photo among the second set of two or more photos by listing a URL or a URI. 16. 
A computer readable medium having stored thereon instructions that when executed by a processor cause the processor to identify an individual in a photo, the instructions comprising instructions to: receive, at a server, a first set of one or more photos from a computing device, the first set of one or more photos including at least one photo containing a facial image of an individual; access, by the server, a database that includes a second set of two or more photos corresponding to a plurality of existing feature vectors of images of target individuals depicted in the second set of two or more photos; process, at the server, the first set of one or more photos to generate a feature vector for the facial image of the individual; compare, at the server, the generated feature vector to the plurality of existing feature vectors to find an existing feature vector of an image of a target individual that most closely matches the generated feature vector; identify based on the closest match, at the server, each photo among the second set of two or more photos in which the image of the target individual is present; create, at the server, a third set of two or more photos that includes the at least one photo containing the facial image of the individual and each photo among the second set of two or more photos in which the image of the target individual is present; and send, at the server, the third set of two or more photos to the computing device. 17. The computer readable medium according to claim 16 further comprising instructions to: mark, at the server, a location of the image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos. 18. 
The computer readable medium according to claim 17, wherein the mark further comprises: determine X and Y coordinates of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; determine a size of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present; and determine a tilt of each image of the target individual within each photo among the second set of two or more photos included in the third set of two or more photos in which the image of the target individual is present. 19. The computer readable medium according to claim 17 further comprising instructions to: store the location of the target individual within each photo among the second set of two or more photos in a database. 20. The computer readable medium according to claim 16 further comprising instructions to: store an identifier of the target individual within each photo among the second set of two or more photos, wherein the identifier further comprises at least one of: the name of the individual and meta-data.
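The core step of these claims is comparing a generated feature vector against stored feature vectors to find the closest match. A minimal sketch of that matching step, using cosine similarity as the (assumed) closeness measure and a hypothetical `existing_vectors` mapping of target-individual ids to stored vectors:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_match(query_vector, existing_vectors):
    # Return the id and score of the stored feature vector that most
    # closely matches the generated (query) vector, per the claimed
    # "closest match" step. existing_vectors: {target_id: vector}.
    best_id, best_score = None, -1.0
    for target_id, vec in existing_vectors.items():
        score = cosine_similarity(query_vector, vec)
        if score > best_score:
            best_id, best_score = target_id, score
    return best_id, best_score
```

The claims do not fix a particular distance metric or vector dimensionality; cosine similarity here is just one common choice for face-embedding comparison.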
2,600
9,835
9,835
15,371,808
2,657
Mechanisms are provided for automatically identifying required tools for performing actions specified in electronic documents. The mechanisms perform natural language processing of content of a training corpus of electronic documents to identify associations of action terms with required tools for performing actions corresponding to the action terms. The mechanisms train an ontology model based on the identified associations. The mechanisms perform analysis of electronic documents of one or more other corpora based on the trained ontology model to identify required tools for performing actions specified in the electronic documents. The mechanisms annotate one or more of the electronic documents of the one or more corpora to include required tools annotation metadata identifying tools required to perform actions corresponding to action terms present in the one or more electronic documents.
1. A method, in a data processing system comprising a processor and a memory, wherein the memory comprises instructions executed by the processor to cause the processor to implement the method, comprising: performing, by the data processing system, natural language processing of content of a training corpus of electronic documents to identify associations of action terms with required tools for performing actions corresponding to the action terms; training, by the data processing system, an ontology model based on the identified associations of the action terms with required tools for performing actions corresponding to the action terms; performing, by the data processing system, analysis of electronic documents of one or more other corpora based on the trained ontology model to identify required tools for performing actions specified in the electronic documents; and annotating, by the data processing system, one or more of the electronic documents of the one or more other corpora to include required tools annotation metadata identifying tools required to perform actions corresponding to action terms present in the one or more electronic documents to thereby generate at least one updated corpus. 2. The method of claim 1, wherein at least one electronic document of the one or more other corpora does not explicitly indicate, in content of the electronic document, a tool to perform a task specified in the content of the electronic document, and wherein performing analysis of electronic documents of one or more other corpora based on the trained ontology model comprises associating a tool with the task based on at least one action term specified in the content of the electronic document in association with the task. 3. 
The method of claim 1, wherein performing natural language processing of content of a training corpus comprises performing a morphological similarity analysis that identifies action terms and entities specified in natural language content of the training corpus that have an overlapping stem. 4. The method of claim 1, wherein performing natural language processing of content of a training corpus comprises performing a posteriori tool mention analysis which identifies text in the content of the training corpus where a tool required to perform an action corresponding to an action term in the text is present in the text after an instance of the action term corresponding to the action. 5. The method of claim 1, wherein the ontology model comprises a plurality of tuples, wherein each tuple associates an action term with a corresponding tool for performing an action corresponding to the action term, and wherein performing analysis of electronic documents of one or more other corpora based on the trained ontology model to identify required tools for performing actions specified in the electronic documents comprises matching action terms in the electronic documents with action terms in tuples of the plurality of tuples. 6. The method of claim 5, wherein each tuple further associates the action term and corresponding tool with at least one of an object upon which the action is performed or a confidence score indicating a confidence that the corresponding tool is required to perform an action corresponding to the action term. 7. The method of claim 6, wherein the confidence score is calculated by the data processing system based on a function of a frequency of occurrence of the tuple present in the training corpus. 8. 
The method of claim 1, wherein performing natural language processing of content of a training corpus comprises: applying one or more pre-defined textual patterns to the content of the training corpus, wherein each pre-defined textual pattern specifies a mention of a tool name and an action term; identifying matching portions of text in the content of the training corpus that match one or more of the pre-defined textual patterns; and extracting, from the matching portions of text, a tool name and action term. 9. The method of claim 1, further comprising: receiving, by the data processing system from a user computing device, a request for a project to be performed by the user; performing, by the data processing system, a search of the updated corpus for a document specifying instructions for performing a project matching criteria specified in the request; identifying, by the data processing system, based on the search results, tools required to perform tasks specified in the instructions for performing the project; and outputting, by the data processing system, the document specifying instructions for performing the project and a listing of required tools for performing tasks specified in the search results to a user via the user computing device. 10. The method of claim 9, further comprising: performing, by the data processing system, for at least one tool in the listing of required tools, a search of at least one tool provider information source for tool information describing the at least one tool in the listing of required tools; and outputting, by the data processing system, the tool information obtained from the at least one tool provider information source. 11. 
A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a data processing system, causes the data processing system to: perform natural language processing of content of a training corpus of electronic documents to identify associations of action terms with required tools for performing actions corresponding to the action terms; train an ontology model based on the identified associations of the action terms with required tools for performing actions corresponding to the action terms; perform analysis of electronic documents of one or more other corpora based on the trained ontology model to identify required tools for performing actions specified in the electronic documents; and annotate one or more of the electronic documents of the one or more other corpora to include required tools annotation metadata identifying tools required to perform actions corresponding to action terms present in the one or more electronic documents to thereby generate at least one updated corpus. 12. The computer program product of claim 11, wherein at least one electronic document of the one or more other corpora does not explicitly indicate, in content of the electronic document, a tool to perform a task specified in the content of the electronic document, and wherein the computer readable program further causes the data processing system to perform analysis of electronic documents of one or more other corpora based on the trained ontology model at least by associating a tool with the task based on at least one action term specified in the content of the electronic document in association with the task. 13. 
The computer program product of claim 11, wherein the computer readable program further causes the data processing system to perform natural language processing of content of a training corpus at least by performing a morphological similarity analysis that identifies action terms and entities specified in natural language content of the training corpus that have an overlapping stem. 14. The computer program product of claim 11, wherein the computer readable program further causes the data processing system to perform natural language processing of content of a training corpus at least by performing a posteriori tool mention analysis which identifies text in the content of the training corpus where a tool required to perform an action corresponding to an action term in the text is present in the text after an instance of the action term corresponding to the action. 15. The computer program product of claim 11, wherein the ontology model comprises a plurality of tuples, wherein each tuple associates an action term with a corresponding tool for performing an action corresponding to the action term, and wherein the computer readable program further causes the data processing system to perform analysis of electronic documents of one or more other corpora based on the trained ontology model to identify required tools for performing actions specified in the electronic documents at least by matching action terms in the electronic documents with action terms in tuples of the plurality of tuples. 16. The computer program product of claim 15, wherein each tuple further associates the action term and corresponding tool with at least one of an object upon which the action is performed or a confidence score indicating a confidence that the corresponding tool is required to perform an action corresponding to the action term. 17. 
The computer program product of claim 16, wherein the confidence score is calculated by the data processing system based on a function of a frequency of occurrence of the tuple present in the training corpus. 18. The computer program product of claim 11, wherein the computer readable program further causes the data processing system to perform natural language processing of content of a training corpus at least by: applying one or more pre-defined textual patterns to the content of the training corpus, wherein each pre-defined textual pattern specifies a mention of a tool name and an action term; identifying matching portions of text in the content of the training corpus that match one or more of the pre-defined textual patterns; and extracting, from the matching portions of text, a tool name and action term. 19. The computer program product of claim 11, wherein the computer readable program further causes the data processing system to: receive, from a user computing device, a request for a project to be performed by the user; perform a search of the updated corpus for a document specifying instructions for performing a project matching criteria specified in the request; identify, based on the search results, tools required to perform tasks specified in the instructions for performing the project; and output the document specifying instructions for performing the project and a listing of required tools for performing tasks specified in the search results to a user via the user computing device. 20. 
An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: perform natural language processing of content of a training corpus of electronic documents to identify associations of action terms with required tools for performing actions corresponding to the action terms; train an ontology model based on the identified associations of the action terms with required tools for performing actions corresponding to the action terms; perform analysis of electronic documents of one or more other corpora based on the trained ontology model to identify required tools for performing actions specified in the electronic documents; and annotate one or more of the electronic documents of the one or more other corpora to include required tools annotation metadata identifying tools required to perform actions corresponding to action terms present in the one or more electronic documents to thereby generate at least one updated corpus.
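Claims 5-7 describe the ontology model as tuples associating action terms with tools, with a confidence score computed from the tuple's frequency of occurrence in the training corpus. A minimal sketch under those assumptions (the relative-frequency confidence function and the `threshold` parameter are illustrative choices, not from the claims):

```python
from collections import Counter

def train_ontology(training_pairs):
    # training_pairs: list of (action_term, tool) associations extracted
    # from the training corpus. Confidence is a function of the tuple's
    # frequency of occurrence, as in claim 7 (here: relative frequency).
    counts = Counter(training_pairs)
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def required_tools(action_terms, ontology, threshold=0.0):
    # Match action terms found in a document against the trained tuples
    # (claim 5) and return tools whose confidence exceeds the threshold.
    tools = {}
    for (action, tool), conf in ontology.items():
        if action in action_terms and conf > threshold:
            tools[tool] = max(tools.get(tool, 0.0), conf)
    return tools
```

The matched tools and their scores would then drive the annotation step, attaching required-tools metadata to documents whose text mentions the action but not the tool.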
2,600
9,836
9,836
15,104,292
2,647
Disclosed is a method to operate a wireless device, comprising a communication unit, in a wireless cellular network comprising at least two radio access networks, the communication unit being configured to communicate to a network node of the wireless cellular network, the network node being associated to a first radio access network. The method comprises the steps for the communication unit of: detecting if a second radio access network of said wireless cellular network fulfills a predefined set of access parameters, wherein the second radio access network supports a different access technology than the first radio access network. If the second radio access network fulfills said set of access parameters: sending a connection release indication to the network node, receiving from the network node a connection release confirmation message, sending a connection request message to the wireless cellular network with selection of said second radio access network.
1. Method to operate a wireless device in a wireless cellular network, the wireless device comprising a communication unit, the wireless cellular network comprising at least two radio access networks, the communication unit being configured to communicate to a network node being part of the wireless cellular network, the network node being associated to a first radio access network, the method comprising the steps for the communication unit of: detecting if a second radio access network of said wireless cellular network fulfills a predefined set of access parameters, wherein the second radio access network supports a different access technology than the first radio access network, if the detection step indicates that said second radio access network fulfills said set of access parameters: sending a connection release indication to the network node, receiving from the network node a connection release confirmation message, sending a connection request message to the wireless cellular network with selection of said second radio access network. 2. Method according to claim 1, the wireless device comprising at least one storage element storing network access rights, wherein the set of access parameters further comprises the requirement that said network access rights indicate that: the wireless device is entitled to access the second radio access network and/or the second radio access network is configured as preferred radio access network. 3. Method according to claim 1, wherein the set of access parameters further comprises the requirement that the second radio access network is available. 4. Method according to claim 3, wherein the availability of the second radio access network is determined by measuring signalling from network nodes associated to neighbour cells. 5. Method according to claim 3, wherein the availability of the second radio access network is determined by executing a network scan. 6. 
Method according to claim 3, comprising the step of storing measurements indicating a high likelihood of availability of a second radio access network, wherein the availability of the second radio access network is determined by accessing said stored previous measurements. 7. Method according to claim 1, wherein if the detection step indicates that no second radio access network fulfills said set of access parameters, the method comprising the steps of: starting a timer, measuring a first indication of the current location, after expiry of the timer measuring a second indication of the current location, if the first and second indication of the current location materially differ, repeating the detection step, and otherwise restarting the timer. 8. Wireless device configured to operate in a wireless cellular network, the wireless device comprising a communication unit, the wireless cellular network comprising at least two radio access networks, the communication unit being configured to communicate to a network node being part of the wireless cellular network, the network node being associated to a first radio access network, wherein the wireless device is configured: to detect if a second radio access network of said wireless cellular network fulfills a predefined set of access parameters, wherein the second radio access network supports a different access technology than the first radio access network, if the detection indicates that said second radio access network fulfills said set of access parameters, the communication unit is configured: to send a connection release indication to the network node, to receive from the network node a connection release confirmation, and to send a connection request message to the wireless cellular network with selection of said second radio access network. 9. 
Wireless device according to claim 8, further comprising at least one storage element storing network access rights, wherein the set of access parameters further comprises the requirement that said network access rights indicate that: the wireless device is entitled to access the second radio access network and/or the second radio access network is configured as preferred radio access network. 10. Wireless device according to claim 8, wherein the set of access parameters further comprises the requirement that the second radio access network is available. 11. Wireless device according to claim 10, configured to determine the availability of the second radio access network by measuring signalling from network nodes associated to neighbour cells. 12. Wireless device according to claim 10, configured to determine the availability of the second radio access network by executing a network scan. 13. Wireless device according to claim 10, further configured to store measurements indicating a high likelihood of availability of a second radio access network, wherein the availability of the second radio access network is determined by accessing said stored previous measurements. 14. Wireless device according to claim 8, configured to, if said detection indicates that no second radio access network fulfills said set of access parameters, start a timer, measure a first indication of the current location, after expiry of the timer measure a second indication of the current location, if the first and second indication of the current location materially differ, repeat said detection, and otherwise restart the timer.
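The claimed reselection sequence is: check the predefined access parameters (availability plus entitlement or preference, per claims 2-3), send a connection release indication, receive the release confirmation, then send a connection request selecting the second radio access network. A minimal sketch of that decision and message flow; the dictionary shapes and the `confirm_release` callable standing in for the network node are assumptions of this sketch:

```python
def fulfills_access_parameters(second_ran, access_rights):
    # Predefined access parameters from claims 2-3: the second RAN must
    # be available, and the device entitled to access it (or the RAN
    # configured as preferred).
    return (second_ran["available"]
            and (second_ran["name"] in access_rights["entitled"]
                 or second_ran["name"] == access_rights.get("preferred")))

def reselect(second_ran, access_rights, confirm_release):
    # Claimed message sequence: release indication -> release
    # confirmation from the node -> connection request on the second RAN.
    if not fulfills_access_parameters(second_ran, access_rights):
        return None  # claim 7/14: start a timer and retry on movement
    return [
        ("to_node", "connection_release_indication"),
        ("from_node", confirm_release()),
        ("to_network", ("connection_request", second_ran["name"])),
    ]
```

If no second RAN qualifies, claims 7 and 14 defer the retry behind a timer and only repeat detection when the measured location has materially changed.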
2,600
9,837
9,837
14,223,601
2,625
The present disclosure is related to a method and device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object. The method includes providing at least one thermal image of a portion of the second object, determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object.
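As a rough illustration of the pattern test described in this abstract (a localized value or range of temperature change), one could examine second-order differences of temperature along a scan line of the thermal image: a warm fingertip resting on a cooler surface produces steep gradients at its edges. This sketch is not the patented method; the threshold and all names are assumptions introduced for illustration.

```python
def first_differences(samples):
    """First-order differences between consecutive temperature samples."""
    return [b - a for a, b in zip(samples, samples[1:])]

def touch_pattern_detected(scan_line, gradient_threshold=2.0):
    """Return True if second-order temperature differences along the scan
    line exceed the (assumed) threshold, hinting at a localized warm spot
    such as residual heat from a fingertip."""
    d1 = first_differences(scan_line)   # temperature gradient
    d2 = first_differences(d1)          # change of gradient
    return any(abs(v) > gradient_threshold for v in d2)
```

A flat scan line yields no pattern, while a line with a warm bump around its centre does.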
1. A method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising the steps of: providing at least one thermal image of a portion of the second object; determining at least two temperature differences between temperatures measured at different positions in the at least one thermal image or at least two derivatives of temperature in the at least one thermal image; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining one or more differences between the at least two temperature differences or between the at least two derivatives; and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 2. The method according to claim 1, wherein the at least part of the second object and the portion of the second object overlap or do not overlap. 3. The method according to claim 1, wherein the first object is at least part of a human body. 4. The method according to claim 1, further comprising determining a position, a size, an orientation, a direction, a trajectory, or a shape of the detected touch in the thermal image. 5. The method according to claim 1, further comprising providing the detected touch as an input to a machine interface program, wherein the detected touch changes a state in the machine interface program. 6. The method according to claim 5, wherein the position and/or orientation of a camera or an eye or a display relative to the second object is used as a further input to the machine interface program. 7. The method according to claim 1, wherein the at least two derivatives of temperature are with respect to position. 8. 
The method according to claim 1, wherein determining the pattern comprises determining a temperature distribution of at least one sample line in the at least one thermal image which can have any orientation within the thermal image. 9. The method according to claim 1, wherein determining the pattern comprises determining a cluster in the thermal image which satisfies one or more constraints on its size and/or average temperature. 10. The method according to claim 1, further comprising providing a sequence of thermal images which comprises at least two thermal images of a portion of the second object. 11. The method according to claim 10, wherein determining the pattern comprises determining a change of temperature between the at least two thermal images and determining whether the change is above a defined threshold. 12. The method according to claim 10, wherein determining the pattern comprises determining a derivative of temperature between the at least two thermal images and determining whether the derivative is above a defined threshold. 13. The method according to claim 10, wherein determining the pattern comprises determining a first change of temperature between the at least two thermal images and a second change of temperature between the at least two thermal images, and using the first and second changes and derivatives of the first and second changes for detecting a touch. 14. The method according to claim 1, wherein the method is applied within a human machine interface in an Augmented Reality application. 15. The method according to claim 14, wherein detecting a touch comprises detecting a part of a user touching at least a part of the second object at a place where virtual information is displayed to the user, wherein upon detecting the touch the virtual information is manipulated. 16. 
The method according to claim 1, wherein the method is used within an application using a video-see-through setup, an optical-see-through setup, or a projective AR setup. 17. The method according to claim 1, wherein the method is used with a hardware setup that does not include a touch screen interface. 18. The method according to claim 1, wherein the method only detects residual heat caused by a touch which has been imaged by the thermal camera while it occurred. 19. The method according to claim 1, wherein a human-computer-interface handles a touch detected in the thermal image according to at least one of the following: the position of the touch relative to a real object, the position of the touch relative to a virtual object, or the global position of the capturing device that captures the thermal image. 20. The method according to claim 1, wherein a human machine interface stores a history of detected touches. 21. A method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising the steps of: providing at least one thermal image of a portion of the second object; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining a temperature distribution between at least two temperature intervals which are indicative of a respective temperature of the first and second object; and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 22. 
The method according to claim 21, further comprising determining whether a first of the intervals shows a first increase in temperature followed by a second increase, which is steeper than the first increase, and whether a second of the intervals shows a first descent in temperature followed by a second descent, which is less steep than the first descent. 23. The method according to claim 21, further comprising calculating a histogram of temperatures in the at least one thermal image and using the histogram as a basis to define at least one of the first and second intervals and an interval between the first and second interval that is determined for detecting a touch. 24. A method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising the steps of: providing at least one thermal image of a portion of the second object; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change; using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object; imaging the portion of the second object by a visible light camera and a thermal camera providing the at least one thermal image; providing a first spatial transformation between the visible light camera and the thermal camera; providing a second spatial transformation between the visible light camera and the imaged portion of the second object; concatenating the first and second spatial transformations resulting in a third spatial transformation between a coordinate system of the imaged portion of the second object and a coordinate system of the thermal camera; determining a fourth spatial transformation between the thermal camera and the imaged 
portion of the second object based on the third spatial transformation; and determining a position of the touch in the coordinate system of the imaged portion of the second object according to the determined fourth spatial transformation. 25. The method according to claim 24, further comprising the steps of determining a position of a touch in the at least one thermal image, wherein the position of the touch in the coordinate system of the imaged portion of the second object is determined by intersecting a ray originating from an origin of the thermal camera transformed to the coordinate system of the imaged portion of the second object and pointing towards the location of the detected touch on the image plane of the thermal camera with a model of the imaged portion of the second object, wherein the intersection is used to trigger a touch event at that position. 26. A non-transitory computer readable medium comprising software code sections which are adapted to perform a method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, which method comprises the steps of: providing at least one thermal image of a portion of the second object; determining at least two temperature differences between temperatures measured at different positions in the at least one thermal image or at least two derivatives of temperature in the at least one thermal image; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining one or more differences between the at least two temperature differences or between the at least two derivatives; and using the determined pattern for detecting a touch between the at least part of the first object and 
the at least part of the second object. 27. A device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising: a processing device adapted to receive image information of at least one thermal image of a portion of the second object, the processing device configured to: determine at least two temperature differences between temperatures measured at different positions in the at least one thermal image or at least two derivatives of temperature in the at least one thermal image; determine in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining one or more differences between the at least two temperature differences or between the at least two derivatives; and to use the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 28. The device according to claim 27, wherein the processing device is communicating with a thermal camera for providing the at least one thermal image, wherein at least one of the processing device and the thermal camera is implemented in or associated with a head-mounted display, a handheld device, or a projector for performing projector-based Augmented Reality. 29. 
A device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising: a processing device adapted to receive image information of at least one thermal image of a portion of the second object, the processing device configured to: determine in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining a temperature distribution between at least two temperature intervals which are indicative of a respective temperature of the first and second object; and to use the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 30. A device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising: a processing device adapted to receive image information of at least one thermal image of a portion of the second object, the processing device configured to: determine in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change; use the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object; image the portion of the second object by a visible light camera and a thermal camera providing the at least one thermal image; provide a first spatial transformation between the visible light camera and the thermal camera; provide a second spatial transformation between the visible light camera and the imaged portion of 
the second object; concatenate the first and second spatial transformations resulting in a third spatial transformation between a coordinate system of the imaged portion of the second object and a coordinate system of the thermal camera; determine a fourth spatial transformation between the thermal camera and the imaged portion of the second object based on the third spatial transformation; and determine a position of the touch in the coordinate system of the imaged portion of the second object according to the determined fourth spatial transformation.
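The transform chain in claims 24-25 (visible-to-thermal and visible-to-object transformations concatenated into an object-to-thermal transformation) can be sketched with 2-D homogeneous translations, chosen purely to keep the example short; a real calibration would use full 4x4 rigid-body transforms. Reading "concatenating" as composing the visible-to-thermal transform with the inverse of the visible-to-object transform is an assumption, and all names here are illustrative.

```python
def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    """Homogeneous 2-D translation matrix."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def invert_translation(t):
    """Inverse of a pure translation (negate the offsets)."""
    return translation(-t[0][2], -t[1][2])

def object_to_thermal(T_vt, T_vo):
    """Concatenate: object -> visible camera (inverse of T_vo),
    then visible camera -> thermal camera (T_vt)."""
    return matmul3(T_vt, invert_translation(T_vo))
```

With this object-to-thermal transform in hand, a detected touch position in the thermal image can be mapped back to object coordinates, which is the role the fourth transformation plays in claim 24.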
The present disclosure is related to a method and device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object. The method includes providing at least one thermal image of a portion of the second object, determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object.1. A method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising the steps of: providing at least one thermal image of a portion of the second object; determining at least two temperature differences between temperatures measured at different positions in the at least one thermal image or at least two derivatives of temperature in the at least one thermal image; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining one or more differences between the at least two temperature differences or between the at least two derivatives; and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 2. The method according to claim 1, where the at least part of the second object and the portion of the second object overlap or do not overlap. 3. The method according to claim 1, wherein the first object is at least part of a human body. 4. 
The method according to claim 1, further comprising determining a position, a size, an orientation, a direction, a trajectory, or a shape of the detected touch in the thermal image. 5. The method according to claim 1, further providing the detected touch as an input to a machine interface program, wherein the detected touch changes a state in the machine interface program. 6. The method according to claim 5, wherein the position and/or orientation of a camera or an eye or a display relative to the second object is used as a further input to the machine interface program. 7. The method according to claim 1, wherein the at least two derivatives of temperature are with respect to position. 8. The method according to claim 1, wherein determining the pattern comprises determining a temperature distribution of at least one sample line in the at least one thermal image which can have any orientation within the thermal image. 9. The method according to claim 1, wherein determining the pattern comprises determining a cluster in the thermal image which satisfies one or more constraints on its size and/or average temperature. 10. The method according to claim 1, further comprising providing a sequence of thermal images which comprises at least two thermal images of a portion of the second object. 11. The method according to claim 10, wherein determining the pattern comprises determining a change of temperature between the at least two thermal images and determining whether the change is above a defined threshold. 12. The method according to claim 10, wherein determining the pattern comprises determining a derivative of temperature between the at least two thermal images and determining whether the derivative is above a defined threshold. 13. 
The method according to claim 10, wherein determining the pattern comprises determining a first change of temperature between the at least two thermal images and a second change of temperature between the at least two thermal images, and using the first and second changes and derivatives of the first and second changes for detecting a touch. 14. The method according to claim 1, wherein the method is applied within a human machine interface in an Augmented Reality application. 15. The method according to claim 14, wherein detecting a touch comprises detecting a part of a user touching at least a part of the second object at a place where virtual information is displayed to the user, wherein upon detecting the touch the virtual information is manipulated. 16. The method according to claim 1, wherein the method is used within an application using a video-see-through setup, an optical-see-through setup, or a projective AR setup. 17. The method according to claim 1, wherein the method is used with a hardware setup that does not include a touch screen interface. 18. The method according to claim 1, wherein the method only detects residual heat caused by a touch which has been imaged by the thermal camera while it occurred. 19. The method according to claim 1, wherein a human-computer-interface handles a touch detected in the thermal image according to at least one of the following: depending on the position of the touch relative to a real object, relative to a virtual object, depending on the global position of the capturing device that captures the thermal image. 20. The method according to claim 1, wherein a human machine interface stores a history of detected touches. 21. 
A method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising the steps of: providing at least one thermal image of a portion of the second object; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining a temperature distribution between at least two temperature intervals which are indicative of a respective temperature of the first and second object; and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 22. The method according to claim 21, further comprising determining whether a first of the intervals shows a first increase in temperature followed by a second increase, which is steeper than the first increase, and whether a second of the intervals shows a first descent in temperature followed by a second descent, which is less steep than the first descent. 23. The method according to claim 21, further comprising calculating a histogram of temperatures in the at least one thermal image and using the histogram as a basis to define at least one of the first and second intervals and an interval between the first and second interval that is determined for detecting a touch. 24. 
A method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising the steps of: providing at least one thermal image of a portion of the second object; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change; using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object; imaging the portion of the second object by a visible light camera and a thermal camera providing the at least one thermal image; providing a first spatial transformation between the visible light camera and the thermal camera; providing a second spatial transformation between the visible light camera and the imaged portion of the second object; concatenating the first and second spatial transformations resulting in a third spatial transformation between a coordinate system of the imaged portion of the second object and a coordinate system of the thermal camera; determining a fourth spatial transformation between the thermal camera and the imaged portion of the second object based on the third spatial transformation; and determining a position of the touch in the coordinate system of the imaged portion of the second object according to the determined fourth spatial transformation. 25. 
The method according to claim 24, further comprising the steps of determining a position of a touch in the at least one thermal image, wherein the position of the touch in the coordinate system of the imaged portion of the second object is determined by intersecting a ray originating from an origin of the thermal camera transformed to the coordinate system of the imaged portion of the second object and pointing towards the location of the detected touch on the image plane of the thermal camera with a model of the imaged portion of the second object, wherein the intersection is used to trigger a touch event at that position. 26. A non-transitory computer readable medium comprising software code sections which are adapted to perform a method of detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, which method comprises the steps of: providing at least one thermal image of a portion of the second object; determining at least two temperature differences between temperatures measured at different positions in the at least one thermal image or at least two derivatives of temperature in the at least one thermal image; determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining one or more differences between the at least two temperature differences or between the at least two derivatives; and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 27. 
A device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising: a processing device adapted to receive image information of at least one thermal image of a portion of the second object, the processing device configured to: determine at least two temperature differences between temperatures measured at different positions in the at least one thermal image or at least two derivatives of temperature in the at least one thermal image; determine in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining one or more differences between the at least two temperature differences or between the at least two derivatives; and to use the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 28. The device according to claim 27, wherein the processing device is communicating with a thermal camera for providing the at least one thermal image, wherein at least one of the processing device and the thermal camera is implemented in or associated with a head-mounted display, a handheld device, or a projector for performing projector-based Augmented Reality. 29. 
A device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising: a processing device adapted to receive image information of at least one thermal image of a portion of the second object, the processing device configured to: determine in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, wherein determining the pattern comprises determining a temperature distribution between at least two temperature intervals which are indicative of a respective temperature of the first and second object; and to use the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object. 30. A device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object, comprising: a processing device adapted to receive image information of at least one thermal image of a portion of the second object, the processing device configured to: determine in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change; use the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object; image the portion of the second object by a visible light camera and a thermal camera providing the at least one thermal image; provide a first spatial transformation between the visible light camera and the thermal camera; provide a second spatial transformation between the visible light camera and the imaged portion of 
the second object; concatenate the first and second spatial transformations resulting in a third spatial transformation between a coordinate system of the imaged portion of the second object and a coordinate system of the thermal camera; determine a fourth spatial transformation between the thermal camera and the imaged portion of the second object based on the third spatial transformation; and determine a position of the touch in the coordinate system of the imaged portion of the second object according to the determined fourth spatial transformation.
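The thermal-touch claims above look for a pattern of temperature differences in a thermal image: a touch leaves a warm residue whose temperature lies between the surface temperature and the finger temperature, and which stands out from its neighbourhood. The following is an illustrative sketch of that pattern test only; the function name, the neighbour test, and the `min_step` threshold are assumptions for the example, not details from the patent, and the thermal image is a plain 2D list of temperatures in degrees Celsius.

```python
# Sketch of the temperature-difference pattern test: a touch residue is a
# pixel whose temperature falls between the two object temperatures and
# which differs from at least one neighbour by a minimum step.
def detect_touch(thermal, t_surface, t_finger, min_step=2.0):
    """Return (touched, position), position being the warmest residue pixel."""
    h, w = len(thermal), len(thermal[0])
    lo, hi = sorted((t_surface, t_finger))
    best = None
    for y in range(h):
        for x in range(w):
            t = thermal[y][x]
            # Residue must lie strictly between the two object temperatures.
            if not (lo + min_step <= t <= hi - min_step):
                continue
            # Require a temperature difference against at least one
            # neighbour, i.e. the pixel stands out from the surface.
            neighbours = [thermal[ny][nx]
                          for ny, nx in ((y - 1, x), (y + 1, x),
                                         (y, x - 1), (y, x + 1))
                          if 0 <= ny < h and 0 <= nx < w]
            if any(abs(t - n) >= min_step for n in neighbours):
                if best is None or t > best[0]:
                    best = (t, (x, y))
    return (best is not None, best[1] if best else None)
```

In a full pipeline the returned image position would then be transformed into the coordinate system of the touched object (claim 25's ray-model intersection) before triggering a touch event.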
2,600
9,838
9,838
14,878,477
2,698
A support for image processing is provided, comprising: (a) detecting respective face regions from images consecutively photographed for a first person at predetermined time intervals by an image pickup unit to display images of the face regions detected in relation to the first person in a first region of a screen, and providing a user interface for indicating that a specific face image is selected from the face images of the first person displayed in the first region; (b) additionally displaying the specific face image through a second region adjacent to the first region; and (c) displaying a synthesized image using the specific face image as a representative face of the first person, when the specific face image displayed through the second region is selected.
1-33. (canceled) 34. At least one machine readable medium having instructions stored thereon, which when executed by one or more machines, cause the machines to: detect a face region of a person in a plurality of photographed images; detect selection of the face region; detect selection of a face image of the person from among two or more face images, wherein the two or more face images correspond to a face region detected in the plurality of photographed images; and store a synthesized image using the selected face image. 35. The at least one machine readable medium according to claim 34, having instructions stored thereon, which when executed by one or more machines, further cause the one or more machines to display the synthesized image. 36. The at least one machine readable medium according to claim 34, having instructions stored thereon, which when executed by one or more machines, further cause the one or more machines to provide a shape around the selected face image for the person displayed at the region of the screen. 37. The at least one machine readable medium according to claim 34, having instructions stored thereon, which when executed by one or more machines, further cause the one or more machines to: detect a face region of the person and a second person in each of the plurality of photographed images; and display face images of the person and the second person. 38. The at least one machine readable medium according to claim 37, having instructions stored thereon, which when executed by one or more machines, further cause the machines to: detect selection of the face region of the second person; detect selection of a face image of the second person from among two or more face images; and store a synthesized image using the selected face image of the second person. 39. 
The at least one machine readable medium according to claim 34, having instructions stored thereon, which when executed by one or more machines, further cause the machines to generate the synthesized image using a base image. 40. The at least one machine readable medium according to claim 39, wherein the synthesized image is generated by synthesizing the selected face image and the base image. 41. The at least one machine readable medium according to claim 34, wherein the selection of a face image from among the two or more face images is detected via a user interface. 42. A system comprising: a storage medium to store an image processing application; and a processor to execute the image processing application to: detect a face region of a person in a plurality of photographed images; detect selection of the face region; detect selection of a face image of the person from among two or more face images, wherein the two or more face images correspond to a face region detected in the plurality of photographed images; and store a synthesized image using the selected face image. 43. The system according to claim 42, further comprising an image pickup unit to capture the plurality of photographed images. 44. The system according to claim 42, wherein the processor is further to execute the image processing application to display the synthesized image. 45. The system according to claim 42, wherein the processor is further to execute the image processing application to provide a shape around the detected face image for the person displayed at the region of the screen. 46. The system according to claim 42, wherein the processor is further to execute the image processing application to: detect a face region of the person and a second person in each of the plurality of photographed images; and display face images of the person and the second person. 47. 
The system according to claim 46, wherein the processor is further to execute the image processing application to: detect selection of the face region of the second person; detect selection of a face image of the second person from among two or more face images; and store a synthesized image using the selected face image of the second person. 48. A method comprising: detecting a face region of a person in a plurality of photographed images; detecting selection of the face region; detecting selection of a face image of the person from among two or more face images, wherein the two or more face images correspond to a face region detected in the plurality of photographed images; and storing a synthesized image using the selected face image. 49. The method according to claim 48, further comprising displaying the synthesized image. 50. The method according to claim 48, further comprising providing a shape around the selected face image for the person displayed at the region of the screen. 51. The method according to claim 48, further comprising: detecting a face region of the person and a second person in each of the plurality of photographed images; and displaying face images of the person and the second person. 52. The method according to claim 51, further comprising: detecting selection of the face region of the second person; detecting selection of a face image of the second person from among two or more face images; and storing a synthesized image using the selected face image of the second person. 53. 
An apparatus comprising: an image pickup unit to capture a plurality of photographed images; and a processor to execute: a face detection unit to detect a face region of a person in a plurality of photographed images, detect selection of the face region, detect selection of a face image of the person from among two or more face images, wherein the two or more face images correspond to a face region detected in the plurality of photographed images; and a storage device to store a synthesized image using the selected face image. 54. The apparatus according to claim 53, further comprising an image pickup unit to capture the plurality of photographed images. 55. The apparatus according to claim 53, wherein the processor is further to execute a display unit to display the synthesized image. 56. The apparatus according to claim 53, wherein the display unit provides a shape around the detected face image for the person displayed at the region of the screen. 57. The apparatus according to claim 56, wherein the face detection unit detects a face region of the person and a second person in each of the plurality of photographed images and the display unit displays face images of the person and the second person. 58. The apparatus according to claim 57, wherein the face detection unit detects selection of the face region of the second person, detects selection of a face image of the second person from among two or more face images and the storage device stores a synthesized image using the selected face image of the second person.
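The claims above describe a best-of-burst flow: detect each person's face in every burst frame, let the user pick one face image per person, and synthesize the picks onto a base frame. The following sketch illustrates only the final synthesis step with plain nested lists standing in for images; the function name, the region format `(x0, y0, x1, y1)`, and the direct pixel copy (instead of real blending) are all assumptions for the example.

```python
# Illustrative sketch: paste each person's chosen face crop onto a copy of
# the base frame. frames is a list of 2D pixel grids; face_regions maps a
# person to one face region per frame; selections maps a person to the
# index of the frame whose face image the user selected.
def synthesize_best_of(frames, face_regions, selections, base_index=0):
    base = [row[:] for row in frames[base_index]]  # copy, leave base intact
    for person, frame_idx in selections.items():
        x0, y0, x1, y1 = face_regions[person][frame_idx]
        chosen = frames[frame_idx]
        # Copy the selected face crop over the same region of the base frame.
        for y in range(y0, y1):
            for x in range(x0, x1):
                base[y][x] = chosen[y][x]
    return base
```

A real implementation would feather or blend the crop boundary; the nested-loop copy keeps the structure of the claimed step visible.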
2,600
9,839
9,839
14,724,484
2,616
An application sends primitives to a graphics processing system so that an image of a 3D scene can be rendered. The primitives are placed into primitive blocks for storage and retrieval from a parameter memory. Rather than simply placing the first primitives into a primitive block until the primitive block is full and then placing further primitives into the next primitive block, multiple primitive blocks can be “open” such that a primitive block allocation module can allocate primitives to one of the open primitive blocks to thereby sort the primitives into primitive blocks according to their spatial positions. By grouping primitives together into primitive blocks in accordance with their spatial positions, the performance of a rasterization module can be improved. For example, in a tile-based rendering system this may mean that fewer primitive blocks need to be fetched by a hidden surface removal module in order to process a tile.
1. A method of allocating primitives to primitive blocks at a primitive block allocation module of a computer graphics processing system, which includes a data store for storing a set of primitive blocks to which primitives can be allocated, the method comprising: receiving a sequence of primitives; and for each of the received primitives, if at least one primitive block is stored in the data store: (i) comparing an indication of a spatial position of the received primitive with at least one indication of a spatial position of the at least one primitive block stored in the data store; and (ii) allocating the received primitive to a primitive block based on the result of the comparison, such that the received primitive is allocated to a primitive block in accordance with its spatial position. 2. The method of claim 1 wherein, for each of the received primitives, if at least two primitive blocks are stored in the data store, then operation (i) comprises comparing an indication of a spatial position of the received primitive with respective indications of spatial positions of the at least two primitive blocks stored in the data store. 3. The method of claim 1 wherein if the result of the comparison indicates that the spatial position of the received primitive matches the spatial position of a single one of the primitive blocks stored in the data store then the received primitive is allocated to said one of the primitive blocks. 4. The method of claim 1 wherein if the result of the comparison indicates that the spatial position of the received primitive matches the spatial position of a plurality of the primitive blocks stored in the data store then the method comprises merging said plurality of primitive blocks if possible to form a merged primitive block, wherein the received primitive is allocated to the merged primitive block. 5. 
The method of claim 1 wherein if the result of the comparison indicates that the spatial position of the received primitive does not match the spatial position of any of the primitive blocks stored in the data store then the method comprises creating a new primitive block to be included in said set of primitive blocks, wherein the received primitive is allocated to the new primitive block. 6. The method of claim 5 wherein if the data store does not have space to store the new primitive block then the method further comprises outputting one of the primitive blocks from the data store to provide space for the new primitive block. 7. The method of claim 6 further comprising selecting a primitive block to be outputted from the data store in accordance with an outputting scheme, wherein the outputting scheme comprises one or more of: (i) a round robin scheme; (ii) an output the biggest scheme in which the primitive block with the most primitives is selected to be outputted; (iii) an output the smallest scheme in which the primitive block with the fewest primitives is selected to be outputted; (iv) an output the oldest scheme in which the primitive block which has been in the data store for the longest amount of time is selected to be outputted; (v) a merge and output smallest and biggest scheme in which the primitive block with the most primitives and the primitive block with the fewest primitives are merged and the resulting merged primitive block is then selected to be outputted; (vi) a merge threshold scheme in which any primitive blocks with fewer than a threshold number of primitives are merged and the resulting merged primitive block is then selected to be outputted; (vii) a merge smallest scheme in which the two primitive blocks with the fewest primitives are merged and the resulting merged primitive block is kept in the data store, wherein one of the primitive blocks from the data store is selected to be outputted if merging is not possible. 8. 
The method of claim 1 wherein each of the primitive blocks includes a header which includes state information indicating how to render the primitives in the primitive block. 9. A primitive block allocation module for allocating primitives to primitive blocks in a computer graphics processing system, the primitive block allocation module comprising: a data store configured to store a set of primitive blocks to which primitives can be allocated; and allocation logic configured to: (a) receive a sequence of primitives, and (b) for each of the received primitives, if at least one primitive block is stored in the data store: (i) compare an indication of a spatial position of the received primitive with at least one indication of a spatial position of the at least one primitive block stored in the data store, and (ii) allocate the received primitive to a primitive block based on the result of the comparison, to thereby allocate the received primitive to a primitive block in accordance with its spatial position. 10. The primitive block allocation module of claim 9 wherein the data store has a limit on the number of primitive blocks which it can store, wherein the limit is 2, 3 or 4. 11. The primitive block allocation module of claim 10 wherein the allocation logic is configured to dynamically adapt the limit based on an analysis of the received primitives. 12. The primitive block allocation module of claim 9 wherein the indication of a spatial position of the received primitive comprises a vertex of the received primitive, and an indication of a spatial position of a primitive block comprises a vertex of a primitive included in the primitive block, wherein the allocation logic is configured to compare the vertices of the received primitive and the vertices of the primitive blocks stored in the data store to determine whether the received primitive has one or more shared vertices with a primitive block stored in the data store. 13. 
The primitive block allocation module of claim 9 wherein the indication of a spatial position of the received primitive comprises a bounding box of the received primitive, and an indication of a spatial position of a primitive block comprises a bounding box of the primitive block, wherein the allocation logic is configured to compare the bounding box of the received primitive and the bounding boxes of the primitive blocks stored in the data store to determine whether the bounding box of the received primitive overlaps with, or is within a minimum distance from overlapping with, the bounding box of a primitive block stored in the data store. 14. The primitive block allocation module of claim 9 wherein the allocation module is further configured such that, responsive to the received primitive being allocated to a primitive block and if the received primitive does not lie within a bounding box of the primitive block, the bounding box of the primitive block is updated to include the received primitive. 15. The primitive block allocation module of claim 9 further configured to output a primitive block from the data store if the primitive block is full. 16. The primitive block allocation module of claim 15 wherein a primitive block is full if at least one of: (i) the number of vertices in the primitive block is greater than or equal to a vertex threshold, and (ii) the number of primitives in the primitive block is greater than or equal to a primitive threshold. 17. The primitive block allocation module of claim 9 wherein the primitive block allocation module is included as part of a tile-based graphics processing system, the tile-based graphics processing system further comprising a tiling module configured to determine per-tile display lists which indicate which primitives are present within each of a plurality of tiles. 18. 
The primitive block allocation module of claim 17 wherein the tile-based graphics system further comprises a rasterization block which is configured to implement hidden surface removal and texturing or shading on a per-tile basis using the per-tile display lists. 19. The primitive block allocation module of claim 17 wherein the indication of a spatial position of the received primitive comprises a bounding box of the received primitive, and an indication of a spatial position of a primitive block comprises a bounding box of the primitive block, wherein the bounding boxes have a per-tile resolution. 20. A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to allocate primitives to primitive blocks at a primitive block allocation module which includes a data store for storing a set of primitive blocks to which primitives can be allocated, the allocation of primitives to primitive blocks comprising: receiving a sequence of primitives; and for each of the received primitives, if at least one primitive block is stored in the data store: (i) comparing an indication of a spatial position of the received primitive with at least one indication of a spatial position of the at least one primitive block stored in the data store; and (ii) allocating the received primitive to a primitive block based on the result of the comparison, such that the received primitive is allocated to a primitive block in accordance with its spatial position. 21. 
A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed at a computer system for generating a representation of a digital circuit from definitions of circuit elements and data defining rules for combining those circuit elements, cause the computer system to generate a graphics processing unit comprising a primitive block allocation module which is configured to allocate primitives to primitive blocks, the primitive block allocation module comprising: a data store configured to store a set of primitive blocks to which primitives can be allocated; and allocation logic configured to: (a) receive a sequence of primitives, and (b) for each of the received primitives, if at least one primitive block is stored in the data store: (i) compare an indication of a spatial position of the received primitive with at least one indication of a spatial position of the at least one primitive block stored in the data store, and (ii) allocate the received primitive to a primitive block based on the result of the comparison, to thereby allocate the received primitive to a primitive block in accordance with its spatial position.
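The allocation loop of claim 1 can be sketched in a few lines, assuming axis-aligned bounding boxes as the "indication of spatial position" (one of the options in claims 13 and 19). The block representation, the "output the oldest" eviction rule, and the capacity threshold are illustrative choices, not details mandated by the claims.

```python
# Sketch of spatial primitive-block allocation: compare each primitive's
# bounding box against the open blocks and allocate it to a matching block,
# so primitives end up grouped by spatial position.
def overlaps(a, b):
    """Axis-aligned bounding boxes as (x0, y0, x1, y1)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def allocate(primitives, max_blocks=3, block_capacity=8):
    """Sort primitives (each a bounding box) into spatially coherent blocks."""
    open_blocks = []   # each block: {"bbox": ..., "prims": [...]}
    emitted = []       # full or evicted blocks, as written to parameter memory
    for bbox in primitives:
        # (i) compare the primitive's spatial position with each open block
        match = next((b for b in open_blocks if overlaps(b["bbox"], bbox)), None)
        if match is None:
            # No spatial match: open a new block, evicting the oldest if the
            # data store is full (one of the outputting schemes in claim 7).
            if len(open_blocks) == max_blocks:
                emitted.append(open_blocks.pop(0))
            match = {"bbox": bbox, "prims": []}
            open_blocks.append(match)
        # (ii) allocate the primitive and grow the block's bounding box
        match["prims"].append(bbox)
        match["bbox"] = union(match["bbox"], bbox)
        if len(match["prims"]) >= block_capacity:
            emitted.append(match)
            open_blocks.remove(match)
    return emitted + open_blocks
```

With two spatially separated clusters of primitives, this yields two blocks instead of one block mixing both regions, which is exactly the locality a tile-based hidden surface removal pass benefits from.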
The primitive block allocation module of claim 9 wherein the indication of a spatial position of the received primitive comprises a bounding box of the received primitive, and an indication of a spatial position of a primitive block comprises a bounding box of the primitive block, wherein the allocation logic is configured to compare the bounding box of the received primitive and the bounding boxes of the primitive blocks stored in the data store to determine whether the bounding box of the received primitive overlaps with, or is within a minimum distance from overlapping with, the bounding box of a primitive block stored in the data store. 14. The primitive block allocation module of claim 9 wherein the allocation module is further configured such that, responsive to the received primitive being allocated to a primitive block and if the received primitive does not lie within a bounding box of the primitive block, the bounding box of the primitive block is updated to include the received primitive. 15. The primitive block allocation module of claim 9 further configured to output a primitive block from the data store if the primitive block is full. 16. The primitive block allocation module of claim 15 wherein a primitive block is full if at least one of: (i) the number of vertices in the primitive block is greater than or equal to a vertex threshold, and (ii) the number of primitives in the primitive block is greater than or equal to a primitive threshold. 17. The primitive block allocation module of claim 9 wherein the primitive block allocation module is included as part of a tile-based graphics processing system, the tile-based graphics processing system further comprising a tiling module configured to determine per-tile display lists which indicate which primitives are present within each of a plurality of tiles. 18. 
The primitive block allocation module of claim 17 wherein the tile-based graphics system further comprises a rasterization block which is configured to implement hidden surface removal and texturing or shading on a per-tile basis using the per-tile display lists. 19. The primitive block allocation module of claim 17 wherein the indication of a spatial position of the received primitive comprises a bounding box of the received primitive, and an indication of a spatial position of a primitive block comprises a bounding box of the primitive block, wherein the bounding boxes have a per-tile resolution. 20. A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to allocate primitives to primitive blocks at a primitive block allocation module which includes a data store for storing a set of primitive blocks to which primitives can be allocated, the allocation of primitives to primitive blocks comprising: receiving a sequence of primitives; and for each of the received primitives, if at least one primitive block is stored in the data store: (i) comparing an indication of a spatial position of the received primitive with at least one indication of a spatial position of the at least one primitive block stored in the data store; and (ii) allocating the received primitive to a primitive block based on the result of the comparison, such that the received primitive is allocated to a primitive block in accordance with its spatial position. 21. 
A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed at a computer system for generating a representation of a digital circuit from definitions of circuit elements and data defining rules for combining those circuit elements, cause the computer system to generate a graphics processing unit comprising a primitive block allocation module which is configured to allocate primitives to primitive blocks, the primitive block allocation module comprising: a data store configured to store a set of primitive blocks to which primitives can be allocated; and allocation logic configured to: (a) receive a sequence of primitives, and (b) for each of the received primitives, if at least one primitive block is stored in the data store: (i) compare an indication of a spatial position of the received primitive with at least one indication of a spatial position of the at least one primitive block stored in the data store, and (ii) allocate the received primitive to a primitive block based on the result of the comparison, to thereby allocate the received primitive to a primitive block in accordance with its spatial position.
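The allocation loop described in claims 1–7 above can be sketched as follows. This is a minimal illustration, not the patented implementation: all names (`Allocator`, `PrimitiveBlock`, `MAX_BLOCKS`, `MAX_PRIMS`) are hypothetical, the spatial-position indication is taken to be a 2D bounding box (one of the options in claims 13 and 19), and the outputting scheme is the "output the oldest" option from claim 7.

```python
# Hypothetical sketch of spatially sorted primitive-block allocation.
# Several primitive blocks are "open" at once; each incoming primitive is
# compared against the open blocks' bounding boxes and allocated to a
# matching block, so blocks group spatially local primitives together.
from dataclasses import dataclass, field
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (min_x, min_y, max_x, max_y)

MAX_BLOCKS = 3  # claim 10: store limit on open blocks of 2, 3 or 4
MAX_PRIMS = 4   # claim 16: primitive-count threshold per block

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box overlap test (claim 13 comparison)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def merge_box(a: Box, b: Box) -> Box:
    """Smallest box containing both a and b (claim 14 box update)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

@dataclass
class PrimitiveBlock:
    bbox: Box
    prims: List[Box] = field(default_factory=list)

class Allocator:
    def __init__(self) -> None:
        self.store: List[PrimitiveBlock] = []    # open blocks, oldest first
        self.flushed: List[PrimitiveBlock] = []  # blocks output to memory

    def allocate(self, prim: Box) -> None:
        # (i) compare the primitive's box with each open block's box
        for block in self.store:
            if overlaps(prim, block.bbox):
                # (ii) spatial match: allocate and grow the block's box
                block.prims.append(prim)
                block.bbox = merge_box(block.bbox, prim)
                if len(block.prims) >= MAX_PRIMS:  # claim 16: block full
                    self.store.remove(block)
                    self.flushed.append(block)
                return
        # No spatial match: open a new block (claim 5), outputting the
        # oldest open block if the store is full (claims 6 and 7(iv)).
        if len(self.store) >= MAX_BLOCKS:
            self.flushed.append(self.store.pop(0))
        self.store.append(PrimitiveBlock(bbox=prim, prims=[prim]))
```

For example, two overlapping triangles' boxes land in one block while a distant primitive opens a second block, which is the grouping the abstract says reduces how many blocks a hidden surface removal module must fetch per tile.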
2,600
9,840
9,840
14,733,141
2,666
Systems and methods for creating non-orthogonal dimensionality between signals are disclosed. Signals are received from at least one electronic device. An adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals is determined. An instruction is communicated to the first device to implement the adjustment of the parameter.
1. A method for creating non-orthogonal dimensionality between signals, the method comprising: receiving signals from at least one electronic device; determining an adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the first device, an instruction to implement the adjustment of the parameter. 2. The method of claim 1, wherein receiving signals from at least one electronic device includes receiving signals that overlap in at least one of time and frequency. 3. The method of claim 1, further comprising decoding the signals based on a multi-user detection technique. 4. The method of claim 1, further comprising decoding the signals as multiple-input and multiple-output signals from a single electronic device. 5. The method of claim 1, wherein receiving signals from at least one electronic device includes receiving signals from at least one of a wireless transmit/receive unit (WTRU), user equipment (UE), a mobile station, a fixed subscriber unit, a mobile subscriber unit, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, and a consumer electronic device. 6. The method of claim 1, wherein determining an adjustment of a parameter includes estimating the parameters of the signal of at least one of the electronic devices. 7. The method of claim 1, wherein determining an adjustment of a parameter includes determining an adjustment of at least one of the symbol phase, power level, channel, modulation rate, transmit symbol timing, coding rate, and time offset. 8. 
The method of claim 1, wherein determining an adjustment of a parameter includes determining an adjustment of symbol phase such that the adjustment results in the phase of the adjusted signal being different than the phase of one or more other received signals. 9. The method of claim 8, wherein determining an adjustment of symbol phase comprises setting the adjustment such that a symbol phase offset between the adjusted signal and one or more other received signals is linearly spaced between two angles. 10. The method of claim 8, wherein determining an adjustment of symbol phase comprises setting the adjustment such that symbol phase offsets within a grouping of electronic devices that includes a subset of the total number of electronic devices are linearly spaced between two angles. 11. The method of claim 8, further comprising: detecting a symbol phase offset between the signals; and decoding the signals partially or fully based on the detected symbol phase offset. 12. The method of claim 8, further comprising: determining an adjustment of one of a power, channel, transmit symbol timing, modulation rate, coding rate, and a time offset of at least one of the first device and a second device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the at least one of the first device and a second device, an instruction to implement the adjustment of the one of the power, channel, transmit symbol timing, modulation rate, coding rate, and the time offset. 13. The method of claim 1, wherein determining an adjustment of a parameter includes determining an adjustment of power level such that the adjustment results in the received power level of the adjusted signal being different than the received power level of one or more signals of other electronic devices. 14. 
The method of claim 13, wherein determining an adjustment of power level comprises setting the adjustment such that a power level difference between the adjusted signal and one or more other received signals is one of linearly and exponentially spaced. 15. The method of claim 13, wherein determining an adjustment of power level comprises setting the adjustment such that a power level difference between groups of signals exists to aid a group detector. 16. The method of claim 13, further comprising: detecting a power level difference between the signals; and decoding the signals partially or fully based on the detected power level difference. 17. The method of claim 1, wherein the signals are received at one or more of a different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate, the method further comprising: detecting the one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate, and decoding the signals based on at least the detected one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate. 18. 
A method comprising: receiving a first set of signals from a first plurality of electronic devices, wherein the signals are transmitted at the same time and on substantially the same frequency or an overlapping frequency; receiving a second set of signals from a second plurality of electronic devices, wherein the second set of signals are transmitted on a different frequency than the first set of signals; estimating the parameters of at least one of the signals of the first set of signals; estimating the parameters of at least one of the signals of the second set of signals; and instructing an electronic device from the first plurality of electronic devices to transmit on the frequency of the second set of signals if at least one parameter of a signal from the electronic device from the first plurality of electronic devices is more diverse with those of the second set of signals than with those of the first set of signals. 19. An electronic device comprising: at least a processor and memory configured to: receive signals from at least one electronic device; determine an adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicate, to the first device, an instruction to implement the adjustment of the parameter. 20. The electronic device of claim 19, wherein the signals arrive at the electronic device overlapping in at least one of time and frequency. 21. The electronic device of claim 19, further comprising decoding the signals based on a multi-user detection technique. 22. The electronic device of claim 19, wherein the at least a processor and memory are configured to decode the signals as multiple-input and multiple-output signals from a single electronic device. 23. 
The electronic device of claim 19, wherein the at least one electronic device are one of a wireless transmit/receive unit (WTRU), user equipment (UE), a mobile station, a fixed subscriber unit, a mobile subscriber unit, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, and a consumer electronic device. 24. The electronic device of claim 19, wherein the at least a processor and memory are configured to estimate the parameters of the signal of at least one of the electronic devices. 25. The electronic device of claim 19, wherein the at least a processor and memory are configured to determine an adjustment of at least one of the symbol phase, power level, channel, transmit symbol timing, modulation rate, coding rate, and time offset. 26. The electronic device of claim 19, wherein the at least a processor and memory are configured to determine an adjustment of phase such that the adjustment results in the phase of the adjusted signal being different than the phase of one or more other received signals. 27. The electronic device of claim 26, wherein the at least a processor and memory are configured to set the adjustment such that a symbol phase offset between the signals is linearly spaced between two angles as seen by the receiver. 28. The electronic device of claim 26, wherein the at least a processor and memory are configured to set the adjustment such that a symbol phase offset within a grouping of users that include a subset of the total number of users are linearly spaced between two angles. 29. The electronic device of claim 26, wherein the at least a processor and memory are configured to: detect a symbol phase offset between the signals; and decode the signals partially or fully based on the detected symbol phase offset. 30. 
The electronic device of claim 26, wherein the at least a processor and memory are further configured to: determine an adjustment of one of power, an angle of arrival, channel, transmit symbol timing, modulation rate, coding rate, and a time offset of at least one of the first device and a second device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicate, to the at least one of the first device and a second device, an instruction to implement the adjustment of the one of the power, the angle of arrival, channel, transmit symbol timing, modulation rate, coding rate, and the time offset. 31. The electronic device of claim 19, wherein the at least a processor and memory are configured to determine an adjustment of power level such that the adjustment results in the power level of the adjusted signal being different than the power level of one or more other received signals. 32. The electronic device of claim 31, wherein the at least a processor and memory are configured to set the adjustment such that a power level difference between the adjusted signal and one or more other received signals is one of linearly and exponentially spaced. 33. The electronic device of claim 31, wherein the at least a processor and memory are configured to set the adjustment such that a power level difference between groups of signals exists to aid a group detector. 34. The electronic device of claim 31, wherein the at least a processor and memory are configured to: detect a power level difference between the signals; and decode the signals partially or fully based on the detected power level difference. 35. 
The electronic device of claim 19, wherein the signals are received at one or more of a different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate, wherein the at least a processor and memory are configured to: detect the one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate; and decode the signals based on the detected one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate. 36. An electronic device comprising: at least a processor and memory configured to: receive a first set of signals from a plurality of electronic devices, wherein the signals are transmitted at the same time and on substantially the same frequency or an overlapping frequency; receive a second set of signals from a plurality of electronic devices, wherein the second set of signals are transmitted on a different frequency than the first set of signals; estimate the parameters of at least one of the signals of the first set of signals; estimate the parameters of at least one of the signals of the second set of signals; and instruct an electronic device from the first plurality of electronic devices to transmit on the frequency of the second set of signals if at least one parameter of a signal from the electronic device from the first plurality of electronic devices is more diverse with those of the second set of signals than with those of the first set of signals. 37. 
A non-transitory computer readable medium storing a computer program, executable by a machine, for creating non-orthogonal dimensionality between signals, the computer program comprising executable instructions for: receiving signals from at least one electronic device; determining an adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the first device, an instruction to implement the adjustment of the parameter. 38. A non-transitory computer readable medium storing a computer program, executable by a machine, for creating non-orthogonal dimensionality between signals, the computer program comprising executable instructions for: receiving a first set of signals from a first plurality of electronic devices, wherein the signals are transmitted at the same time and on substantially the same frequency or an overlapping frequency; receiving a second set of signals from a second plurality of electronic devices, wherein the second set of signals are transmitted on a different frequency than the first set of signals; estimating the parameters of at least one of the signals of the first set of signals; estimating the parameters of at least one of the signals of the second set of signals; and instructing an electronic device from the first plurality of electronic devices to transmit on the frequency of the second set of signals if at least one parameter of a signal from the electronic device from the first plurality of electronic devices is more diverse with those of the second set of signals than with those of the first set of signals.
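The parameter adjustments in the claims above (linearly spaced symbol phase offsets in claim 9, linearly or exponentially spaced power levels in claim 14) can be sketched as below. This is an illustrative assumption of how such targets might be computed, not the patented method or a real radio stack; all function names and the instruction-payload shape are hypothetical.

```python
# Hypothetical sketch: spacing symbol phases and receive power levels so
# that overlapping, non-orthogonal signals remain differentiable.
import math

def phase_offsets(n_devices: int, lo: float = 0.0, hi: float = math.pi) -> list:
    """Symbol phase offsets linearly spaced in [lo, hi) (claim 9)."""
    step = (hi - lo) / n_devices
    return [lo + i * step for i in range(n_devices)]

def power_levels(n_devices: int, base: float = 1.0, spacing: str = "linear",
                 step: float = 1.0, ratio: float = 2.0) -> list:
    """Receive power targets, linearly or exponentially spaced (claim 14)."""
    if spacing == "linear":
        return [base + i * step for i in range(n_devices)]
    return [base * ratio ** i for i in range(n_devices)]

def adjustment_for(device_index: int, n_devices: int) -> dict:
    """Instruction payload for one device (the 'communicate' step of claim 1)."""
    return {
        "symbol_phase": phase_offsets(n_devices)[device_index],
        "power_level": power_levels(n_devices, spacing="exponential")[device_index],
    }
```

For instance, four devices would be told to use phases 0, π/4, π/2 and 3π/4 and exponentially spaced powers 1, 2, 4 and 8, giving a multi-user detector the phase and power separation the claims rely on without making the signals orthogonal.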
Systems and methods for creating non-orthogonal dimensionality between signals are disclosed. Signals are received from at least one electronic device. An adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals is determined. An instruction is communicated to the first device to implement the adjustment of the parameter.1. A method for creating non-orthogonal dimensionality between signals, the method comprising: receiving signals from at least one electronic device; determining an adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the first device, an instruction to implement the adjustment of the parameter. 2. The method of claim 1, wherein receiving signals from at least one electronic device includes receiving signals that overlap in at least one of time and frequency. 3. The method of claim 1, further comprising decoding the signals based on a multi-user detection technique. 4. The method of claim 1, further comprising decoding the signals as multiple-input and multiple-output signals from a single electronic device. 5. The method of claim 1, wherein receiving signals from at least one electronic device includes receiving signals from at least one of a wireless transmit/receive unit (WTRU), user equipment (UE), a mobile station, a fixed subscriber unit, a mobile subscriber unit, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, and a consumer electronic device. 6. 
The method of claim 1, wherein determining an adjustment of a parameter includes estimating the parameters of the signal of at least one of the electronic devices. 7. The method of claim 1, wherein determining an adjustment of a parameter includes determining an adjustment of at least one of the symbol phase, power level, channel, modulation rate, transmit symbol timing, coding rate, and time offset 8. The method of claim 1, wherein determining an adjustment of a parameter includes determining an adjustment of symbol phase such that the adjustment results in the phase of the adjusted signal of being different than the phase of one or more other received signals. 9. The method of claim 8, wherein determining an adjustment of symbol phase comprises setting the adjustment such that a symbol phase offset between the adjusted signal and one or more other received signals is linearly spaced between two angles. 10. The method of claim 8, wherein determining an adjustment of symbol phase comprises setting the adjustment such that a symbol phase offset within a grouping of electronic devices that include a subset of the total number of electronic devices that are linearly spaced between two angles. 11. The method of claim 8, further comprising: detecting a symbol phase offset between the signals; and decoding the signals partially or fully based on the detected symbol phase offset. 12. 
The method of claim 8, further comprising: determining an adjustment of one of a power, channel, transmit symbol timing, modulation rate, coding rate, and a time offset of at least one of the first device and a second device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the at least one of the first device and a second device, an instruction to implement the adjustment of the one of the power, channel, transmit symbol timing, modulation rate, coding rate, and the time offset. 13. The method of claim 1, wherein determining an adjustment of a parameter includes determining an adjustment of power level such that the adjustment results in the received power level of the adjusted signal being different than the received power level of one or more signals of other electronic devices. 14. The method of claim 13, wherein determining an adjustment of power level comprises setting the adjustment such that a power level difference between the adjusted signal and one or more other received signals is one of linearly and exponentially spaced. 15. The method of claim 13, wherein determining an adjustment of power level comprises setting the adjustment such that a power level difference between groups of signals exists to aid a group detector. 16. The method of claim 13, further comprising: detecting a power level difference between the signals; and decoding the signals partially or fully based on the detected power level difference. 17. 
The method of claim 1, wherein the signals are received at one or more of a different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate, the method further comprising: detecting the one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate, and decoding the signals based on at least the detected one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate. 18. A method comprising: receiving a first set of signals from a first plurality of electronic devices, wherein the signals are transmitted at the same time and on substantially the same frequency or an overlapping frequency; receiving a second set of signals from a second plurality of electronic devices, wherein the second set of signals are transmitted on a different frequency than the first set of signals; estimating the parameters of at least one of the signals of the first set of signals; estimating the parameters of at least one of the signals of the second set of signals; and instructing an electronic device from the first plurality of electronic devices to transmit on the frequency of the second set of signals if at least one parameter of a signal from the electronic device from the first plurality of electronic devices is more diverse with those of the second set of signals than with those of the first set of signals. 19. An electronic device comprising: at least a processor and memory configured to: receive signals from at least one electronic device; determine an adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicate, to the first device, an instruction to implement the adjustment of the parameter. 20. 
The electronic device of claim 19, wherein the signals arrive at the electronic device overlapping in at least one of time and frequency. 21. The electronic device of claim 19, further comprising decoding the signals based on a multi-user detection technique. 22. The electronic device of claim 19, wherein the at least a processor and memory are configured to decode the signals as multiple-input and multiple-output signals from a single electronic device. 23. The electronic device of claim 19, wherein the at least one electronic device are one of a wireless transmit/receive unit (WTRU), user equipment (UE), a mobile station, a fixed subscriber unit, a mobile subscriber unit, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, and a consumer electronic device. 24. The electronic device of claim 19, wherein the at least a processor and memory are configured to estimate the parameters of the signal of at least one of the electronic devices. 25. The electronic device of claim 19, wherein the at least a processor and memory are configured to determine an adjustment of at least one of the symbol phase, power level, channel, transmit symbol timing, modulation rate, coding rate, and time offset. 26. The electronic device of claim 19, wherein the at least a processor and memory are configured to determine an adjustment of phase such that the adjustment results in the phase of the adjusted signal being different than the phase of one or more other received signals. 27. The electronic device of claim 26, wherein the at least a processor and memory are configured to set the adjustment such that a symbol phase offset between the signals is linearly spaced between two angles as seen by the receiver. 28. 
The electronic device of claim 26, wherein the at least a processor and memory are configured to set the adjustment such that a symbol phase offset within a grouping of users that include a subset of the total number of users are linearly spaced between two angles. 29. The electronic device of claim 26, wherein the at least a processor and memory are configured to: detect a symbol phase offset between the signals; and decode the signals partially or fully based on the detected symbol phase offset. 30. The electronic device of claim 26, wherein the at least a processor and memory are further configured to: determine an adjustment of one of power, an angle of arrival, channel, transmit symbol timing, modulation rate, coding rate, and a time offset of at least one of the first device and a second device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the at least one of the first device and a second device, an instruction to implement the adjustment of the one of the power, the angle of arrival, channel, transmit symbol timing, modulation rate, coding rate, and the time offset. 31. The electronic device of claim 19, wherein the at least a processor and memory are configured to determine an adjustment of power level such that the adjustment results in the power level of the adjusted signal being different than the power level of one or more other received signals. 32. The electronic device of claim 31, wherein the at least a processor and memory are configured to set the adjustment such that a power level difference between the adjusted signal and one or more other received signals is one of linearly and exponentially spaced. 33. 
The electronic device of claim 31, wherein the at least a processor and memory are configured to set the adjustment such that a power level difference between groups of signals exists to aid a group detector. 34. The electronic device of claim 31, wherein the at least a processor and memory are configured to: detect a power level difference between the signals; and decode the signals partially or fully based on the detected power level difference. 35. The electronic device of claim 19, wherein the signals are received at one or more of a different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate, wherein the at least a processor and memory are configured to: detect the one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate; and decode the signals comprises decoding the signals based on the detected one or more of the different angle of arrival, channel, transmit symbol timing, modulation rate, and coding rate. 36. An electronic device comprising: at least a processor and memory configured to: receive a first set of signals from a plurality of electronic devices, wherein the signals are transmitted at the same time and on substantially the same frequency or an overlapping frequency; receive a second set of signals from a plurality of electronic devices, wherein the second set of signals are transmitted on a different frequency than the first set of signals; estimate the parameters of at least one of the signals of the first set of signals; estimate the parameters of at least one of the signals of the second set of signals; and instruct an electronic device from the first plurality of electronic devices to transmit on the frequency of the second set of signals if at least one parameter of a signal from the electronic device from the first plurality of electronic devices is more diverse with those of the second set of signals than with those of the first set of signals. 37. 
A non-transitory computer readable medium storing a computer program, executable by a machine, for creating non-orthogonal dimensionality between signals, the computer program comprising executable instructions for: receiving signals from at least one electronic device; determining an adjustment of a parameter of a received signal of a first device of the at least one of the electronic device that would result in an adjusted signal that is not orthogonal but differentiates the signal from at least one other signal of the received signals; and communicating, to the first device, an instruction to implement the adjustment of the parameter. 38. A non-transitory computer readable medium storing a computer program, executable by a machine, for creating non-orthogonal dimensionality between signals, the computer program comprising executable instructions for: receiving a first set of signals from a first plurality of electronic devices, wherein the signals are transmitted at the same time and on substantially the same frequency or an overlapping frequency; receiving a second set of signals from a second plurality of electronic devices, wherein the second set of signals are transmitted on a different frequency than the first set of signals; estimating the parameters of at least one of the signals of the first set of signals; estimating the parameters of at least one of the signals of the second set of signals; and instructing an electronic device from the first plurality of electronic devices to transmit on the frequency of the second set of signals if at least one parameter of a signal from the electronic device from the first plurality of electronic devices is more diverse with those of the second set of signals than with those of the first set of signals.
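The linearly spaced symbol phase offsets recited in claim 27 lend themselves to a short numerical sketch. The function below is purely illustrative of that spacing rule, not the claimed receiver: the function name, the phase bounds, and the idea of returning per-device adjustment deltas are all assumptions.

```python
import math

def linearly_spaced_phase_offsets(current_phases, low=0.0, high=math.pi):
    """Compute the per-device adjustment that leaves the adjusted symbol
    phases linearly spaced between `low` and `high` as seen by the
    receiver (the spacing rule of claim 27; the bounds are illustrative)."""
    n = len(current_phases)
    step = (high - low) / n
    targets = [low + i * step for i in range(n)]
    # Each delta is what the receiver would instruct a device to apply.
    return [(t - p) % (2 * math.pi) for p, t in zip(current_phases, targets)]

# Three users transmitting with identical symbol phase:
phases = [0.0, 0.0, 0.0]
adjust = linearly_spaced_phase_offsets(phases)
adjusted = [(p + a) % (2 * math.pi) for p, a in zip(phases, adjust)]
# Adjacent adjusted phases now differ by pi/3: the signals remain
# non-orthogonal but are differentiated for a multi-user detector.
```

The same pattern would extend to the power-level spacing of claims 31 and 32 by substituting transmit power levels for phases.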
TechCenter: 2,600
Unnamed: 0: 9,841
level_0: 9,841
ApplicationNumber: 15,458,761
ArtUnit: 2,684
An apparatus may include a housing defining a compartment, wherein the housing is configured to accept at least one item within the compartment. The apparatus may also include a member coupled with the housing, wherein the member defines a container, wherein the member is configured to provide access to the compartment in a first state, and wherein the member is further configured to restrict access to the compartment in a second state. The apparatus may further include an electronic lock disposed at least partially within the container, wherein the electronic lock is configured to limit relative movement between the member and the housing in the second state. The apparatus may also include an interface routed between the compartment and the container, wherein the interface is configured to allow control of the electronic lock.
1. An apparatus comprising: a housing defining a compartment, wherein said housing is configured to accept at least one item within said compartment; a member coupled with said housing, wherein said member defines a container, wherein said member is configured to provide access to said compartment in a first state, and wherein said member is further configured to restrict access to said compartment in a second state; an electronic lock disposed at least partially within said container, wherein said electronic lock is configured to limit relative movement between said member and said housing in said second state; and an interface routed between said compartment and said container, wherein said interface is configured to allow control of said electronic lock. 2. The apparatus of claim 1 further comprising: circuitry disposed at least partially within said container, and wherein said circuitry is configured to control said electronic lock to implement said first and second states. 3. The apparatus of claim 2, wherein said circuitry is configured to control said electronic lock using an authentication mechanism selected from a group consisting of biometric authentication, SMS authentication, authentication based on a digital signature that is unique to said electronic lock, authentication based on a digital signature that is unique to said apparatus, authentication based on an electronic key, and authentication based on an electronic password. 4. The apparatus of claim 2, wherein said interface is configured to supply power to said circuitry. 5. The apparatus of claim 2, wherein said circuitry includes a communication interface configured to establish a wireless connection between said circuitry and a computer system separate from said apparatus. 6. The apparatus of claim 2, wherein said interface is configured to establish a connection between said circuitry and a computer system separate from said apparatus. 7. 
The apparatus of claim 6, wherein said connection is a wired connection selected from a group consisting of an Ethernet connection, a RS-485 connection, and a RS-232 connection. 8. The apparatus of claim 6 further comprising: a second interface disposed at least partially within said container, and wherein said circuitry is configured to control said electronic lock using said second interface. 9. The apparatus of claim 8 further comprising: at least one electronic component disposed at least partially within said container; and a third interface disposed at least partially within said container, and wherein said circuitry is configured to control said at least one electronic component using said third interface. 10. The apparatus of claim 9, wherein said at least one electronic component is selected from a group consisting of a visual output device and an audio output device. 11. The apparatus of claim 6, wherein said circuitry includes a processor, and further comprising: at least one sensor configured to monitor a parameter associated with said apparatus, and wherein said processor is configured to communicate, via said interface over said connection, information associated with said parameter to said computer system. 12. The apparatus of claim 11, wherein said parameter is selected from a group consisting of vibration, an opening of said member with respect to said housing, a closing of said member with respect to said housing, and an electrical failure. 13. The apparatus of claim 6, wherein said circuitry includes a processor, and further comprising: at least one camera disposed within said compartment, wherein said at least one camera is configured to capture at least one image of said at least one item, and wherein said processor is configured to communicate, via said interface over said connection, said at least one image to said computer system. 14. 
The apparatus of claim 13 further comprising: a cable electrically coupled to said circuitry and said at least one camera, wherein said cable is routed between said compartment and said container. 15. The apparatus of claim 13, wherein said at least one camera is positioned toward a ceiling of said housing. 16. The apparatus of claim 13 further comprising: at least one light source disposed within said compartment, wherein said at least one light source is configured to illuminate said at least one item. 17. The apparatus of claim 2, wherein said circuitry includes at least one component selected from a group consisting of a processor, a memory, a communication interface, a sensor interface, an output device interface, a camera controller, a provisioning controller, a cryptography processor, and a power interface. 18. The apparatus of claim 17, wherein said processor is selected from a group consisting of a controller, a microcontroller, a microprocessor, a central processing unit, and a main processor. 19. The apparatus of claim 2, wherein said circuitry is selected from a group consisting of a panel, a control board, an electronic main board, and a motherboard. 20. The apparatus of claim 1, wherein said member is configured to implement a door. 21. The apparatus of claim 1 further comprising: a base coupled with said member, wherein said base is configured to accept said at least one item, and wherein said base and said member are configured to implement a drawer; and at least one channel coupled with said housing and said base, wherein said at least one channel is configured to allow said member and said base to move with respect to said housing. 22. The apparatus of claim 1, wherein said electronic lock includes a component selected from a group consisting of an electromagnetic lock and a solenoid.
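The two-state lock control recited in claims 1 through 3 can be sketched as a small state machine. This is a minimal sketch under assumptions: the class name, the electronic-key store, and the boolean state flag are inventions for illustration, and only one of claim 3's listed authentication mechanisms (an electronic key) is modeled.

```python
class ElectronicLock:
    """Sketch of the claimed circuitry driving the lock between the
    first (access granted) and second (access restricted) states."""

    def __init__(self, authorized_keys):
        self._authorized = set(authorized_keys)  # assumed key store
        self.locked = True  # second state: access to compartment restricted

    def unlock(self, electronic_key):
        # Claim 3 lists several mechanisms; an electronic key is one option.
        if electronic_key in self._authorized:
            self.locked = False  # first state: access to compartment
            return True
        return False

    def lock(self):
        self.locked = True


lock = ElectronicLock({"key-123"})
lock.unlock("wrong-key")  # authentication fails; stays in second state
lock.unlock("key-123")    # transitions to the first (unlocked) state
```

In the claimed apparatus this logic would run on circuitry inside the container and actuate the lock over the routed interface; here both are abstracted into the class.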
TechCenter: 2,600
Unnamed: 0: 9,842
level_0: 9,842
ApplicationNumber: 14,913,462
ArtUnit: 2,674
Examples disclosed herein provide for the handling of a print job for a printer via a mobile device. The mobile device receives a notification from a computing device to indicate the print job is available for printing. Upon the mobile device joining a wireless network associated with the printer, the mobile device delivers the print job from the mobile device to the printer via the wireless network.
1. A method to handle a print job for a printer via a mobile device, the method comprising: receiving, with the mobile device, a notification from a computing device to indicate the print job is available for printing; and upon the mobile device joining a wireless network associated with the printer, automatically delivering the print job from the mobile device to the printer via the wireless network. 2. The method of claim 1, comprising: downloading, with the mobile device, the print job wirelessly from the computing device to deliver the print job to the printer upon joining the wireless network. 3. The method of claim 1, comprising: displaying, on the mobile device, a list of one or more printers associated with the wireless network; upon a selection of the printer from the list, registering the selected printer as a default printer for printing print jobs; and sending, from the mobile device, capabilities of the registered printer to be stored in a database associated with the computing device. 4. The method of claim 3, wherein the print job is rendered by the computing device according to the capabilities of the registered printer retrieved from the database. 5. The method of claim 3, comprising: receiving, with the mobile device, a notification from the computing device for another print job available for printing; upon selection of another printer from the list of printers associated with the wireless network, sending, from the mobile device, capabilities of the other selected printer to the computing device, wherein the other print job is rendered by the computing device according to the capabilities of the other selected printer; and automatically delivering the rendered print job wirelessly from the mobile device to the other printer for printing. 6. 
A mobile device to handle a print job for a printer, the mobile device comprising: a register engine to register the mobile device with a computing device to receive printable content from the computing device via the Internet; a receive engine to receive a notification from the computing device to indicate the print job is available for printing; and upon the mobile device joining a wireless network associated with the printer, a delivery engine to deliver, via the wireless network, the print job to the printer without user intervention. 7. The mobile device of claim 6, wherein an email address is assigned by the computing device to the mobile device upon registration and the print job is a document emailed as an attachment to the email address. 8. The mobile device of claim 6, comprising: a download engine to download the print job wirelessly from the computing device to deliver the print job to the printer upon joining the wireless network. 9. The mobile device of claim 6, comprising: a display engine to display a list of one or more printers associated with the wireless network; upon a selection of the printer from the list, an associate engine to register the selected printer as a default printer for printing print jobs; and a transmit engine to send capabilities of the registered printer to be stored in a database associated with the computing device. 10. The mobile device of claim 9, wherein the print job is rendered by the computing device according to the capabilities of the registered printer retrieved from the database. 11. 
The mobile device of claim 9, comprising: the receive engine to receive a notification from the computing device for another print job available for printing; upon selection of another printer from the list of printers associated with the wireless network, the transmit engine to send capabilities of the other selected printer to the computing device, wherein the other print job is rendered by the computing device according to the capabilities of the other selected printer; and the delivery engine to automatically deliver the rendered print job to the other printer for printing. 12. A non-transitory memory resource storing instructions that when executed cause a processing resource to implement a system to handle a print job for a printer via a mobile device, the instructions comprising: a receive module to receive a notification from a computing device to indicate the print job is available for printing; and upon joining a wireless network associated with the printer, a delivery module to automatically deliver the print job wirelessly to the printer for printing. 13. The non-transitory memory resource of claim 12, the instructions comprising: a download module to download the print job wirelessly from the computing device to deliver the print job to the printer upon joining the wireless network. 14. The non-transitory memory resource of claim 12, the instructions comprising: a display module to display a list of one or more printers associated with the wireless network; upon a selection of the printer from the list, an associate module to register the selected printer as a default printer for printing print jobs; and a transmit module to send capabilities of the registered printer to be stored in a database associated with the computing device. 15. The non-transitory memory resource of claim 14, wherein the print job is rendered by the computing device according to the capabilities of the registered printer retrieved from the database.
TechCenter: 2,600
Unnamed: 0: 9,843
level_0: 9,843
ApplicationNumber: 14,641,599
ArtUnit: 2,651
A service for automatically connecting an individual's telephony device to a conference call when the conference call is about to begin is provided by a telephony system. The telephony system obtains information about a scheduled conference call directly from the individual's electronic calendar. When it is time for the conference call to begin, the telephony system dials into the conference call bridge and automatically provides the information needed to access the conference call. The telephony system then connects the individual's telephony device to the conference call.
1. A method for connecting a user's telephony device to a conference call, comprising: obtaining, using one or more processors, information about a scheduled conference call from an individual's electronic calendar, wherein the information includes at least an access identifier that is to be used to access the conference call; and setting up a communications channel between the conference call and a telephony device used by the individual at a time that is based upon a start time of the scheduled conference call. 2. The method of claim 1, wherein setting up a communications channel between the conference call and a telephony device used by the individual comprises: setting up a first communications channel between the telephony system and the telephony device used by the individual; and setting up a second communications channel between a telephony system and a conference call bridge for the scheduled conference call using the access identifier; bridging together the first and second communications channels to create the communications channel between the conference call and the telephony device used by the individual. 3. The method of claim 2, wherein obtaining information about the scheduled conference call further comprises obtaining an access code that is to be used to access the conference call bridge from the individual's electronic calendar, and wherein setting up the communications channel between the conference call and the telephony device used by the individual also comprises automatically providing the access code to the conference call bridge. 4. The method of claim 3, wherein automatically providing the access code to the conference call bridge comprises providing the access code to the conference call bridge once a predetermined period of time has expired after the second communications channel has been established between the telephony system and the conference call bridge. 5. 
The method of claim 3, wherein automatically providing the access code to the conference call bridge comprises: determining when the conference call bridge has asked for the access code; and providing the access code to the conference call bridge after it is determined that the conference call bridge has asked for the access code. 6. The method of claim 3, wherein the step of bridging together the first and second communications channels is performed after the access code has been provided to the conference call bridge. 7. The method of claim 3, further comprising automatically playing an audio recording to the conference call bridge when the conference call bridge asks for a participant to speak their name. 8. The method of claim 7, wherein the step of bridging together the first and second communications channels is performed after the audio recording has been played to the conference call bridge. 9. The method of claim 3, further comprising: determining when the conference call bridge has asked for a participant to speak their name; and playing an audio recording to the conference call bridge after it is determined that the conference call bridge has asked for a participant to speak their name. 10. The method of claim 2, further comprising: determining that the communications channel between the conference call and the telephony device used by the individual has been terminated; receiving an incoming telephone call from the telephony device used by the individual; and reestablishing a communications channel between the conference call and the telephony device used by the individual. 11. 
The method of claim 10, wherein reestablishing the communications channel between the conference call and the telephony device used by the individual comprises: determining whether the received incoming telephone call is from the telephony device used by the individual using caller ID information associated with the incoming telephone call; and reestablishing the communications channel between the conference call and the telephony device used by the individual when it is determined that the received incoming telephone call is from the telephony device used by the individual. 12. The method of claim 10, wherein reestablishing the communications channel between the conference call and the telephony device used by the individual comprises: setting up a new communications channel between the telephony system and the conference call bridge using the access identifier; and bridging together the received telephone call from the telephony device used by the individual and the new communications channel to reestablish a communications channel between the conference call and the telephony device used by the individual. 13. The method of claim 2, wherein setting up a second communications channel between a telephony system and a conference call bridge comprises: determining an identity of the conference call bridge service provider; and responding to questions posed by the conference call bridge based on profile information for the determined conference call bridge service provider. 14. The method of claim 2, wherein setting up a first communications channel between the telephony system and the telephony device used by the individual comprises causing information related to the conference call to be played to the individual via the telephony device used by the individual. 15. 
The method of claim 2, wherein setting up a first communications channel between the telephony system and the telephony device used by the individual comprises using profile information related to the individual to determine how to set up the communications channel between the telephony system and the telephony device used by the individual. 16. The method of claim 1, wherein obtaining information about a scheduled conference call comprises receiving a communication that includes information about the conference call. 17. The method of claim 1, wherein obtaining information about a scheduled conference call comprises: obtaining authorization from the individual to access information in the individual's electronic calendar; and periodically accessing the individual's electronic calendar to obtain information about one or more scheduled conference calls that is present in the individual's electronic calendar. 18. The method of claim 17, wherein periodically accessing the individual's electronic calendar to obtain information about one or more scheduled conference calls comprises identifying information in the individual's electronic calendar that conforms to a predetermined format that is used to store information about scheduled conference calls. 19. A system for connecting a user's telephony device to a conference call, comprising: means for obtaining, using one or more processors, information about a scheduled conference call from an individual's electronic calendar, wherein the information includes at least an access identifier that is to be used to access the conference call; and means for setting up a communications channel between the conference call and a telephony device used by the individual at a time that is based upon a start time of the scheduled conference call. 20. 
A system for connecting a user's telephony device to a conference call, comprising: an electronic calendar interface that obtains, using one or more processors, information about a scheduled conference call from an individual's electronic calendar, wherein the information includes at least an access identifier that is to be used to access the conference call; and a communication channel setup unit that sets up a communications channel between the conference call and a telephony device used by the individual at a time that is based upon a start time of the scheduled conference call.
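The claim set above describes extracting dial-in details from a calendar entry (claims 17-18), dialing the bridge, supplying the access code, and bridging the two call legs (claims 2-6). As a rough illustrative sketch only, not the patented implementation: the calendar-entry format, the regex, and the `telephony` API (`dial`, `send_dtmf`, `bridge`) are all hypothetical stand-ins.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConferenceInfo:
    dial_in: str       # access identifier (the bridge's phone number)
    access_code: str   # code supplied to the bridge, e.g. via DTMF

# Hypothetical "predetermined format" for calendar entries (claim 18),
# e.g. "Dial-in: +1-555-0100 Code: 123456#"
ENTRY_PATTERN = re.compile(
    r"Dial-in:\s*(?P<dial_in>[+\d\-]+)\s+Code:\s*(?P<code>\d+#?)"
)

def parse_calendar_entry(body: str) -> Optional[ConferenceInfo]:
    """Identify conference-call information in a calendar entry body."""
    m = ENTRY_PATTERN.search(body)
    if m is None:
        return None
    return ConferenceInfo(m.group("dial_in"), m.group("code"))

def join_conference(info: ConferenceInfo, user_number: str, telephony) -> None:
    """Bridge the user's device to the conference (claims 2-6).

    `telephony` stands in for some telephony-system API; its methods
    are assumptions for illustration only.
    """
    leg_to_user = telephony.dial(user_number)             # first channel
    leg_to_bridge = telephony.dial(info.dial_in)          # second channel
    telephony.send_dtmf(leg_to_bridge, info.access_code)  # provide access code
    telephony.bridge(leg_to_user, leg_to_bridge)          # join the legs
```

A real system would also handle the reconnect path of claims 10-12 (matching an incoming call's caller ID and re-bridging), which is omitted here.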
A service for automatically connecting an individual's telephony device to a conference call when the conference call is about to begin is provided by a telephony system. The telephony system obtains information about a scheduled conference call directly from the individual's electronic calendar. When it is time for the conference call to begin, the telephony system dials into the conference call bridge and automatically provides the information needed to access the conference call. The telephony system then connects the individual's telephony device to the conference call.1. A method for connecting a user's telephony device to a conference call, comprising: obtaining, using one or more processors, information about a scheduled conference call from an individual's electronic calendar, wherein the information includes at least an access identifier that is to be used to access the conference call; and setting up a communications channel between the conference call and a telephony device used by the individual at a time that is based upon a start time of the scheduled conference call. 2. The method of claim 1, wherein setting up a communications channel between the conference call and a telephony device used by the individual comprises: setting up a first communications channel between the telephony system and the telephony device used by the individual; and setting up a second communications channel between a telephony system and a conference call bridge for the scheduled conference call using the access identifier; bridging together the first and second communications channels to create the communications channel between the conference call and the telephony device used by the individual. 3. 
The method of claim 2, wherein obtaining information about the scheduled conference call further comprises obtaining an access code that is to be used to access the conference call bridge from the individual's electronic calendar, and wherein setting up the communications channel between the conference call and the telephony device used by the individual also comprises automatically providing the access code to the conference call bridge. 4. The method of claim 3, wherein automatically providing the access code to the conference call bridge comprises providing the access code to the conference call bridge once a predetermined period of time has expired after the second communications channel has been established between the telephony system and the conference call bridge. 5. The method of claim 3, wherein automatically providing the access code to the conference call bridge comprises: determining when the conference call bridge has asked for the access code; and providing the access code to the conference call bridge after it is determined that the conference call bridge has asked for the access code. 6. The method of claim 3, wherein the step of bridging together the first and second communications channels is performed after the access code has been provided to the conference call bridge. 7. The method of claim 3, further comprising automatically playing an audio recording to the conference call bridge when the conference call bridge asks for a participant to speak their name. 8. The method of claim 7, wherein the step of bridging together the first and second communications channels is performed after the audio recording has been played to the conference call bridge. 9. 
The method of claim 3, further comprising: determining when the conference call bridge has asked for a participant to speak their name; and playing an audio recording to the conference call bridge after it is determined that the conference call bridge has asked for a participant to speak their name. 10. The method of claim 2, further comprising: determining that the communications channel between the conference call and the telephony device used by the individual has been terminated; receiving an incoming telephone call from the telephony device used by the individual; and reestablishing a communications channel between the conference call and the telephony device used by the individual. 11. The method of claim 10, wherein reestablishing the communications channel between the conference call and the telephony device used by the individual comprises: determining whether the received incoming telephone call is from the telephony device used by the individual using caller ID information associated with the incoming telephone call; and reestablishing the communications channel between the conference call and the telephony device used by the individual when it is determined that the received incoming telephone call is from the telephony device used by the individual. 12. The method of claim 10, wherein reestablishing the communications channel between the conference call and the telephony device used by the individual comprises: setting up a new communications channel between the telephony system and the conference call bridge using the access identifier; and bridging together the received telephone call from the telephony device used by the individual and the new communications channel to reestablish a communications channel between the conference call and the telephony device used by the individual. 13. 
The method of claim 2, wherein setting up a second communications channel between a telephony system and a conference call bridge comprises: determining an identity of the conference call bridge service provider; and responding to questions posed by the conference call bridge based on profile information for the determined conference call bridge service provider. 14. The method of claim 2, wherein setting up a first communications channel between the telephony system and the telephony device used by the individual comprises causing information related to the conference call to be played to the individual via the telephony device used by the individual. 15. The method of claim 2, wherein setting up a first communications channel between the telephony system and the telephony device used by the individual comprises using profile information related to the individual to determine how to set up the communications channel between the telephony system and the telephony device used by the individual. 16. The method of claim 1, wherein obtaining information about a scheduled conference call comprises receiving a communication that includes information about the conference call. 17. The method of claim 1, wherein obtaining information about a scheduled conference call comprises: obtaining authorization from the individual to access information in the individual's electronic calendar; and periodically accessing the individual's electronic calendar to obtain information about one or more scheduled conference calls that is present in the individual's electronic calendar. 18. The method of claim 17, wherein periodically accessing the individual's electronic calendar to obtain information about one or more scheduled conference calls comprises identifying information in the individual's electronic calendar that conforms to a predetermined format that is used to store information about scheduled conference calls. 19. 
A system for connecting a user's telephony device to a conference call, comprising: means for obtaining, using one or more processors, information about a scheduled conference call from an individual's electronic calendar, wherein the information includes at least an access identifier that is to be used to access the conference call; and means for setting up a communications channel between the conference call and a telephony device used by the individual at a time that is based upon a start time of the scheduled conference call. 20. A system for connecting a user's telephony device to a conference call, comprising: an electronic calendar interface that obtains, using one or more processors, information about a scheduled conference call from an individual's electronic calendar, wherein the information includes at least an access identifier that is to be used to access the conference call; and a communication channel setup unit that sets up a communications channel between the conference call and a telephony device used by the individual at a time that is based upon a start time of the scheduled conference call.
2,600
9,844
9,844
15,030,022
2,622
A method for displaying a display element on at least one vehicle-side display device of a vehicle includes: transmitting, from the vehicle to a terminal via a data link, information about a size of the vehicle-side display device; providing, by the terminal, data for displaying the display element, as a function of the transmitted information about the size of the display device; and transmitting, from the terminal to the vehicle via the data link, the data for displaying the display element.
1-10. (canceled) 11. A method for displaying a display element on at least one vehicle-side display device of a vehicle, the method comprising: transmitting, from the vehicle to a terminal via a data link, information about a size of the vehicle-side display device; providing, by the terminal, data for displaying the display element, as a function of the transmitted information about the size of the display device; and transmitting, from the terminal to the vehicle via the data link, the data for displaying the display element. 12. The method as recited in claim 11, wherein: a predetermined density-independent pixel number is assigned to the display element by the terminal; and an absolute pixel number of the display element to be displayed on the vehicle-side display device is determined as a function of the density-independent pixel number and the transmitted information about the size of the display device. 13. The method as recited in claim 12, wherein: the absolute pixel number is determined as a function of the density-independent pixel number, a reference scaling quantity and a resolution/size ratio; and the resolution/size ratio is a ratio between a resolution of a terminal-side display device and the size of the vehicle-side display device. 14. The method as recited in claim 13, wherein in the event a ratio of the width to the height of the terminal-side display device is less than a ratio of the width to the height of the vehicle-side display device, the resolution/size ratio is determined as a ratio between a vertical resolution of the terminal-side display device and the height of the vehicle-side display device. 15. 
The method as recited in claim 13, wherein in the event a ratio of the width to the height of the terminal-side display device is no less than a ratio of the width to the height of the vehicle-side display device, the resolution/size ratio is determined as a ratio between a horizontal resolution of the terminal-side display device and the width of the vehicle-side display device. 16. The method as recited in claim 13, wherein the terminal is a mobile terminal. 17. The method as recited in claim 13, wherein the display element is displayed at the same time on a terminal-side display device and on the vehicle-side display device. 18. The method as recited in claim 17, wherein the display element is displayed on the terminal-side display device with the absolute pixel number of the display element to be displayed on the vehicle-side display device. 19. A system for displaying a display element, comprising: at least one vehicle-side display device of a vehicle, wherein the display element is displayed on the vehicle-side display device; and at least one terminal providing data for displaying the display element; wherein: the terminal is operatively linked to the vehicle via a data link; information about a size of the vehicle-side display device is transmitted from the vehicle to the terminal; the data for displaying the display element are provided by the terminal as a function of the information about the size of the vehicle-side display device; and the data for displaying the display element are transmitted from the terminal to the vehicle for displaying the display element. 20. The system as recited in claim 19, wherein: a predetermined density-independent pixel number is assigned to the display element by the terminal; and an absolute pixel number of the display element to be displayed on the vehicle-side display device is determined as a function of the density-independent pixel number and the transmitted information about the size of the display device.
A method for displaying a display element on at least one vehicle-side display device of a vehicle includes: transmitting, from the vehicle to a terminal via a data link, information about a size of the vehicle-side display device; providing, by the terminal, data for displaying the display element, as a function of the transmitted information about the size of the display device; and transmitting, from the terminal to the vehicle via the data link, the data for displaying the display element.1-10. (canceled) 11. A method for displaying a display element on at least one vehicle-side display device of a vehicle, the method comprising: transmitting, from the vehicle to a terminal via a data link, information about a size of the vehicle-side display device; providing, by the terminal, data for displaying the display element, as a function of the transmitted information about the size of the display device; and transmitting, from the terminal to the vehicle via the data link, the data for displaying the display element. 12. The method as recited in claim 11, wherein: a predetermined density-independent pixel number is assigned to the display element by the terminal; and an absolute pixel number of the display element to be displayed on the vehicle-side display device is determined as a function of the density-independent pixel number and the transmitted information about the size of the display device. 13. The method as recited in claim 12, wherein: the absolute pixel number is determined as a function of the density-independent pixel number, a reference scaling quantity and a resolution/size ratio; and the resolution/size ratio is a ratio between a resolution of a terminal-side display device and the size of the vehicle-side display device. 14. 
The method as recited in claim 13, wherein in the event a ratio of the width to the height of the terminal-side display device is less than a ratio of the width to the height of the vehicle-side display device, the resolution/size ratio is determined as a ratio between a vertical resolution of the terminal-side display device and the height of the vehicle-side display device. 15. The method as recited in claim 13, wherein in the event a ratio of the width to the height of the terminal-side display device is no less than a ratio of the width to the height of the vehicle-side display device, the resolution/size ratio is determined as a ratio between a horizontal resolution of the terminal-side display device and the width of the vehicle-side display device. 16. The method as recited in claim 13, wherein the terminal is a mobile terminal. 17. The method as recited in claim 13, wherein the display element is displayed at the same time on a terminal-side display device and on the vehicle-side display device. 18. The method as recited in claim 17, wherein the display element is displayed on the terminal-side display device with the absolute pixel number of the display element to be displayed on the vehicle-side display device. 19. A system for displaying a display element, comprising: at least one vehicle-side display device of a vehicle, wherein the display element is displayed on the vehicle-side display device; and at least one terminal providing data for displaying the display element; wherein: the terminal is operatively linked to the vehicle via a data link; information about a size of the vehicle-side display device is transmitted from the vehicle to the terminal; the data for displaying the display element are provided by the terminal as a function of the information about the size of the vehicle-side display device; and the data for displaying the display element are transmitted from the terminal to the vehicle for displaying the display element. 20. 
The system as recited in claim 19, wherein: a predetermined density-independent pixel number is assigned to the display element by the terminal; and an absolute pixel number of the display element to be displayed on the vehicle-side display device is determined as a function of the density-independent pixel number and the transmitted information about the size of the display device.
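Claims 12-15 above describe converting a density-independent pixel number into an absolute pixel count using a reference scaling quantity and a resolution/size ratio, where the ratio is taken vertically or horizontally depending on how the two displays' aspect ratios compare. The sketch below is one plausible reading of that computation, not the patented formula; in particular, using the terminal's pixel resolution as a proxy for its width/height aspect ratio is an assumption.

```python
def resolution_size_ratio(term_res, veh_size):
    """Choose the resolution/size ratio per claims 14-15.

    term_res: (horizontal_px, vertical_px) of the terminal-side display
    veh_size: (width, height) of the vehicle-side display, e.g. in mm
    """
    term_w, term_h = term_res
    veh_w, veh_h = veh_size
    # If the terminal is narrower (width/height) than the vehicle display,
    # use vertical resolution / vehicle height; otherwise horizontal / width.
    if (term_w / term_h) < (veh_w / veh_h):
        return term_h / veh_h
    return term_w / veh_w

def absolute_pixels(dp, ratio, reference_scaling=1.0):
    """One plausible reading of claim 13: scale the density-independent
    pixel number by the resolution/size ratio relative to a reference
    scaling quantity (here defaulted to 1.0 as an assumption)."""
    return round(dp * ratio / reference_scaling)
```

For example, a 1920x1080 phone paired with a 400x150 mm vehicle display has a narrower aspect ratio than the vehicle screen, so the vertical branch applies and the ratio is 1080/150 = 7.2 pixels per millimetre.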
2,600
9,845
9,845
15,277,145
2,659
A mixed-language translator apparatus may include a language estimator to estimate two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages, a language preference detector to determine a language preference of an intended recipient for the mixed-language message, and a translator communicatively coupled to the language estimator and the language preference detector to translate the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. Other embodiments are disclosed and claimed.
1. A mixed-language communication system, comprising: a communication interface to send and receive electronic representations of messages between at least two users; a message composer to compose an electronic representation of a message, including a mixed-language message including a mix of at least two languages; and a mixed-language translator communicatively coupled to the communication interface and the message composer, the mixed-language translator including: a language estimator to estimate two or more languages of at least two message fragments in one or more of a received mixed-language message from the communication interface or a composed mixed-language message from the message composer; a language preference detector to determine a language preference of one or more intended recipients; and a translator communicatively coupled to the language estimator and the language preference detector to translate one or more of the composed mixed-language message from the message composer or the received mixed-language message from the communication interface based on the estimated two or more languages of the at least two message fragments and the language preference of the one or more intended recipients. 2. The mixed-language communication system of claim 1, wherein the language preference detector is further to determine a language preference of a composer of the composed mixed-language message, and wherein the language estimator is further to estimate the two or more languages of the at least two message fragments in one or more of the composed message or the received message based on one or more of the language preference of the composer or the language preference of the one or more intended recipients. 3. 
The mixed-language communication system of claim 1, wherein the language preference detector is further to determine the language preference of the one or more intended recipients based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the one or more intended recipients, a prior communication, a local file analysis, or a location indicator. 4. A mixed-language translator apparatus, comprising: a language estimator to estimate two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages; a language preference detector to determine a language preference of an intended recipient for the mixed-language message; and a translator communicatively coupled to the language estimator and the language preference detector to translate the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. 5. The mixed-language translator apparatus of claim 4, wherein the language estimator is further to identify lingual boundaries in the mixed-language message. 6. The mixed-language translator apparatus of claim 4, wherein the language estimator is further to assign respective confidence levels to the estimated two or more languages of the at least two message fragments. 7. The mixed-language translator apparatus of claim 6, wherein the language preference detector is further to determine a language preference of a composer of the mixed-language message and wherein the language estimator is further to assign the respective confidence levels based at least in part on one or more of the language preference of the composer or the language preference of the intended recipient. 8. 
The mixed-language translator apparatus of claim 4, wherein the language preference detector is further to determine the language preference of one or more of the intended recipient or a composer based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the intended recipient, a prior translation, a local file analysis, or a location indicator. 9. The mixed-language translator apparatus of claim 4, wherein the translator is further to translate the mixed-language message based at least in part on one or more of the assigned confidence levels, an idiomatic analysis, a semantic analysis, a contextual probability of an appropriate translation, or a language skill level of one or more of the composer or the intended recipient. 10. The mixed-language translator apparatus of claim 4, wherein the translator is further to translate the mixed-language message based at least in part on a language preference database associated with the intended recipient which maps source original languages to target translation languages. 11. A method of mixed-language translation, comprising: estimating two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages; determining a language preference of an intended recipient for the mixed-language message; and translating the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. 12. The method of mixed-language translation of claim 11, further comprising: identifying lingual boundaries in the mixed-language message. 13. The method of mixed-language translation of claim 11, further comprising: assigning respective confidence levels to the estimated two or more languages of the at least two message fragments. 14. 
The method of mixed-language translation of claim 13, further comprising: determining a language preference of a composer of the mixed-language message; and assigning the respective confidence levels based at least in part on one or more of the language preference of the composer or the language preference of the intended recipient. 15. The method of mixed-language translation of claim 11, further comprising: determining the language preference of one or more of the intended recipient or a composer based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the intended recipient, a prior translation, a local file analysis, or a location indicator. 16. The method of mixed-language translation of claim 11, further comprising: translating the mixed-language message based at least in part on one or more of the assigned confidence levels, an idiomatic analysis, a semantic analysis, a contextual probability of an appropriate translation, or a language skill level of one or more of the composer or the intended recipient. 17. The method of mixed-language translation of claim 11, further comprising: translating the mixed-language message based at least in part on a language preference database associated with the intended recipient which maps source original languages to target translation languages. 18. At least one computer readable medium comprising a set of instructions, which when executed by a computing device, cause the computing device to: estimate two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages; determine a language preference of an intended recipient for the mixed-language message; and translate the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. 19. 
The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: identify lingual boundaries in the mixed-language message. 20. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: assign respective confidence levels to the estimated two or more languages of the at least two message fragments. 21. The at least one computer readable medium of claim 20, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: determine a language preference of a composer of the mixed-language message; and assign the respective confidence levels based at least in part on one or more of the language preference of the composer or the language preference of the intended recipient. 22. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: determine the language preference of one or more of the intended recipient or a composer based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the intended recipient, a prior translation, a local file analysis, or a location indicator. 23. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: translate the mixed-language message based at least in part on one or more of the assigned confidence levels, an idiomatic analysis, a semantic analysis, a contextual probability of an appropriate translation, or a language skill level of one or more of the composer or the intended recipient. 24. 
The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: translate the mixed-language message based at least in part on a language preference database associated with the intended recipient which maps source original languages to target translation languages.
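Method claims 11-17 above reduce to a simple pipeline: estimate a language (with a confidence level) for each message fragment, look up the recipient's language preference, and translate only the fragments whose estimated language differs from it. The toy sketch below illustrates that flow under loud assumptions: the keyword-lookup "estimator" stands in for a real language-identification model, and `translate` is an injected stand-in for any translation backend.

```python
# Toy language estimator: keyword lookup stands in for a real model.
# These word lists are illustrative assumptions, not part of the claims.
KEYWORDS = {
    "en": {"hello", "the", "meeting", "team"},
    "es": {"hola", "amigo", "reunion"},
}

def estimate_language(fragment):
    """Assign a language and a confidence level to one fragment (claims 11, 13)."""
    words = set(fragment.lower().split())
    best, hits = "en", 0
    for lang, vocab in KEYWORDS.items():
        n = len(words & vocab)
        if n > hits:
            best, hits = lang, n
    confidence = hits / max(len(words), 1)
    return best, confidence

def translate_mixed(fragments, recipient_preference, translate):
    """Translate each fragment whose estimated language differs from the
    recipient's preference (claim 11). `translate(text, src, dst)` is a
    hypothetical hook for a translation backend."""
    out = []
    for text in fragments:
        lang, _conf = estimate_language(text)
        if lang == recipient_preference:
            out.append(text)
        else:
            out.append(translate(text, lang, recipient_preference))
    return " ".join(out)
```

A fuller implementation would also weight the confidence levels by the composer's and recipient's known preferences (claims 13-15) and consult a per-recipient database mapping source languages to target languages (claim 17).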
A mixed-language translator apparatus may include a language estimator to estimate two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages, a language preference detector to determine a language preference of an intended recipient for the mixed-language message, and a translator communicatively coupled to the language estimator and the language preference detector to translate the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. Other embodiments are disclosed and claimed.1. A mixed-language communication system, comprising: a communication interface to send and receive electronic representations of messages between at least two users; a message composer to compose an electronic representation of a message, including a mixed-language message including a mix of at least two languages; and a mixed-language translator communicatively coupled to the communication interface and the message composer, the mixed-language translator including: a language estimator to estimate two or more languages of at least two message fragments in one or more of a received mixed-language message from the communication interface or a composed mixed-language message from the message composer; a language preference detector to determine a language preference of one or more intended recipients; and a translator communicatively coupled to the language estimator and the language preference detector to translate one or more of the composed mixed-language message from the message composer or the received mixed-language message from the communication interface based on the estimated two or more languages of the at least two message fragments and the language preference of the one or more intended recipients. 2. 
The mixed-language communication system of claim 1, wherein the language preference detector is further to determine a language preference of a composer of the composed mixed-language message, and wherein the language estimator is further to estimate the two or more languages of the at least two message fragments in one or more of the composed message or the received message based on one or more of the language preference of the composer or the language preference of the one or more intended recipients. 3. The mixed-language communication system of claim 1, wherein the language preference detector is further to determine the language preference of the one or more intended recipients based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the one or more intended recipients, a prior communication, a local file analysis, or a location indicator. 4. A mixed-language translator apparatus, comprising: a language estimator to estimate two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages; a language preference detector to determine a language preference of an intended recipient for the mixed-language message; and a translator communicatively coupled to the language estimator and the language preference detector to translate the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. 5. The mixed-language translator apparatus of claim 4, wherein the language estimator is further to identify lingual boundaries in the mixed-language message. 6. The mixed-language translator apparatus of claim 4, wherein the language estimator is further to assign respective confidence levels to the estimated two or more languages of the at least two message fragments. 7. 
The mixed-language translator apparatus of claim 6, wherein the language preference detector is further to determine a language preference of a composer of the mixed-language message and wherein the language estimator is further to assign the respective confidence levels based at least in part on one or more of the language preference of the composer or the language preference of the intended recipient. 8. The mixed-language translator apparatus of claim 4, wherein the language preference detector is further to determine the language preference of one or more of the intended recipient or a composer based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the intended recipient, a prior translation, a local file analysis, or a location indicator. 9. The mixed-language translator apparatus of claim 4, wherein the translator is further to translate the mixed-language message based at least in part on one or more of the assigned confidence levels, an idiomatic analysis, a semantic analysis, a contextual probability of an appropriate translation, or a language skill level of one or more of the composer or the intended recipient. 10. The mixed-language translator apparatus of claim 4, wherein the translator is further to translate the mixed-language message based at least in part on a language preference database associated with the intended recipient which maps source original languages to target translation languages. 11. 
A method of mixed-language translation, comprising: estimating two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages; determining a language preference of an intended recipient for the mixed-language message; and translating the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. 12. The method of mixed-language translation of claim 11, further comprising: identifying lingual boundaries in the mixed-language message. 13. The method of mixed-language translation of claim 11, further comprising: assigning respective confidence levels to the estimated two or more languages of the at least two message fragments. 14. The method of mixed-language translation of claim 13, further comprising: determining a language preference of a composer of the mixed-language message; and assigning the respective confidence levels based at least in part on one or more of the language preference of the composer or the language preference of the intended recipient. 15. The method of mixed-language translation of claim 11, further comprising: determining the language preference of one or more of the intended recipient or a composer based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the intended recipient, a prior translation, a local file analysis, or a location indicator. 16. The method of mixed-language translation of claim 11, further comprising: translating the mixed-language message based at least in part on one or more of the assigned confidence levels, an idiomatic analysis, a semantic analysis, a contextual probability of an appropriate translation, or a language skill level of one or more of the composer or the intended recipient. 17. 
The method of mixed-language translation of claim 11, further comprising: translating the mixed-language message based at least in part on a language preference database associated with the intended recipient which maps source original languages to target translation languages. 18. At least one computer readable medium comprising a set of instructions, which when executed by a computing device, cause the computing device to: estimate two or more languages of at least two message fragments in an electronic representation of a mixed-language message including at least two languages; determine a language preference of an intended recipient for the mixed-language message; and translate the mixed-language message based on the estimated two or more languages of the at least two message fragments and the language preference of the intended recipient. 19. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: identify lingual boundaries in the mixed-language message. 20. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: assign respective confidence levels to the estimated two or more languages of the at least two message fragments. 21. The at least one computer readable medium of claim 20, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: determine a language preference of a composer of the mixed-language message; and assign the respective confidence levels based at least in part on one or more of the language preference of the composer or the language preference of the intended recipient. 22. 
The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: determine the language preference of one or more of the intended recipient or a composer based on stored information including one or more of a pre-identified language preference associated with one or more of the composer or the intended recipient, a prior translation, a local file analysis, or a location indicator. 23. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: translate the mixed-language message based at least in part on one or more of the assigned confidence levels, an idiomatic analysis, a semantic analysis, a contextual probability of an appropriate translation, or a language skill level of one or more of the composer or the intended recipient. 24. The at least one computer readable medium of claim 18, comprising a further set of instructions, which when executed by a computing device, cause the computing device to: translate the mixed-language message based at least in part on a language preference database associated with the intended recipient which maps source original languages to target translation languages.
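The claims above describe estimating per-fragment languages with confidence levels and translating according to a recipient preference database that maps source languages to target languages. A minimal, hypothetical sketch of that flow follows; the word-level language hints and translation table are toy stand-ins (not any real estimator or translator backend), and all names are illustrative.

```python
# Hypothetical sketch of the claimed mixed-language translator: split a
# message into fragments, estimate each fragment's language with a
# confidence level (claims 4-6), then translate fragments according to the
# recipient's source->target language preference map (claim 10).

# Toy per-word language hints standing in for a real language estimator.
WORD_LANGS = {"hello": "en", "friend": "en", "bonjour": "fr", "ami": "fr"}

# Toy translation table standing in for a real translator backend.
TRANSLATIONS = {("fr", "en"): {"bonjour": "hello", "ami": "friend"}}

def estimate_fragments(message):
    """Group runs of same-language words into fragments, each tagged with
    an estimated language and a confidence level."""
    fragments = []
    for word in message.split():
        lang = WORD_LANGS.get(word.lower())
        conf = 0.9 if lang else 0.3
        lang = lang or "und"  # undetermined
        if fragments and fragments[-1][1] == lang:
            prev_words, _, prev_conf = fragments[-1]
            fragments[-1] = (prev_words + [word], lang, min(prev_conf, conf))
        else:
            fragments.append(([word], lang, conf))
    return fragments

def translate_for_recipient(message, preference_map):
    """Translate each fragment per the recipient's preference database,
    leaving fragments already in an acceptable language untouched."""
    out = []
    for words, lang, _conf in estimate_fragments(message):
        target = preference_map.get(lang)
        if target and (lang, target) in TRANSLATIONS:
            table = TRANSLATIONS[(lang, target)]
            words = [table.get(w.lower(), w) for w in words]
        out.extend(words)
    return " ".join(out)

print(translate_for_recipient("hello bonjour ami", {"fr": "en"}))
# -> hello hello friend
```

Fragment boundaries here double as the "lingual boundaries" of claims 5 and 12; a real system would use statistical language identification rather than a word list.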
2,600
9,846
9,846
14,718,602
2,699
Disclosed herein are a system, non-transitory computer-readable medium, and method for printing. A plurality of unique identifiers can be generated for a printing device, each of the plurality of unique identifiers being associated with a unique printing configuration to handle print requests. A print request can be received from a print request source, where the print request is addressed to a specified unique identifier. The received print request can be handled in accordance with the unique printing configuration associated with the specified unique identifier.
1. A system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the system to: generate a plurality of unique identifiers for a printing device, each of the plurality of unique identifiers being associated with a unique printing configuration to handle print requests; receive a print request, from a print request source, addressed to a respective one of the plurality of unique identifiers; determine that the received print request satisfies at least one condition of the unique printing configuration associated with the respective unique identifier; and in response to the determination, handle the received print request in accordance with the unique printing configuration associated with the respective unique identifier. 2. The system of claim 1, wherein each of the plurality of unique identifiers comprises a unique email address. 3. The system of claim 1, wherein the plurality of unique identifiers includes an original identifier providing unlimited access to the printing device. 4. The system of claim 1, wherein the at least one condition of the unique printing configuration comprises at least one of a page limit, an ink limit, or a time window in which print requests may be received. 5. The system of claim 1, wherein the at least one condition of the unique printing configuration comprises at least one of a data type requirement or a keyword requirement for content of the received print request. 6. The system of claim 1, further comprising: a display; wherein the executed instructions further cause the system to: provide a configuration screen on the display to indicate the at least one condition for the unique printing configuration associated with the respective unique identifier. 7.
A non-transitory computer readable medium storing instructions that, when executed by a processor of a computing system, cause the computing system to: generate a plurality of email addresses for a printing device, each of the plurality of email addresses being associated with a unique printing configuration to handle print requests; receive a print request, from a print request source, addressed to a respective one of the plurality of email addresses; and handle the received print request in accordance with the unique printing configuration associated with the respective email address. 8. The non-transitory computer readable medium of claim 7, wherein the executed instructions further cause the computing system to: prior to handling the received print request, determine that the received print request satisfies at least one condition of the unique printing configuration associated with the respective email address; wherein the executed instructions cause the computing system to handle the received print request in response to determining that the received print request satisfies the conditions of the unique printing configuration associated with the respective email address. 9. The non-transitory computer readable medium of claim 7, wherein the plurality of email addresses includes an original email address providing unlimited access to the printing device. 10. The non-transitory computer readable medium of claim 8, wherein the at least one condition of the unique printing configuration comprises at least one of a page limit, an ink limit, or a time window in which print requests may be received. 11. The non-transitory computer readable medium of claim 8, wherein the at least one condition of the unique printing configuration comprises at least one of a data type requirement or a keyword requirement for content of the received print request. 12.
The non-transitory computer readable medium of claim 8, wherein the executed instructions further cause the computing system to: provide a configuration screen on a display of the computing system to indicate the at least one condition for the unique printing configuration associated with the respective unique identifier. 13. A method performed by at least one processor of a computing system, the method comprising: associating a configuration with a printing device, the configuration containing information that specifies how to handle a print request associated with the configuration; determining whether a received print request originates from a trustworthy source; determining whether the received print request satisfies a condition indicated in the configuration; and if the source is untrustworthy and the received print request satisfies the condition, associating the received print request with the configuration to print the received print request in accordance with the configuration. 14. The method of claim 13, wherein the configuration (i) specifies a quality and a quantity of content to print, and (ii) indicates a limit to an amount of ink for the received print request. 15. The method of claim 13, wherein the condition comprises at least one of (i) a data type requirement, and (ii) a keyword requirement for content of the received print request.
Disclosed herein are a system, non-transitory computer-readable medium, and method for printing. A plurality of unique identifiers can be generated for a printing device, each of the plurality of unique identifiers being associated with a unique printing configuration to handle print requests. A print request can be received from a print request source, where the print request is addressed to a specified unique identifier. The received print request can be handled in accordance with the unique printing configuration associated with the specified unique identifier.1. A system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the system to: generate a plurality of unique identifiers for a printing device, each of the plurality of unique identifiers being associated with a unique printing configuration to handle print requests; receive a print request, from a print request source, addressed to a respective one of the plurality of unique identifiers; determine that the received print request satisfies at least one condition of the unique printing configuration associated with the respective unique identifier; and in response to the determination, handle the received print request in accordance with the unique printing configuration associated with the respective unique identifier. 2. The system of claim 1, wherein each of the plurality of unique identifiers comprises a unique email address. 3. The system of claim 1, wherein the plurality of unique identifiers includes an original identifier providing unlimited access to the printing device. 4. The system of claim 1, wherein the at least one condition of the unique printing configuration comprises at least one of a page limit, an ink limit, or a time window in which print requests may be received. 5.
The system of claim 1, wherein the at least one condition of the unique printing configuration comprises at least one of a data type requirement or a keyword requirement for content of the received print request. 6. The system of claim 1, further comprising: a display; wherein the executed instructions further cause the system to: provide a configuration screen on the display to indicate the at least one condition for the unique printing configuration associated with the respective unique identifier. 7. A non-transitory computer readable medium storing instructions that, when executed by a processor of a computing system, cause the computing system to: generate a plurality of email addresses for a printing device, each of the plurality of email addresses being associated with a unique printing configuration to handle print requests; receive a print request, from a print request source, addressed to a respective one of the plurality of email addresses; and handle the received print request in accordance with the unique printing configuration associated with the respective email address. 8. The non-transitory computer readable medium of claim 7, wherein the executed instructions further cause the computing system to: prior to handling the received print request, determine that the received print request satisfies at least one condition of the unique printing configuration associated with the respective email address; wherein the executed instructions cause the computing system to handle the received print request in response to determining that the received print request satisfies the conditions of the unique printing configuration associated with the respective email address. 9. The non-transitory computer readable medium of claim 7, wherein the plurality of email addresses includes an original email address providing unlimited access to the printing device. 10.
The non-transitory computer readable medium of claim 8, wherein the at least one condition of the unique printing configuration comprises at least one of a page limit, an ink limit, or a time window in which print requests may be received. 11. The non-transitory computer readable medium of claim 8, wherein the at least one condition of the unique printing configuration comprises at least one of a data type requirement or a keyword requirement for content of the received print request. 12. The non-transitory computer readable medium of claim 8, wherein the executed instructions further cause the computing system to: provide a configuration screen on a display of the computing system to indicate the at least one condition for the unique printing configuration associated with the respective unique identifier. 13. A method performed by at least one processor of a computing system, the method comprising: associating a configuration with a printing device, the configuration containing information that specifies how to handle a print request associated with the configuration; determining whether a received print request originates from a trustworthy source; determining whether the received print request satisfies a condition indicated in the configuration; and if the source is untrustworthy and the received print request satisfies the condition, associating the received print request with the configuration to print the received print request in accordance with the configuration. 14. The method of claim 13, wherein the configuration (i) specifies a quality and a quantity of content to print, and (ii) indicates a limit to an amount of ink for the received print request. 15. The method of claim 13, wherein the condition comprises at least one of (i) a data type requirement, and (ii) a keyword requirement for content of the received print request.
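The printing claims above bind each generated identifier (e.g., a unique email address) to its own configuration and handle a request only if it satisfies that configuration's conditions, such as a page limit or a time window. A minimal, hypothetical sketch follows; the addresses, condition set, and return values are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the claimed scheme: each unique email address maps
# to its own printing configuration, and an incoming print request is
# handled only if it satisfies that configuration's conditions
# (page limit and time window here; claims 4 and 10).
from dataclasses import dataclass

@dataclass
class PrintConfig:
    page_limit: int   # maximum pages accepted
    window: tuple     # (open_hour, close_hour) in 24h time

# The generated identifiers: unique addresses, each with its own config.
# An "original" address with effectively unlimited access (claim 3/9) is
# modeled as a very large limit and an all-day window.
CONFIGS = {
    "printer+guests@example.com": PrintConfig(page_limit=5, window=(9, 17)),
    "printer+owner@example.com": PrintConfig(page_limit=10**6, window=(0, 24)),
}

def handle_print_request(address, pages, hour):
    """Return 'printed' if the request satisfies the configuration bound
    to the address it was sent to, else 'rejected'."""
    config = CONFIGS.get(address)
    if config is None:
        return "rejected"          # unknown identifier
    if pages > config.page_limit:
        return "rejected"          # page-limit condition fails
    open_h, close_h = config.window
    if not (open_h <= hour < close_h):
        return "rejected"          # time-window condition fails
    return "printed"
```

A real deployment would also check the data-type and keyword conditions of claims 5 and 11 against the request's content before printing.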
2,600
9,847
9,847
15,307,148
2,674
A system comprising a multifunctional peripheral device in which the multifunctional peripheral device comprises a processor and a display device, in which the processor detects signature lines in an electronic document, and in which the display device of the multifunctional peripheral device displays the detected signature lines one at a time.
1. A system comprising: a multifunctional peripheral device comprising: a processor; and a display device; in which the processor detects signature lines in an electronic document; and in which the display device displays the detected signature lines one at a time. 2. The system of claim 1, in which the processor transforms content within the electronic document into a grayscale representation. 3. The system of claim 2, in which the processor generates a contour map of the grayscale representation of the electronic document. 4. The system of claim 1, in which displaying the detected signature lines comprises placing each detected signature line at the bottom of the display device. 5. The system of claim 1, in which displaying the detected signature lines comprises creating a zoomed in image of the signature line where the zoom factor displays the width of the signature line filling the width of the display device. 6. The system of claim 1, in which the multifunctional peripheral device further comprises a raised bezel at the bottom of the display device. 7. The system of claim 1, in which the multifunctional peripheral device receives the electronic document by scanning a hard copy of a document. 8. A method of detecting signature lines within an electronic document with a multifunctional peripheral device, comprising: receiving the electronic document at the multifunctional peripheral device; detecting signature lines in the electronic document; and displaying each detected signature line within the electronic document on a display device of the multifunctional peripheral device. 9. The method of claim 8, further comprising detecting on the display device a palm touch from a user of the multifunctional peripheral device and rejecting the palm touch as a touch on the display device. 10. The method of claim 8, in which the multifunctional peripheral device further comprises a raised bezel at the bottom of the display device. 11.
The method of claim 8, further comprising detecting a starting and ending point of the detected signature lines in the electronic document and resizing the image of the detected signature line on the display device to include the signature line. 12. The method of claim 8, in which displaying the signature lines on a display device of the multifunctional peripheral device further comprises displaying a forward and backward button that, upon activation, scrolls through the detected signature lines. 13. A computer program product for receiving edits to an electronic document, the computer program product comprising: a computer readable storage medium comprising computer usable program code embodied therewith, the computer usable program code comprising: computer usable program code to, when executed by a processor, receive the electronic document at a multifunctional peripheral device; computer usable program code to, when executed by a processor, detect signature lines within the electronic document; and computer usable program code to, when executed by a processor, display the signature lines on a display device of the multifunctional peripheral device. 14. The computer program product of claim 13, further comprising computer usable program code to, when executed by a processor, detect on the display device a palm touch from a user of the multifunctional peripheral device and reject the palm touch as a touch on the display device. 15. The computer program product of claim 13, in which displaying the signature lines on a display device of the multifunctional peripheral device further comprises displaying a forward and backward button that, upon activation, scrolls through the detected signature lines.
A system comprising a multifunctional peripheral device in which the multifunctional peripheral device comprises a processor and a display device, in which the processor detects signature lines in an electronic document, and in which the display device of the multifunctional peripheral device displays the detected signature lines one at a time.1. A system comprising: a multifunctional peripheral device comprising: a processor; and a display device; in which the processor detects signature lines in an electronic document; and in which the display device displays the detected signature lines one at a time. 2. The system of claim 1, in which the processor transforms content within the electronic document into a grayscale representation. 3. The system of claim 2, in which the processor generates a contour map of the grayscale representation of the electronic document. 4. The system of claim 1, in which displaying the detected signature lines comprises placing each detected signature line at the bottom of the display device. 5. The system of claim 1, in which displaying the detected signature lines comprises creating a zoomed in image of the signature line where the zoom factor displays the width of the signature line filling the width of the display device. 6. The system of claim 1, in which the multifunctional peripheral device further comprises a raised bezel at the bottom of the display device. 7. The system of claim 1, in which the multifunctional peripheral device receives the electronic document by scanning a hard copy of a document. 8. A method of detecting signature lines within an electronic document with a multifunctional peripheral device, comprising: receiving the electronic document at the multifunctional peripheral device; detecting signature lines in the electronic document; and displaying each detected signature line within the electronic document on a display device of the multifunctional peripheral device. 9. 
The method of claim 8, further comprising detecting on the display device a palm touch from a user of the multifunctional peripheral device and rejecting the palm touch as a touch on the display device. 10. The method of claim 8, in which the multifunctional peripheral device further comprises a raised bezel at the bottom of the display device. 11. The method of claim 8, further comprising detecting a starting and ending point of the detected signature lines in the electronic document and resizing the image of the detected signature line on the display device to include the signature line. 12. The method of claim 8, in which displaying the signature lines on a display device of the multifunctional peripheral device further comprises displaying a forward and backward button that, upon activation, scrolls through the detected signature lines. 13. A computer program product for receiving edits to an electronic document, the computer program product comprising: a computer readable storage medium comprising computer usable program code embodied therewith, the computer usable program code comprising: computer usable program code to, when executed by a processor, receive the electronic document at a multifunctional peripheral device; computer usable program code to, when executed by a processor, detect signature lines within the electronic document; and computer usable program code to, when executed by a processor, display the signature lines on a display device of the multifunctional peripheral device. 14. The computer program product of claim 13, further comprising computer usable program code to, when executed by a processor, detect on the display device a palm touch from a user of the multifunctional peripheral device and reject the palm touch as a touch on the display device. 15.
The computer program product of claim 13, in which displaying the signature lines on a display device of the multifunctional peripheral device further comprises displaying a forward and backward button that, upon activation, scrolls through the detected signature lines.
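The signature-line claims above involve detecting the start and end points of signature lines in a grayscale page and zooming so that a line's width fills the display width (claims 5 and 11). The following is a minimal, hypothetical sketch over a toy pixel grid; the run-length line detector and the thresholds are illustrative assumptions, not the patented algorithm.

```python
# Hypothetical sketch of the claimed workflow: scan a tiny grayscale page
# (0 = black, 255 = white) for long horizontal dark runs, record each run's
# starting and ending columns, and compute the zoom factor that makes the
# line's width fill the display width.

def detect_signature_lines(page, min_len=4, dark=128):
    """Return (row, start_col, end_col) for each horizontal dark run of at
    least min_len pixels."""
    lines = []
    for r, row in enumerate(page):
        start = None
        for c, px in enumerate(row + [255]):  # white sentinel closes a run
            if px < dark and start is None:
                start = c
            elif px >= dark and start is not None:
                if c - start >= min_len:
                    lines.append((r, start, c - 1))
                start = None
    return lines

def zoom_factor(line, display_width):
    """Zoom so the detected line's width fills the display width."""
    _, start, end = line
    return display_width / (end - start + 1)

W, B = 255, 0
page = [
    [W] * 8,
    [W, W, B, B, B, B, W, W],  # a 4-pixel signature line
    [W] * 8,
]
lines = detect_signature_lines(page)
print(lines)                        # -> [(1, 2, 5)]
print(zoom_factor(lines[0], 800))   # -> 200.0
```

On real scans, the grayscale transform and contour map of claims 2 and 3 would precede this step; the zoomed line would then be placed at the bottom of the display, one line at a time, per claims 1 and 4.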
2,600
9,848
9,848
14,043,461
2,611
A tile-based system for processing graphics data. The tile based system includes a first screen-space pipeline, a cache unit, and a first tiling unit. The first tiling unit is configured to transmit a first set of primitives that overlap a first cache tile and a first prefetch command to the first screen-space pipeline for processing, and transmit a second set of primitives that overlap a second cache tile to the first screen-space pipeline for processing. The first prefetch command is configured to cause the cache unit to fetch data associated with the second cache tile from an external memory unit. The first tiling unit may also be configured to transmit a first flush command to the screen-space pipeline for processing with the first set of primitives. The first flush command is configured to cause the cache unit to flush data associated with the first cache tile.
1. A tile-based system for processing graphics data, the system comprising: a first screen-space pipeline; a cache unit; and a first tiling unit configured to: transmit a first set of primitives that overlap a first cache tile and a first prefetch command to the first screen-space pipeline for processing, transmit a second set of primitives that overlap a second cache tile to the first screen-space pipeline for processing, wherein the first prefetch command is configured to cause the cache unit to fetch data associated with the second cache tile from an external memory unit. 2. The system of claim 1, wherein the first tiling unit is further configured to transmit a second prefetch command to the screen-space pipeline for processing, and the second prefetch command is configured to cause the cache unit to fetch data associated with a third cache tile from the external memory unit. 3. The system of claim 2, wherein the first tiling unit is further configured to transmit a first flush command to the screen-space pipeline for processing with the first set of primitives and the first flush command is configured to cause the cache unit to flush data associated with the first cache tile. 4. The system of claim 3, wherein the first tiling unit is configured to transmit the first prefetch command to the first screen-space pipeline prior to transmitting the first set of primitives to the first screen-space pipeline. 5. The system of claim 4, wherein the first tiling unit is further configured to transmit the first flush command to the first screen-space pipeline after transmitting the first set of primitives to the first screen-space pipeline but before transmitting the second set of primitives to the first screen-space pipeline. 6. 
The system of claim 5, wherein the first tiling unit is further configured to transmit the second prefetch command to the first screen-space pipeline after transmitting the first flush command but before transmitting the second set of primitives to the first screen-space pipeline. 7. The system of claim 6, wherein: the first prefetch command is configured to specify a portion of the second cache tile that is intersected by primitives included in the second set of primitives; and the first flush command is configured to specify a portion of the first cache tile that is intersected by primitives included in the first set of primitives. 8. The system of claim 7, wherein, for the first prefetch command, the first tiling unit is configured to determine the portion of the second cache tile that is intersected by primitives included in the second set of primitives prior to transmitting the second set of primitives to the first screen-space pipeline. 9. The system of claim 8, wherein the first screen-space pipeline further comprises a first raster operations unit configured to transmit the first prefetch command, the second prefetch command, and the first flush command to the cache unit for processing. 10. A computing device for processing graphics data, the computing device comprising: a cache unit; an external memory unit; and a graphics processing pipeline comprising: a first screen-space pipeline; and a first tiling unit configured to: transmit a first set of primitives that overlap a first cache tile and a first prefetch command to the first screen-space pipeline for processing, transmit a second set of primitives that overlap a second cache tile to the first screen-space pipeline for processing, wherein the first prefetch command is configured to cause the cache unit to fetch data associated with the second cache tile from the external memory unit. 11.
The computing device of claim 10, wherein the first tiling unit is further configured to transmit a second prefetch command to the screen-space pipeline for processing, and the second prefetch command is configured to cause the cache unit to fetch data associated with a third cache tile from the external memory unit. 12. The computing device of claim 11, wherein the first tiling unit is further configured to transmit a first flush command to the screen-space pipeline for processing with the first set of primitives and the first flush command is configured to cause the cache unit to flush data associated with the first cache tile. 13. The computing device of claim 12, wherein the first tiling unit is configured to transmit the first prefetch command to the first screen-space pipeline prior to transmitting the first set of primitives to the first screen-space pipeline. 14. The computing device of claim 13, wherein the first tiling unit is further configured to transmit the first flush command to the first screen-space pipeline after transmitting the first set of primitives to the first screen-space pipeline but before transmitting the second set of primitives to the first screen-space pipeline. 15. The computing device of claim 14, wherein the first tiling unit is further configured to transmit the second prefetch command to the first screen-space pipeline after transmitting the first flush command but before transmitting the second set of primitives to the first screen-space pipeline. 16. The computing device of claim 15, wherein: the first prefetch command is configured to specify a portion of the second cache tile that is intersected by primitives included in the second set of primitives; and the first flush command is configured to specify a portion of the first cache tile that is intersected by primitives included in the first set of primitives. 17. 
The computing device of claim 16, wherein, for the first prefetch command, the first tiling unit is configured to determine the portion of the second cache tile that is intersected by primitives included in the second set of primitives prior to transmitting the second set of primitives to the first screen-space pipeline. 18. The computing device of claim 17, wherein the first screen-space pipeline further comprises a first raster operations unit configured to transmit the first prefetch command, the second prefetch command, and the first flush command to the cache unit for processing. 19. A method for processing graphics data, the method comprising: transmitting a first set of primitives that overlap a first cache tile and a first prefetch command to a first screen-space pipeline for processing, and transmitting a second set of primitives that overlap a second cache tile to the first screen-space pipeline for processing, wherein the first prefetch command is configured to cause a cache unit to fetch data associated with the second cache tile from an external memory unit. 20. The method of claim 19, further comprising transmitting a second prefetch command to the screen-space pipeline for processing, wherein the second prefetch command is configured to cause the cache unit to fetch data associated with a third cache tile from the external memory unit.
A tile-based system for processing graphics data. The tile-based system includes a first screen-space pipeline, a cache unit, and a first tiling unit. The first tiling unit is configured to transmit a first set of primitives that overlap a first cache tile and a first prefetch command to the first screen-space pipeline for processing, and transmit a second set of primitives that overlap a second cache tile to the first screen-space pipeline for processing. The first prefetch command is configured to cause the cache unit to fetch data associated with the second cache tile from an external memory unit. The first tiling unit may also be configured to transmit a first flush command to the screen-space pipeline for processing with the first set of primitives. The first flush command is configured to cause the cache unit to flush data associated with the first cache tile.
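The command ordering the claims describe (prefetch the next cache tile, process the current tile's primitives, then flush the current tile) can be sketched in a small simulation. This is a minimal illustrative model, not the patented hardware: the class names, the command tuples, and the `run` driver are all assumptions introduced for clarity.

```python
# Hypothetical sketch of the claimed command stream: the tiling unit emits a
# prefetch for the *next* cache tile before the current tile's primitives,
# then a flush for the current tile (claims 3-6), so the cache unit can
# overlap external-memory fetches with rasterization work.

class CacheUnit:
    """Tracks which cache tiles are resident, mimicking prefetch/flush."""
    def __init__(self):
        self.resident = set()
        self.log = []

    def prefetch(self, tile):
        # Fetch the tile's data from the external memory unit into the cache.
        self.resident.add(tile)
        self.log.append(("prefetch", tile))

    def flush(self, tile):
        # Write back and evict the tile's data.
        self.resident.discard(tile)
        self.log.append(("flush", tile))

def tiling_unit_stream(tiles):
    """Yield commands in the order claims 4-6 lay out:
    prefetch(next tile) -> primitives(current tile) -> flush(current tile)."""
    for i, tile in enumerate(tiles):
        if i + 1 < len(tiles):
            yield ("prefetch", tiles[i + 1])  # prefetch next tile early
        yield ("primitives", tile)            # rasterize the current tile
        yield ("flush", tile)                 # flush after its primitives

def run(tiles):
    cache = CacheUnit()
    processed = []
    cache.prefetch(tiles[0])  # the very first tile must be fetched up front
    for cmd, tile in tiling_unit_stream(tiles):
        if cmd == "prefetch":
            cache.prefetch(tile)
        elif cmd == "flush":
            cache.flush(tile)
        else:
            # The prefetch must have landed before the primitives arrive.
            assert tile in cache.resident
            processed.append(tile)
    return processed, cache

processed, cache = run(["tile_A", "tile_B", "tile_C"])
print(processed)  # -> ['tile_A', 'tile_B', 'tile_C']
```

Note how the second prefetch (for `tile_C`) is emitted after the flush of `tile_A` but before `tile_B`'s primitives, matching the ordering of claims 5 and 6.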
TechCenter: 2,600
Unnamed: 0 / level_0: 9,849
ApplicationNumber: 15,347,653
ArtUnit: 2,657
Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces through which a user can efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully-crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each option associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
1. A computer-implemented method for generating expressive content comprising: displaying an expressive keyboard, wherein the expressive keyboard includes an alpha-numeric keyboard for receiving textual input and a plurality of expressive operators for selectively applying an emotional tone or a vocal sound effect associated with each of the plurality of expressive operators to received textual input; receiving textual input; selecting, by a user, at least one of the plurality of expressive operators; in response to receiving the selection of at least one expressive operator, identifying a predefined set of one or more prosody attributes or vocal sound effects associated with the selected expressive operator; combining the associated predefined set of prosody attributes or the vocal sound effect with the received textual input; and outputting the combined set of prosody attributes or the vocal sound effect and textual input to a speech generation engine for generating expressive synthesized speech. 2. The method of claim 1, wherein displaying the expressive keyboard including the plurality of expressive operators comprises displaying a plurality of emoji objects, wherein each emoji object is illustrative of an emotion. 3. The method of claim 2, further comprising displaying a plurality of emoji objects for selectively providing a visual effect associated with a selected emoji object as output. 4. The method of claim 2, wherein in response to a selection of an emoji object: identifying a visual effect associated with the selected emoji object; and outputting the visual effect to a visualization generation engine for generating an expressive display of the visual effect. 5. 
The method of claim 2, wherein displaying the plurality of emoji objects comprises: determining a set of emoji objects to display based on data associated with a user's emotional state, wherein the set includes emoji objects associated with an emotion corresponding to the user's emotional state; and displaying the set of emoji objects. 6. The method of claim 1, wherein combining the associated predefined set of prosody attributes with the received textual input comprises applying at least one of pause length, pitch, speed, and emphasis properties to the textual input. 7. The method of claim 1, wherein combining the vocal sound effect with the received textual input comprises combining a vocal sound effect selected from a group comprised of: a laugh; a sarcastic scoff; a sharp breath in; a disgusted “ugh” sound; an angry “argh” sound; and one or more user-provided sound effects. 8. The method of claim 1, wherein displaying the expressive keyboard including the plurality of expressive operators comprises displaying a plurality of punctuation objects. 9. The method of claim 1, further comprising: providing a voicesetting editor interface; in response to receiving a selection to launch the voicesetting editor interface: parsing the textual input and any received expressive operator selections; displaying the parsed textual input and any received expressive operator selections as selectable tokens; in response to receiving a selection of a token, displaying a set of prosodic properties that can be applied to the selected token; and in response to receiving a selection of a prosodic property, displaying a value associated with the selected prosodic property for allowing a user to adjust the value for controlling expressivity of the textual input when rendered. 10. The method of claim 9, wherein receiving textual input comprises receiving an upload of an existing text file. 11. 
The method of claim 1, further comprising: providing an active listening mode; and in response to receiving a selection to launch the active listening mode, displaying a plurality of selectable active listening mode effect options, wherein each active listening mode effect option has an associated sound effect; in response to receiving a selection of an active listening mode effect option: identifying the associated sound effect; and outputting the associated sound effect to a speech generation engine for playing the associated sound effect on a conversation partner's audio output device. 12. The method of claim 11, wherein: displaying the plurality of selectable active listening mode effect options comprises displaying a plurality of selectable active listening mode effect options wherein each active listening mode effect option has an associated visual effect; and in response to receiving a selection of an active listening mode effect option: identifying the associated visual effect; and outputting the associated visual effect to a visualization generation engine for rendering the associated visual effect on a visual output device. 13. 
A system for generating expressive content, the system comprising: at least one processing device; and at least one computer readable data storage device storing instructions that, when executed by the at least one processing device, cause the system to provide an expressive synthesized speech system, the expressive synthesized speech system operative to: display an expressive keyboard, wherein the expressive keyboard includes an alpha-numeric keyboard for receiving textual input and a plurality of expressive operators for selectively applying an emotional tone or a vocal sound effect associated with each of the plurality of expressive operators to received textual input; receive textual input; select, by a user, at least one of the plurality of expressive operators; in response to receiving the selection of at least one expressive operator, identify a predefined set of one or more prosody attributes or vocal sound effects associated with the selected expressive operator; combine the associated predefined set of prosody attributes or the vocal sound effect with the received textual input; and output the combined set of prosody attributes or the vocal sound effect and textual input to a speech generation engine for generating expressive synthesized speech. 14. The system of claim 13, wherein: the plurality of expressive operators comprises a plurality of emoji objects, each emoji object illustrating an emotion; one or more of the plurality of emoji objects has an associated visual effect; and in response to a selection of an emoji object, the expressive synthesized speech system is further operative to: identify a visual effect associated with the selected emoji object; and output the visual effect to a visualization generation engine for generating an expressive display of the visual effect. 15. 
The system of claim 13, wherein in combining the associated predefined set of prosody attributes with the received textual input, the expressive synthesized speech system is operative to apply at least one of pause length, pitch, speed, and emphasis properties to the textual input. 16. The system of claim 13, wherein the expressive synthesized speech system is further operative to: provide a voicesetting editor interface; in response to receiving a selection to launch the voicesetting editor interface: parse the textual input and any received expressive operator selections; display the parsed textual input and any received expressive operator selections as selectable tokens; in response to receiving a selection of a token, display a set of prosodic properties that can be applied to the selected token; and in response to receiving a selection of a prosodic property, display a value associated with the selected prosodic property for allowing a user to adjust the value for controlling expressivity of the textual input when rendered. 17. The system of claim 13, wherein the expressive synthesized speech system is further operative to: provide an active listening mode; and in response to receiving a selection to launch the active listening mode, display a plurality of selectable active listening mode effect options, wherein each active listening mode effect option has an associated sound effect or visual effect; in response to receiving a selection of an active listening mode effect option: identify the associated sound effect or visual effect; and output the associated sound effect to a speech generation engine for playing the associated sound effect on a conversation partner's audio output device or to a visualization generation engine for generating a display of the associated visual effect on a visual output device. 18. 
A computer readable storage device including computer readable instructions, which when executed by a processing unit are operative to: display an expressive keyboard, wherein the expressive keyboard includes an alpha-numeric keyboard for receiving textual input and a plurality of emoji objects and punctuation objects for selectively applying an emotional tone or a vocal sound effect associated with each of the plurality of emoji objects and punctuation objects to received textual input; receive textual input; select, by a user, at least one of the plurality of emoji objects and punctuation objects; in response to receiving the selection of at least one emoji object or punctuation object, identify a predefined set of one or more prosody attributes or vocal sound effects associated with the selected emoji object or punctuation object; combine the associated predefined set of prosody attributes or the vocal sound effect with the received textual input; and output the combined set of prosody attributes or the vocal sound effect and textual input to a speech generation engine for generating expressive synthesized speech. 19. The computer readable storage device of claim 18, wherein the device is further operative to: provide a voicesetting editor interface; in response to receiving a selection to launch the voicesetting editor interface: parse the textual input and any received expressive operator selections; display the parsed textual input and any received expressive operator selections as selectable tokens; in response to receiving a selection of a token, display a set of prosodic properties that can be applied to the selected token; and in response to receiving a selection of a prosodic property, display a value associated with the selected prosodic property for allowing a user to adjust the value for controlling expressivity of the textual input when rendered. 20. 
The computer readable storage device of claim 18, wherein the device is further operative to: provide an active listening mode; and in response to receiving a selection to launch the active listening mode, display a plurality of selectable active listening mode effect options, wherein each active listening mode effect option has an associated sound effect or visual effect; in response to receiving a selection of an active listening mode effect option: identify the associated sound effect or visual effect; and output the associated sound effect to a speech generation engine for playing the associated sound effect on a conversation partner's audio output device or to a visualization generation engine for generating a display of the associated visual effect on the conversation partner's visual output device.
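The core lookup the method claims describe, mapping a selected expressive operator to either a predefined set of prosody attributes or a vocal sound effect and combining it with the text, can be sketched as a small table-driven function. The operator names, attribute values, and payload format below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of claim 1's steps: select an expressive operator,
# identify its predefined prosody attributes or vocal sound effect, combine
# it with the received text, and hand the result to a speech engine.
# All table entries and field names are illustrative, not from the patent.

PROSODY_BY_OPERATOR = {
    "happy_emoji": {"pitch": "+15%", "speed": "+10%", "emphasis": "moderate"},
    "sad_emoji":   {"pitch": "-10%", "speed": "-20%", "pause_length": "long"},
    "exclamation": {"emphasis": "strong", "speed": "+5%"},
}
SOUND_EFFECT_BY_OPERATOR = {
    "laugh_emoji": "laugh",            # claim 7 lists a laugh...
    "scoff_emoji": "sarcastic_scoff",  # ...and a sarcastic scoff
}

def combine(text, operator):
    """Build the payload that would be sent to the speech generation engine."""
    if operator in SOUND_EFFECT_BY_OPERATOR:
        # Operator carries a vocal sound effect rather than prosody attributes.
        return {"text": text, "sound_effect": SOUND_EFFECT_BY_OPERATOR[operator]}
    # Otherwise apply the operator's predefined prosody attributes
    # (pause length, pitch, speed, emphasis per claim 6).
    return {"text": text, "prosody": PROSODY_BY_OPERATOR.get(operator, {})}

payload = combine("See you soon", "happy_emoji")
print(payload["prosody"])  # the predefined attributes for the happy emoji
```

In a full system the same pattern would extend to visual effects (claim 4) by routing a third table's entries to the visualization generation engine instead of the speech engine.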
Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. An expressive synthesized speech system provides an expressive keyboard for enabling input of textual content and for selecting expressive operators, such as emoji objects or punctuation objects for applying predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully-crafted synthetic speech. An active listening mode (ALM) is provided, which when selected, a set of ALM effect options are displayed, wherein each option is associated with a particular sound effect and/or visual effect. The user is enabled to rapidly respond with expressive vocal sound effects or visual effects while listening to others speak.1. 
A computer-implemented method for generating expressive content comprising: displaying an expressive keyboard, wherein the expressive keyboard includes an alpha-numeric keyboard for receiving textual input and a plurality of expressive operators for selectively applying an emotional tone or a vocal sound effect associated with each of the plurality of expressive operators to received textual input; receiving textual input; selecting, by a user, at least one of the plurality of expressive operators; in response to receiving the selection of at least one of an expressive operator, identifying a predefined set of one or more prosody attributes or vocal sound effects associated with the selected expressive operator; combining the associated predefined set of prosody attributes or the vocal sound effect with the received textual input; and outputting the combined set of prosody attributes or the vocal sound effect and textual input to a speech generation engine for generating expressive synthesized speech. 2. The method of claim 1, wherein displaying the expressive keyboard including the plurality of expressive operators comprises displaying a plurality of emoji objects, wherein each emoji object is illustrative of an emotion. 3. The method of claim 2, further comprising displaying a plurality of emoji objects for selectively providing a visual effect associated with a selected emoji object as output. 4. The method of claim 2, wherein in response to a selection of an emoji object: identifying a visual effect associated with the selected emoji object; and outputting the visual effect to a visualization generation engine for generating an expressive display of the visual effect. 5. 
The method of claim 2, wherein displaying the plurality of emoji objects comprises: determining a set of emoji objects to display based on data associated with a user's emotional state, wherein the set includes emoji objects associated with an emotion corresponding to the user's emotional state; and displaying the set of emoji objects. 6. The method of claim 1, wherein combining the associated predefined set of prosody attributes with the received textual input comprises applying at least one of pause length, pitch, speed, and emphasis properties to the textual input. 7. The method of claim 1, wherein combining the vocal sound effect with the received textual input comprises combining a vocal sound effect selected from a group comprised of: a laugh; a sarcastic scoff; a sharp breath in; a disgusted “ugh” sound; an angry “argh” sound; and one or more user-provided sound effects. 8. The method of claim 1, wherein displaying the expressive keyboard including the plurality of expressive operators comprises displaying a plurality of punctuation objects. 9. The method of claim 1, further comprising; providing a voicesetting editor interface; in response to receiving a selection to launch the voicesetting editor interface: parsing the textual input and any received expressive operator selections; displaying the parsed textual input and any received expressive operator selections as selectable tokens; in response to receiving a selection of a token, displaying a set of prosodic properties that can be applied to the selected token; and in response to receiving a selection of a prosodic property, displaying a value associated with the selected prosodic property for allowing a user to adjust the value for controlling expressivity of the textual input when rendered. 10. The method of claim 9, wherein receiving textual input comprises receiving an upload of an existing text file. 11. 
The method of claim 1, further comprising: providing an active listening mode; and in response to receiving a selection to launch the active listening mode, displaying a plurality of selectable active listening mode effect options, wherein each active listening mode effect option has an associated sound effect; in response to receiving a selection of an active listening mode effect option: identifying the associated sound effect; and outputting the associated sound effect to a speech generation engine for playing the associated sound effect on a conversation partner's audio output device. 12. The method of claim 11, wherein: displaying the plurality of selectable active listening mode effect options comprises displaying a plurality of selectable active listening mode effect options wherein each active listening mode effect option has an associated visual effect; and in response to receiving a selection of an active listening mode effect option: identifying the associated visual effect; and outputting the associated visual effect to a visualization generation engine for rendering the associated visual effect on a visual output device. 13. 
A system for generating expressive content, the computing device comprising: at least one processing device; and at least one computer readable data storage device storing instructions that, when executed by the at least one processing device, cause the computing device to provide an expressive synthesized speech system, the expressive synthesized speech system operative to: display an expressive keyboard, wherein the expressive keyboard includes an alpha-numeric keyboard for receiving textual input and a plurality of expressive operators for selectively applying an emotional tone or a vocal sound effect associated with each of the plurality of expressive operators to received textual input; receive textual input; select, by a user, at least one of the plurality of expressive operators; in response to receiving the selection of at least one of an expressive operator, identify a predefined set of one or more prosody attributes or vocal sound effects associated with the selected expressive operator; combine the associated predefined set of prosody attributes or the vocal sound effect with the received textual input; and output the combined set of prosody attributes or the vocal sound effect and textual input to a speech generation engine for generating expressive synthesized speech. 14. The system of claim 13, wherein: the plurality of expressive operators comprises a plurality of emoji objects, each emoji object illustrating an emotion; one or more of the plurality of emoji objects has an associated visual effect; and in response to a selection of an emoji object, the expressive synthesized speech system is further operative to: identify a visual effect associated with the selected emoji object; and output the visual effect to a visualization generation engine for generating an expressive display of the visual effect. 15. 
The system of claim 13, wherein in combining the associated predefined set of prosody attributes with the received textual input, the expressive synthesized speech system is operative to apply at least one of pause length, pitch, speed, and emphasis properties to the textual input. 16. The system of claim 13, wherein the expressive synthesized speech system is further operative to: provide a voicesetting editor interface; in response to receiving a selection to launch the voicesetting editor interface: parse the textual input and any received expressive operator selections; display the parsed textual input and any received expressive operator selections as selectable tokens; in response to receiving a selection of a token, display a set of prosodic properties that can be applied to the selected token; and in response to receiving a selection of a prosodic property, display a value associated with the selected prosodic property for allowing a user to adjust the value for controlling expressivity of the textual input when rendered. 17. The system of claim 13, wherein the expressive synthesized speech system is further operative to: provide an active listening mode; and in response to receiving a selection to launch the active listening mode, display a plurality of selectable active listening mode effect options, wherein each active listening mode effect option has an associated sound effect or visual effect; in response to receiving a selection of an active listening mode effect option: identify the associated sound effect or visual effect; and output the associated sound effect to a speech generation engine for playing the associated sound effect on a conversation partner's audio output device or to a visualization generation engine for generating a display of the associated visual effect on a visual output device. 18. 
A computer readable storage device including computer readable instructions, which when executed by a processing unit are operative to: display an expressive keyboard, wherein the expressive keyboard includes an alpha-numeric keyboard for receiving textual input and a plurality of emoji objects and punctuation objects for selectively applying an emotional tone or a vocal sound effect associated with each of the plurality of emoji objects and punctuation objects to received textual input; receive textual input; receive a selection, by a user, of at least one of the plurality of emoji objects and punctuation objects; in response to receiving the selection of at least one emoji object or punctuation object, identify a predefined set of one or more prosody attributes or vocal sound effects associated with the selected emoji object or punctuation object; combine the associated predefined set of prosody attributes or the vocal sound effect with the received textual input; and output the combined set of prosody attributes or the vocal sound effect and textual input to a speech generation engine for generating expressive synthesized speech. 19. The computer readable storage device of claim 18, wherein the device is further operative to: provide a voicesetting editor interface; in response to receiving a selection to launch the voicesetting editor interface: parse the textual input and any received expressive operator selections; display the parsed textual input and any received expressive operator selections as selectable tokens; in response to receiving a selection of a token, display a set of prosodic properties that can be applied to the selected token; and in response to receiving a selection of a prosodic property, display a value associated with the selected prosodic property for allowing a user to adjust the value for controlling expressivity of the textual input when rendered. 20. 
The computer readable storage device of claim 18, wherein the device is further operative to: provide an active listening mode; and in response to receiving a selection to launch the active listening mode, display a plurality of selectable active listening mode effect options, wherein each active listening mode effect option has an associated sound effect or visual effect; in response to receiving a selection of an active listening mode effect option: identify the associated sound effect or visual effect; and output the associated sound effect to a speech generation engine for playing the associated sound effect on a conversation partner's audio output device or to a visualization generation engine for generating a display of the associated visual effect on the conversation partner's visual output device.
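The active listening mode claims route each selected effect option's associated sound effect to a speech generation engine and its associated visual effect to a visualization generation engine. A hedged sketch of that dispatch; the option names and effect identifiers are invented for illustration:

```python
# Hypothetical sketch of the active-listening-mode dispatch: each
# selectable effect option carries an associated sound effect, a visual
# effect, or both, and a selection routes each effect to the matching
# engine. None of these option names come from the claims.

EFFECT_OPTIONS = {
    "applause": {"sound": "applause.wav"},
    "hearts":   {"visual": "hearts_overlay"},
    "drumroll": {"sound": "drumroll.wav", "visual": "screen_flash"},
}

def dispatch(option, speech_engine, visualization_engine):
    """Route the selected option's effects to the appropriate engines."""
    effects = EFFECT_OPTIONS[option]
    routed = []
    if "sound" in effects:
        speech_engine(effects["sound"])
        routed.append("speech")
    if "visual" in effects:
        visualization_engine(effects["visual"])
        routed.append("visualization")
    return routed
```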
2,600
9,850
9,850
15,055,874
2,649
A system for providing mobile interactive satellite services includes a satellite operable to communicate with mobile units, a terrestrial base transceiver station operable to communicate with mobile units, and a ground station in communication with the satellite and the terrestrial base transceiver station to provide mobile interactive satellite services. The mobile interactive satellite services include a multicast component and an interactive component such that the ground station provides both the multicast component and the interactive component using the satellite, with the terrestrial base transceiver station used to provide an ancillary terrestrial component. A device for communicating with a mobile interactive satellite service system includes an antenna, a transceiver coupled to the antenna and operable to communicate with a mobile interactive satellite service system, a user input device, an output device, a processor unit, and a network interface. The processor unit is coupled to the user input device, the output device, and the transceiver such that the processor is operable to output received information from the transceiver using the output device, to receive interactive information from the user input device, and to transmit data based on the received interactive information using the transceiver. The network interface is coupled to the processor such that the processor is operable to side load information.
1. A system for providing mobile interactive satellite services, the system comprising: a satellite operable to communicate with mobile units; a terrestrial base transceiver station operable to communicate with the mobile units; a ground station in communication with the satellite and the terrestrial base transceiver station to provide mobile interactive satellite services, wherein the mobile interactive satellite services include a multicast component and an interactive component such that the ground station provides both the multicast component and the interactive component using the satellite. 2. The system of claim 1, wherein the satellite includes one or more from the group consisting of: a satellite in a geostationary orbit; a satellite in a low earth orbit; a satellite in a medium earth orbit; and a satellite in a circular orbit. 3. The system of claim 1, further comprising a secondary satellite in communication with the ground station. 4. The system of claim 3, wherein the secondary satellite communicates information related to the multicast component of the mobile interactive satellite services between the terrestrial base transceiver station and the ground station. 5. The system of claim 3, wherein the secondary satellite communicates information related to the interactive component of the mobile interactive satellite services between the terrestrial base transceiver station and the ground station. 6. The system of claim 1, wherein the terrestrial base transceiver station is operable to communicate information related to the multicast component of the mobile interactive satellite services between the ground station and the mobile units. 7. The system of claim 1, wherein the terrestrial base transceiver station is operable to communicate information related to the interactive component of the mobile interactive satellite services between the ground station and the mobile units. 8. 
The system of claim 1, wherein the satellite is operable to communicate information related to the multicast component of the mobile interactive satellite services between the ground station and the mobile units. 9. The system of claim 1, wherein the satellite is operable to communicate information related to the interactive component of the mobile interactive satellite services between the ground station and the mobile units. 10. The system of claim 1, further comprising a backhaul network coupled to the terrestrial base transceiver station and coupled to the ground station such that the ground station is operable to communicate with the terrestrial base transceiver station using the backhaul network. 11. The system of claim 1, wherein the ground station comprises: a multicast core; an interactive core; and a communication unit coupled to the multicast core and the interactive core such that the communications unit is operable to communicate with the satellite to provide multicast and interactive communications to mobile units. 12. The system of claim 11, wherein the communication unit includes a radio frequency communication unit. 13. The system of claim 12, wherein the communication unit includes an advanced antenna unit disposed between the radio frequency communication unit, and the multicast core and interactive core. 14. The system of claim 13, wherein the advanced antenna unit performs ground-based beam-forming using digital signal processing on signals provided by the multicast core and the interactive core to create uplink communications usable by the satellite to transmit multiple beams. 15. The system of claim 1, wherein the satellite comprises a smart-antenna unit. 16. The system of claim 15, wherein the smart-antenna unit is operable to perform satellite-based beamforming. 17. 
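Claim 14 describes ground-based beam-forming via digital signal processing so the satellite can transmit multiple beams. A toy numerical sketch of the underlying idea, phase-steering a uniform linear array with per-element complex weights; the half-wavelength element spacing is an assumption for illustration, not a detail from the claims:

```python
import cmath
import math

# Toy beamforming sketch: per-element complex weights phase-steer a
# uniform linear array toward a chosen angle. Half-wavelength spacing
# is assumed purely for illustration.

def steering_weights(num_elements, steer_angle_deg, spacing_wavelengths=0.5):
    """Weights that align element phases for a beam at steer_angle_deg."""
    theta = math.radians(steer_angle_deg)
    return [cmath.exp(-2j * math.pi * spacing_wavelengths * n * math.sin(theta))
            for n in range(num_elements)]

def array_gain(weights, arrival_angle_deg, spacing_wavelengths=0.5):
    """Magnitude of the array response to a plane wave from a given angle."""
    theta = math.radians(arrival_angle_deg)
    response = sum(w * cmath.exp(2j * math.pi * spacing_wavelengths * n * math.sin(theta))
                   for n, w in enumerate(weights))
    return abs(response)
```

At the steered angle the element contributions add coherently (gain equal to the element count); off-axis arrivals partially cancel.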
The system of claim 1 wherein the mobile interactive satellite services include one or more from the group consisting of: broadcast audio; broadcast video; broadcast data; interactive audio; interactive video; interactive data; and telephony. 18. A device for communicating with a mobile interactive satellite service system, the device comprising: an antenna; a transceiver coupled to the antenna and operable to communicate information related to mobile interactive satellite services; a user input device; an output device; a processor unit coupled to the user input device, the output device, and the transceiver such that the processor is operable to output received information from the transceiver using the output device, to receive interactive information from the user input device, and to transmit data based on the received interactive information using the transceiver; and a network interface coupled to the processor such that the processor is operable to side load information. 19. The device of claim 18 wherein the antenna includes multiple elements. 20. The device of claim 19 wherein the antenna is an antenna array. 21. The device of claim 19 wherein digital signal processing is used with the multiple elements of the antenna. 22. The device of claim 18, wherein the antenna includes a terrestrial element having linear polarization and a satellite element having circular polarization. 23. The device of claim 18, wherein the transceiver is operable to communicate with a terrestrial component and a satellite component. 24. The device of claim 18 wherein the input device is one or more from the group consisting of: a mouse; a touch screen; a keyboard; a button; a microphone; a video camera; a joystick; a port; and a remote control. 25. The device of claim 18 wherein the output device includes one or more from the group consisting of: a display; a speaker; a light; and a port. 26. 
The device of claim 25, wherein the port includes one or more from the group consisting of: a serial port; a network port; and a data interface. 27. The device of claim 18, wherein the processor unit includes a voice recognition unit. 28. The device of claim 18, further comprising a side-loading network interface wherein the processor unit is coupled to the side-loading network interface such that the processor unit can receive information from the transceiver and from the side-loading network interface. 29. The device of claim 28, wherein the processor unit is operable to transmit information using the side-loading network interface. 30. The device of claim 28 wherein the side-loading network interface is a wireless network interface. 31. The device of claim 30 wherein the wireless network interface includes one or more from the group consisting of: an IEEE 802.11 WiFi interface; an IEEE 802.16 WiMAX interface; a Bluetooth interface; and an IEEE 802.20 interface. 32. The device of claim 18, wherein the output device is an interface to a vehicle subsystem. 33. The device of claim 32, wherein the vehicle subsystem includes one or more from the group consisting of: a video entertainment system; an audio entertainment system; a navigation system; and a vehicle data bus system. 34. The device of claim 32, wherein the processor unit is operable to receive information and use the received information to update the vehicle subsystem. 35. The device of claim 18, further comprising a vehicle communication bus interface. 36. The device of claim 35, wherein the vehicle communication bus interface includes one or more from the group consisting of: CAN, OBD-II, and MOST. 37. 
A satellite for providing mobile interactive satellite services, the satellite comprising: a ground station uplink that receives multicast content and interactive content from a ground station; a mobile unit downlink that transmits multicast content and interactive content to mobile units; a mobile unit uplink that receives interactive content from the mobile units; a ground station downlink that transmits interactive content to the ground station; and a processor unit coupled to the ground station uplink, the mobile unit downlink, the mobile unit uplink, and the ground station downlink, and configured to provide mobile interactive satellite services to the mobile units. 38. The satellite of claim 37, wherein the ground station uplink, the mobile unit downlink, the mobile unit uplink, and the ground station downlink operate in conjunction with an ancillary terrestrial communication system. 39. The satellite of claim 37, wherein the mobile interactive satellite services includes one or more from the group consisting of: vehicle navigation; broadcast video; interactive video; interactive data communications; vehicle telemetry; vehicle subsystem control; vehicle subsystem maintenance; and vehicle security services. 40. The satellite of claim 37, wherein the mobile unit downlink uses beamforming. 41. A ground station for providing mobile interactive satellite services, the ground station comprising: a multicast core; an interactive core; and a radio frequency communication unit coupled to the multicast core and the interactive core such that the radio frequency communications unit is operable to communicate with the satellite to provide multicast and interactive communications to mobile units. 42. The ground station of claim 41, wherein the radio frequency communication unit uses beamforming. 43. 
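Claim 37's satellite has four links and a processor that relays multicast and interactive content downlink to mobile units while relaying interactive content from mobile units back to the ground station. A hedged sketch of that routing logic; the source/kind labels are invented for illustration:

```python
# Hedged sketch of the link routing implied by the satellite claim:
# ground-station traffic (multicast or interactive) goes out the mobile
# unit downlink; interactive traffic received from mobile units goes out
# the ground station downlink. Labels are hypothetical.

def route(source, kind):
    """Pick the output link for a unit of traffic arriving at the satellite."""
    if source == "ground_station" and kind in ("multicast", "interactive"):
        return "mobile_unit_downlink"
    if source == "mobile_unit" and kind == "interactive":
        return "ground_station_downlink"
    raise ValueError("unsupported traffic: %s/%s" % (source, kind))
```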
A terrestrial base transceiver station for providing ancillary terrestrial communications in a mobile interactive satellite service, the terrestrial base transceiver station comprising: a ground station communication link for communicating multicast and interactive content with a ground station; a mobile unit downlink that transmits multicast content and interactive content to mobile units; a mobile unit uplink that receives interactive content from the mobile units; and a processor unit coupled to the ground station communication link, the mobile unit downlink, and the mobile unit uplink, and configured to provide ancillary terrestrial communications in a mobile interactive satellite service. 44. A method for providing mobile interactive satellite services on a satellite, the method comprising: receiving multicast data for transmission to mobile units; receiving interactive data for transmission to mobile units; and transmitting multicast data and interactive data to mobile units. 45. The method of claim 44 wherein receiving multicast data for transmission to mobile units includes receiving multicast data from a ground station. 46. The method of claim 44 wherein receiving multicast data for transmission to mobile units includes receiving multicast data from a second satellite. 47. The method of claim 44 wherein receiving interactive data includes receiving interactive data from mobile units. 48. The method of claim 44 wherein receiving interactive data includes receiving interactive data from a ground station. 49. The method of claim 44 wherein the received multicast data has been processed by a ground station for transmission using advanced antenna technology. 50. The method of claim 49 wherein the advanced antenna technology includes one or more of the group consisting of: MIMO and beamforming. 51. The method of claim 44 wherein transmitting multicast data and interactive data to mobile units includes transmitting multiple beams. 52. 
The method of claim 51 wherein each of the multiple beams are configured to cover a geographical area. 53. The method of claim 44 wherein transmitting multicast data and interactive data to mobile units includes: transmitting multicast data over a first portion of available radio frequency spectrum; and transmitting interactive data over a second portion of available radio frequency spectrum. 54. The method of claim 53 wherein the first portion and second portion of available radio frequency spectrum are configurable. 55. A method for providing ground station support in a mobile interactive satellite service, the method comprising: transmitting multicast data such that the multicast data may be communicated to mobile units; receiving interactive mobile unit data; processing the received interactive mobile unit data to determine interactive data to be communicated to at least one of the mobile units; and transmitting the determined interactive data such that the determined interactive data may be communicated to the at least one of the mobile units. 56. The method of claim 55, further comprising receiving multicast data from a multicast data feed. 57. The method of claim 56, wherein receiving multicast data includes receiving multicast data from a satellite feed. 58. The method of claim 56, wherein receiving multicast data includes receiving multicast data from a terrestrial feed. 59. The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data through a satellite. 60. The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data from a terrestrial base transceiver station. 61. The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data from a server. 62. 
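Claims 53 and 54 split the available radio frequency spectrum into a configurable first portion for multicast data and a second portion for interactive data. A small illustrative model of that configurable partition; the band edges in MHz are hypothetical:

```python
# Illustrative model of the configurable spectrum split: a first portion
# of the band carries multicast data and the remainder carries
# interactive data. Frequencies below are invented for the example.

def partition_spectrum(low_mhz, high_mhz, multicast_fraction):
    """Return the configurable multicast and interactive sub-bands (MHz)."""
    if not 0.0 < multicast_fraction < 1.0:
        raise ValueError("multicast_fraction must be strictly between 0 and 1")
    split = low_mhz + (high_mhz - low_mhz) * multicast_fraction
    return {"multicast": (low_mhz, split), "interactive": (split, high_mhz)}
```

Reconfiguring the split is just a call with a new fraction, matching claim 54's requirement that both portions be configurable.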
The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data from an external communications network. 63. The method of claim 55, wherein processing the received interactive mobile unit data includes: retrieving data in response to the received interactive mobile unit data; and assembling the retrieved data for transmission. 64. The method of claim 55, wherein transmitting the determined interactive data includes transmitting the determined interactive data to a satellite for transmission to mobile units. 65. The method of claim 55, wherein transmitting the determined interactive data includes transmitting the determined interactive data to a terrestrial base transceiver station for transmission to mobile units. 66. A method for providing ancillary terrestrial communication in a mobile interactive satellite system, the method comprising: providing mobile interactive satellite services using a terrestrial base transceiver station, the mobile interactive satellite services including a multicast component and an interactive component; transmitting information for the multicast component and the interactive component using a satellite; and transmitting information for the multicast component and the interactive component using the terrestrial base transceiver station. 67. The method of claim 66, further comprising receiving information for the interactive component using the satellite. 68. The method of claim 67, further comprising receiving information for the interactive component using the terrestrial base transceiver station. 69. 
A method for providing mobile interactive satellite services, the method comprising: receiving multicast data for transmission using a communication system that includes a satellite with an ancillary terrestrial component; receiving interactive data using the communication system; determining interactive data for transmission using the received interactive data; and transmitting the multicast data and the determined interactive data using the communication system to provide mobile interactive satellite services. 70. The method of claim 69, wherein receiving multicast data includes receiving broadcast video feeds. 71. The method of claim 69, wherein receiving multicast data includes receiving broadcast audio feeds. 72. The method of claim 69, wherein receiving interactive data using the communication system includes receiving data from a mobile unit through the satellite. 73. The method of claim 69, wherein receiving interactive data using the communication system includes receiving data from a mobile unit through the ancillary terrestrial component. 74. The method of claim 69, wherein the ancillary terrestrial component includes a terrestrial base transceiver station. 75. The method of claim 69, wherein determining interactive data for transmission using the received interactive data includes retrieving information based on the received interactive data. 76. The method of claim 69, wherein transmitting the multicast data and the determined interactive data using the communication system includes transmitting using both the satellite and the ancillary terrestrial component. 77. The method of claim 69, wherein receiving multicast data includes receiving a broadcast video feed; and transmitting the multicast data includes transmitting the received broadcast video feed to provide multicast video in a mobile interactive satellite service. 78. The method of claim 69, wherein the mobile interactive satellite services include vehicle navigation. 79. 
The method of claim 78, wherein transmitting the multicast data and the determined interactive data using the communication system includes transmitting updates to navigational information. 80. The method of claim 69, wherein transmitting the multicast data and the determined interactive data using the communication system includes transmitting a system update for a mobile unit system. 81. The method of claim 80, wherein the system update is a software update. 82. The method of claim 69, wherein the mobile interactive satellite services include vehicle security services. 83. The method of claim 82, wherein receiving interactive data using the communication system includes receiving vehicle security information. 84. The method of claim 83, wherein vehicle security information includes one or more from the group consisting of: vehicle telemetry; vehicle location; vehicle security incidents; telephonic information; and vehicle system information. 85. The method of claim 84, wherein the mobile interactive satellite services include one or more from the group consisting of: remote vehicle tracking; remote vehicle disabling; remote vehicle enabling; remote vehicle unlocking; remote vehicle monitoring; vehicle black box access; and vehicle black box recording. 86. In a vehicle, a method for interacting with a mobile interactive satellite service with an ancillary terrestrial component, the method comprising: receiving information from a vehicle communication bus; and transmitting data related to mobile interactive satellite services based on the received information. 87. The method of claim 86, wherein receiving information includes receiving an air bag deployment notification. 88. The method of claim 87, wherein transmitting data includes transmitting the air bag deployment notification. 89. 
The method of claim 86, further comprising: receiving a request from the mobile interactive satellite system to perform an action on a vehicle system; and performing the action on the vehicle system. 90. The method of claim 86, wherein receiving information includes periodically requesting information using the vehicle communication bus. 91. The method of claim 89, wherein performing the action includes upgrading software. 92. The method of claim 89, wherein performing the action includes upgrading firmware. 93. The method of claim 89, wherein performing the action includes modifying the vehicle system configuration. 94. The method of claim 89, wherein performing the action includes retrieving data. 95. The method of claim 86, further comprising: receiving information related to mobile interactive satellite services; and outputting the received information. 96. The method of claim 95, wherein the received information is one or more from the group consisting of: video, audio, and data. 97. The method of claim 95, wherein outputting the received information includes using an output device including one or more from the group consisting of: a display, a speaker, and a light.
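The vehicle-side method claims describe forwarding events read from the vehicle communication bus (e.g. an air bag deployment) to the satellite service and performing remotely requested actions on vehicle systems. A hypothetical sketch of that flow; the event names, message fields, and action names are invented for illustration:

```python
# Hypothetical sketch of the vehicle-side flow: bus events are forwarded
# to the mobile interactive satellite service, and requested actions
# (software upgrade, configuration change, data retrieval, ...) are
# dispatched to handlers. Names below do not come from the claims.

def on_bus_event(event, transmit):
    """Forward service-relevant bus events, e.g. an air bag deployment."""
    if event == "airbag_deployed":
        transmit({"type": "airbag_deployment_notification"})
        return True
    return False

def perform_action(request, handlers):
    """Perform an action requested by the satellite service on a vehicle system."""
    handler = handlers.get(request["action"])
    if handler is None:
        raise KeyError(request["action"])
    return handler(request)
```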
The method of claim 51 wherein each of the multiple beams is configured to cover a geographical area. 53. The method of claim 44 wherein transmitting multicast data and interactive data to mobile units includes: transmitting multicast data over a first portion of available radio frequency spectrum; and transmitting interactive data over a second portion of available radio frequency spectrum. 54. The method of claim 53 wherein the first portion and second portion of available radio frequency spectrum are configurable. 55. A method for providing ground station support in a mobile interactive satellite service, the method comprising: transmitting multicast data such that the multicast data may be communicated to mobile units; receiving interactive mobile unit data; processing the received interactive mobile unit data to determine interactive data to be communicated to at least one of the mobile units; and transmitting the determined interactive data such that the determined interactive data may be communicated to the at least one of the mobile units. 56. The method of claim 55, further comprising receiving multicast data from a multicast data feed. 57. The method of claim 56, wherein receiving multicast data includes receiving multicast data from a satellite feed. 58. The method of claim 56, wherein receiving multicast data includes receiving multicast data from a terrestrial feed. 59. The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data through a satellite. 60. The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data from a terrestrial base transceiver station. 61. The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data from a server. 62. 
The method of claim 55, wherein receiving interactive mobile unit data includes receiving interactive mobile unit data from an external communications network. 63. The method of claim 55, wherein processing the received interactive mobile unit data includes: retrieving data in response to the received interactive mobile unit data; and assembling the retrieved data for transmission. 64. The method of claim 55, wherein transmitting the determined interactive data includes transmitting the determined interactive data to a satellite for transmission to mobile units. 65. The method of claim 55, wherein transmitting the determined interactive data includes transmitting the determined interactive data to a terrestrial base transceiver station for transmission to mobile units. 66. A method for providing ancillary terrestrial communication in a mobile interactive satellite system, the method comprising: providing mobile interactive satellite services using a terrestrial base transceiver station, the mobile interactive satellite services including a multicast component and an interactive component; transmitting information for the multicast component and the interactive component using a satellite; and transmitting information for the multicast component and the interactive component using the terrestrial base transceiver station. 67. The method of claim 66, further comprising receiving information for the interactive component using the satellite. 68. The method of claim 67, further comprising receiving information for the interactive component using the terrestrial base transceiver station. 69. 
A method for providing mobile interactive satellite services, the method comprising: receiving multicast data for transmission using a communication system that includes a satellite with an ancillary terrestrial component; receiving interactive data using the communication system; determining interactive data for transmission using the received interactive data; transmitting the multicast data and the determined interactive data using the communication system to provide mobile interactive satellite services. 70. The method of claim 69, wherein receiving multicast data includes receiving broadcast video feeds. 71. The method of claim 69, wherein receiving multicast data includes receiving broadcast audio feeds. 72. The method of claim 69, wherein receiving interactive data using the communication system includes receiving data from a mobile unit through the satellite. 73. The method of claim 69, wherein receiving interactive data using the communication system includes receiving data from a mobile unit through the ancillary terrestrial component. 74. The method of claim 69, wherein the ancillary terrestrial component includes a terrestrial base transceiver station. 75. The method of claim 69, wherein determining interactive data for transmission using the received interactive data includes retrieving information based on the received interactive data. 76. The method of claim 69, wherein transmitting the multicast data and the determined interactive data using the communication system includes transmitting using both the satellite and the ancillary terrestrial component. 77. The method of claim 69, wherein receiving multicast data includes receiving a broadcast video feed; and transmitting the multicast data includes transmitting the received broadcast video feed to provide multicast video in a mobile interactive satellite service. 78. The method of claim 69, wherein the mobile interactive satellite services include vehicle navigation. 79. 
The method of claim 78, wherein transmitting the multicast data and the determined interactive data using the communication system includes transmitting updates to navigational information. 80. The method of claim 69, wherein transmitting the multicast data and the determined interactive data using the communication system includes transmitting a system update for a mobile unit system. 81. The method of claim 80, wherein the system update is a software update. 82. The method of claim 69, wherein the mobile interactive satellite services include vehicle security services. 83. The method of claim 82, wherein receiving interactive data using the communication system includes receiving vehicle security information. 84. The method of claim 83, wherein vehicle security information includes one or more from the group consisting of: vehicle telemetry; vehicle location; vehicle security incidents; telephonic information; and vehicle system information. 85. The method of claim 84, wherein the mobile interactive satellite services include one or more from the group consisting of: remote vehicle tracking; remote vehicle disabling; remote vehicle enabling; remote vehicle unlocking; remote vehicle monitoring; vehicle black box access; and vehicle black box recording. 86. In a vehicle, a method for interacting with a mobile interactive satellite service with an ancillary terrestrial component, the method comprising: receiving information from a vehicle communication bus; transmitting data related to mobile interactive satellite services based on the received information. 87. The method of claim 86, wherein receiving information includes receiving an air bag deployment notification. 88. The method of claim 87, wherein transmitting data includes transmitting the air bag deployment notification. 89. 
The method of claim 86, further comprising: receiving a request from the mobile interactive satellite system to perform an action on a vehicle system; and performing the action on the vehicle system. 90. The method of claim 86, wherein receiving information includes periodically requesting information using the vehicle communication databus. 91. The method of claim 89, wherein performing the action includes upgrading software. 92. The method of claim 89, wherein performing the action includes upgrading firmware. 93. The method of claim 89, wherein performing the action includes modifying the vehicle system configuration. 94. The method of claim 89, wherein performing the action includes retrieving data. 95. The method of claim 86, further comprising: receiving information related to mobile interactive satellite services; and outputting the received information. 96. The method of claim 95, wherein the received information is one or more from the group consisting of: video, audio, and data. 97. The method of claim 95, wherein outputting the received information includes using an output device including one or more from the group consisting of: a display, a speaker, and a light.
2,600
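The satellite-side method of claim 44 (receive multicast data and interactive data, then transmit both to mobile units) is essentially a forwarding loop: multicast content goes to every unit, interactive content only to its addressee. A minimal Python sketch; the `Satellite` class, the dict-of-lists mobile units, and all content names are illustrative assumptions, not part of the patent:

```python
# Hypothetical model of the claim-44 relay loop. Multicast content is queued
# for all mobile units; interactive content is queued per addressee.

class Satellite:
    def __init__(self):
        self.multicast_queue = []    # content from the ground-station uplink
        self.interactive_queue = []  # (unit_id, content) pairs to forward

    def receive_multicast(self, content):
        """Ground-station uplink: queue multicast content for all units."""
        self.multicast_queue.append(content)

    def receive_interactive(self, unit_id, content):
        """Queue interactive content addressed to a single mobile unit."""
        self.interactive_queue.append((unit_id, content))

    def transmit(self, mobile_units):
        """Mobile-unit downlink: multicast to everyone, interactive
        content only to its addressee; then drain both queues."""
        for inbox in mobile_units.values():
            inbox.extend(self.multicast_queue)
        for unit_id, content in self.interactive_queue:
            if unit_id in mobile_units:
                mobile_units[unit_id].append(content)
        self.multicast_queue.clear()
        self.interactive_queue.clear()

sat = Satellite()
units = {"car-1": [], "car-2": []}
sat.receive_multicast("traffic-bulletin")          # goes to both units
sat.receive_interactive("car-1", "nav-route-update")  # goes to car-1 only
sat.transmit(units)
```

The same loop covers the ancillary terrestrial case of claim 66: a base transceiver station would simply be a second object with the same `transmit` behaviour over a different link.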
9,851
9,851
15,253,915
2,652
A device for obtaining, storing and displaying information from a remote server. The device has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information.
1. A device for obtaining, storing and displaying digital video data from a first remote information server via the Internet over a Local Area Network (LAN), said device comprising: a LAN connector for connecting to the LAN; a LAN transceiver coupled to said LAN connector for bi-directional packet-based digital data communication with another LAN transceiver of the same type over the LAN; a first memory coupled to said LAN transceiver for storing digital video data received by said LAN transceiver; a video display for visually presenting information, said display being coupled to said first memory for displaying the digital video data stored in the first memory; and a single enclosure housing said LAN connector, said LAN transceiver, said first memory and said display, said single enclosure having dimensions and an appearance of a conventional flat, wall-mountable framed picture, wherein said device is addressable in the LAN, and said device is operative for communicating via the LAN with the first remote information server via the Internet for receiving digital video data from the first remote information server, and for storing and displaying digital video data obtained from the first remote information server. 2. The device according to claim 1, wherein said device is operative for automatically and periodically communicating with the first remote information server at all times when said device is in operation. 3. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the LAN. 4. The device according to claim 1, wherein said device is operative to send the digital address and a request for information, and to obtain and display digital video data from the first remote information server in response to the sent request for information. 5. 
The device according to claim 1, wherein said device is configured for wall mounting in a residential building, and the first remote information server is located outside the residential building. 6. The device according to claim 1, wherein communication over the LAN is based on IEEE802.3 standard; said LAN connector is a RJ-45 type connector; and said LAN transceiver is an Ethernet transceiver. 7. The device according to claim 1, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 8. The device according to claim 1, wherein said display is an analog video display. 9. The device according to claim 8, wherein said display is coupled to said first memory via a composite video interface, and the composite video interface is one of a PAL and an NTSC interface. 10. The device according to claim 1, wherein said device is further operative to receive and play television channels. 11. The device according to claim 1, wherein said device is further operative to receive and display High Definition (HD) video, and said video display is operative to display High Definition (HD) video. 12. The device according to claim 11, wherein the High Definition (HD) video is High Definition Television (HDTV). 13. The device according to claim 1, wherein said first memory is non-volatile. 14. The device according to claim 1, wherein said first memory is based on one of: a Flash memory; a DRAM memory; and a RAM memory. 15. The device according to claim 1, further comprising a battery, and wherein said device is operative to be at least in part powered from said battery, and said battery is a primary or rechargeable battery. 16. The device according to claim 1, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said LAN transceiver and said video display. 17. 
The device according to claim 16, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 18. The device according to claim 17, wherein the user control of operation of said device comprises at least one out of: turning said device on and off; resetting said device to default values; changing the contrast of said display; changing the brightness of said display; changing the zoom of images presented on said display; selecting a language; and selecting the information to be presented on said display. 19. The device according to claim 16, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the remote information server. 20. The device according to claim 19, wherein said at least part of a web client includes at least part of a graphical web browser. 21. The device according to claim 20, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 22. The device according to claim 1, wherein the first remote information server is organized as a website having a Uniform Resource Locator (URL) and including web pages as part of the World Wide Web (WWW), and is further identified by said device using the website URL. 23. The device according to claim 1, wherein said device is operative for communicating with a second remote information server via the Internet for receiving information therefrom, and for storing and displaying the received information from the second remote information server. 24. The device according to claim 23, wherein said device is adapted to communicate with the first and second remote information servers for obtaining selected and distinct information from each remote information server. 25. 
The device according to claim 24, wherein said device communicates with the first and second remote servers one at a time. 26. The device according to claim 1, wherein communication with the first remote information server is based on Internet protocol suite. 27. The device according to claim 26, wherein communication with the first remote information server is based on TCP/IP. 28. The device according to claim 1, wherein said device is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 29. The device according to claim 28, wherein the pre-set time of day is at least one of: set by the user; set previously in the device; and set by the remote information server in a previous communication session. 30. The device according to claim 1, wherein the information received from the first remote information server and displayed on the video display relates to a future event, a planned activity, or a forecast of a situation. 31. The device according to claim 30, wherein the information received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 32. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the LAN or on the Internet. 33. The device according to claim 32, wherein the digital address is either a MAC address or an IP address. 34. The device according to claim 1, wherein said device is further operative to store and play digital audio data. 35. 
The device according to claim 1, wherein: said device is further operative to receive and display information from a connected unit; said device further comprises a second connector coupled to said first memory for connecting said first memory to the unit; and said device is operative to receive digital data comprising information from the unit and to display the information on said video display. 36. The device according to claim 35, wherein said device is further operative to transmit digital data to the unit. 37. The device according to claim 36, wherein communication with the unit via said second connector is carried out using a serial digital data stream. 38. The device according to claim 35, further comprising: an AC power plug connectable to an AC power source; and a power supply connected to said AC power plug to be powered by power supplied by the AC power source and to provide DC power for DC powering said first memory and said video display, wherein said second connector is further coupled to said power supply for supplying DC power to the unit via said second connector. 39. The device according to claim 38, wherein: the unit is a battery operated unit having a battery; and said power supply further comprises a charger for charging the battery of the battery operated unit. 40. The device according to claim 35, wherein the unit is a handheld unit; and said device is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 41. The device according to claim 40, in combination with a cradle in which the handheld unit is detachably mounted, the handheld unit having a mating connector, and wherein said second connector is part of said cradle, and said second connector connects with the mating connector of the handheld unit when the handheld unit is mounted in said cradle. 42. The device according to claim 41, wherein the handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone. 43. 
The device according to claim 1, wherein said device is further operative as a clock for maintaining and displaying the current hour, minute and second. 44. The device according to claim 43, wherein said device is further operative to display the current year, the current month and the current day of the month. 45. The device according to claim 1, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame.
A device for obtaining, storing and displaying information from a remote server. The device has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information. 1. A device for obtaining, storing and displaying digital video data from a first remote information server via the Internet over a Local Area Network (LAN), said device comprising: a LAN connector for connecting to the LAN; a LAN transceiver coupled to said LAN connector for bi-directional packet-based digital data communication with another LAN transceiver of the same type over the LAN; a first memory coupled to said LAN transceiver for storing digital video data received by said LAN transceiver; a video display for visually presenting information, said display being coupled to said first memory for displaying the digital video data stored in the first memory; and a single enclosure housing said LAN connector, said LAN transceiver, said first memory and said display, said single enclosure having dimensions and an appearance of a conventional flat, wall-mountable framed picture, wherein said device is addressable in the LAN, and said device is operative for communicating via the LAN with the first remote information server via the Internet for receiving digital video data from the first remote information server, and for storing and displaying digital video data obtained from the first remote information server. 2. The device according to claim 1, wherein said device is operative for automatically and periodically communicating with the first remote information server at all times when said device is in operation. 3. 
The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the LAN. 4. The device according to claim 1, wherein said device is operative to send the digital address and a request for information, and to obtain and display digital video data from the first remote information server in response to the sent request for information. 5. The device according to claim 1, wherein said device is configured for wall mounting in a residential building, and the first remote information server is located outside the residential building. 6. The device according to claim 1, wherein communication over the LAN is based on IEEE802.3 standard; said LAN connector is a RJ-45 type connector; and said LAN transceiver is an Ethernet transceiver. 7. The device according to claim 1, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 8. The device according to claim 1, wherein said display is an analog video display. 9. The device according to claim 8, wherein said display is coupled to said first memory via a composite video interface, and the composite video interface is one of a PAL and an NTSC interface. 10. The device according to claim 1, wherein said device is further operative to receive and play television channels. 11. The device according to claim 1, wherein said device is further operative to receive and display High Definition (HD) video, and said video display is operative to display High Definition (HD) video. 12. The device according to claim 11, wherein the High Definition (HD) video is High Definition Television (HDTV). 13. The device according to claim 1, wherein said first memory is non-volatile. 14. The device according to claim 1, wherein said first memory is based on one of: a Flash memory; a DRAM memory; and a RAM memory. 15. 
The device according to claim 1, further comprising a battery, and wherein said device is operative to be at least in part powered from said battery, and said battery is a primary or rechargeable battery. 16. The device according to claim 1, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said LAN transceiver and said video display. 17. The device according to claim 16, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 18. The device according to claim 17, wherein the user control of operation of said device comprises at least one out of: turning said device on and off; resetting said device to default values; changing the contrast of said display; changing the brightness of said display; changing the zoom of images presented on said display; selecting a language; and selecting the information to be presented on said display. 19. The device according to claim 16, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the remote information server. 20. The device according to claim 19, wherein said at least part of a web client includes at least part of a graphical web browser. 21. The device according to claim 20, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 22. The device according to claim 1, wherein the first remote information server is organized as a website having a Uniform Resource Locator (URL) and including web pages as part of the World Wide Web (WWW), and is further identified by said device using the website URL. 23. 
The device according to claim 1, wherein said device is operative for communicating with a second remote information server via the Internet for receiving information therefrom, and for storing and displaying the received information from the second remote information server. 24. The device according to claim 23, wherein said device is adapted to communicate with the first and second remote information servers for obtaining selected and distinct information from each remote information server. 25. The device according to claim 24, wherein said device communicates with the first and second remote servers one at a time. 26. The device according to claim 1, wherein communication with the first remote information server is based on Internet protocol suite. 27. The device according to claim 26, wherein communication with the first remote information server is based on TCP/IP. 28. The device according to claim 1, wherein said device is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 29. The device according to claim 28, wherein the pre-set time of day is at least one of: set by the user; set previously in the device; and set by the remote information server in a previous communication session. 30. The device according to claim 1, wherein the information received from the first remote information server and displayed on the video display relates to a future event, a planned activity, or a forecast of a situation. 31. The device according to claim 30, wherein the information received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 32. The device according to claim 1, further comprising a second memory for storing a digital address uniquely identifying said device in the LAN or on the Internet. 33. 
The device according to claim 32, wherein the digital address is either a MAC address or an IP address. 34. The device according to claim 1, wherein said device is further operative to store and play digital audio data. 35. The device according to claim 1, wherein: said device is further operative to receive and display information from a connected unit; said device further comprises a second connector coupled to said first memory for connecting said first memory to the unit; and said device is operative to receive digital data comprising information from the unit and to display the information on said video display. 36. The device according to claim 35, wherein said device is further operative to transmit digital data to the unit. 37. The device according to claim 36, wherein communication with the unit via said second connector is carried out using a serial digital data stream. 38. The device according to claim 35, further comprising: an AC power plug connectable to an AC power source; and a power supply connected to said AC power plug to be powered by power supplied by the AC power source and to provide DC power for DC powering said first memory and said video display, wherein said second connector is further coupled to said power supply for supplying DC power to the unit via said second connector. 39. The device according to claim 38, wherein: the unit is a battery operated unit having a battery; and said power supply further comprises a charger for charging the battery of the battery operated unit. 40. The device according to claim 35, wherein the unit is a handheld unit; and said device is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 41. 
The device according to claim 40, in combination with a cradle in which the handheld unit is detachably mounted, the handheld unit having a mating connector, and wherein said second connector is part of said cradle, and said second connector connects with the mating connector of the handheld unit when the handheld unit is mounted in said cradle. 42. The device according to claim 41, wherein the handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone. 43. The device according to claim 1, wherein said device is further operative as a clock for maintaining and displaying the current hour, minute and second. 44. The device according to claim 43, wherein said device is further operative to display the current year, the current month and the current day of the month. 45. The device according to claim 1, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame.
2,600
9,852
9,852
14,155,120
2,631
A multi-standard receiver may comprise, in an electronic device, receiving an input radio frequency (RF) signal comprising at least two RF signals of different communication standards. The input RF signal may be separated into two signals based on their different communication standards, and configurable gain levels may be applied to equalize their magnitudes. The amplified signals may be combined, and the combined signals may be converted to a digital signal. The configurable gain may be applied to the two signals using variable gain amplifiers. A null may be generated at the input of at least one of the variable gain amplifiers utilizing a mixer and a filter, both configured to a desired frequency. The desired frequency may correspond to an interferer signal. The input RF signal may be separated into two signals utilizing a diplexer. The input RF signal may be received from a wired connection and/or an antenna.
1. A method for communication, the method comprising: in an electronic device: receiving an input radio frequency (RF) signal comprising at least two RF signals of different communication standards; separating the input RF signal into at least two signals based on their different communication standards; amplifying the two signals by applying configurable gain levels to equalize their magnitudes; combining the amplified signals; and converting the combined signals to a digital signal. 2. The method according to claim 1, comprising applying the configurable gain levels to the at least two signals using variable gain amplifiers. 3. The method according to claim 2, comprising generating a null at the input of at least one of the variable gain amplifiers utilizing a mixer and a filter, both configured to a desired frequency. 4. The method according to claim 3, wherein the desired frequency corresponds to an interferer signal. 5. The method according to claim 1, comprising separating the input RF signal into two signals utilizing a diplexer. 6. The method according to claim 1, comprising receiving said input RF signal from a wired connection. 7. The method according to claim 1, comprising receiving said input RF signal from an antenna. 8. A system, comprising: one or more circuits for use in an electronic device, the one or more circuits being operable to: receive an input radio frequency (RF) signal comprising at least two RF signals of different communication standards; separate the input RF signal into at least two signals based on their different communication standards; amplify the at least two signals by applying configurable gain levels to equalize their magnitudes; combine the amplified signals; and convert the combined signals to a digital signal. 9. The system according to claim 8, wherein said one or more circuits are operable to apply the configurable gain levels to the at least two signals using variable gain amplifiers. 10. 
The system according to claim 9, wherein said one or more circuits are operable to generate a null at the input of at least one of the variable gain amplifiers utilizing a mixer and a filter, both configured to a desired frequency. 11. The system according to claim 10, wherein the desired frequency corresponds to an interferer signal. 12. The system according to claim 8, wherein said one or more circuits are operable to separate the input RF signal into two signals utilizing a diplexer. 13. The system according to claim 8, wherein said one or more circuits are operable to receive said input RF signal from a wired connection. 14. The system according to claim 8, wherein said one or more circuits are operable to receive said input RF signal from an antenna. 15. A system, comprising: a radio frequency (RF) receiver implemented on a single chip, the RF receiver comprising: at least two variable gain stages with inputs that are coupled to outputs of a diplexer; a combiner coupled to outputs of the at least two variable gain stages; and an analog-to-digital converter (ADC) coupled to an output of the combiner. 16. The system according to claim 15, wherein said diplexer is off-chip. 17. The system according to claim 15, wherein said diplexer is integrated on said chip. 18. The system according to claim 15, wherein said RF receiver is operable to amplify two signals received from the diplexer to equalize their magnitudes. 19. The system according to claim 18, wherein said RF receiver is operable to combine the amplified signals. 20. The system according to claim 15, wherein said RF receiver is operable to digitize the combined signals.
A multi-standard receiver may comprise, in an electronic device, receiving an input radio frequency (RF) signal comprising at least two RF signals of different communication standards. The input RF signal may be separated into two signals based on their different communication standards, and configurable gain levels may be applied to equalize their magnitudes. The amplified signals may be combined, and the combined signals may be converted to a digital signal. The configurable gain may be applied to the two signals using variable gain amplifiers. A null may be generated at the input of at least one of the variable gain amplifiers utilizing a mixer and a filter, both configured to a desired frequency. The desired frequency may correspond to an interferer signal. The input RF signal may be separated into two signals utilizing a diplexer. The input RF signal may be received from a wired connection and/or an antenna.1. A method for communication, the method comprising: in an electronic device: receiving an input radio frequency (RF) signal comprising at least two RF signals of different communication standards; separating the input RF signal into at least two signals based on their different communication standards; amplifying the two signals by applying configurable gain levels to equalize their magnitudes; combining the amplified signals; and converting the combined signals to a digital signal. 2. The method according to claim 1, comprising applying the configurable gain levels to the at least two signals using variable gain amplifiers. 3. The method according to claim 2, comprising generating a null at the input of at least one of the variable gain amplifiers utilizing a mixer and a filter, both configured to a desired frequency. 4. The method according to claim 3, wherein the desired frequency corresponds to an interferer signal. 5. The method according to claim 1, comprising separating the input RF signal into two signals utilizing a diplexer. 6. 
The method according to claim 1, comprising receiving said input RF signal from a wired connection. 7. The method according to claim 1, comprising receiving said input RF signal from an antenna. 8. A system, comprising: one or more circuits for use in an electronic device, the one or more circuits being operable to: receive an input radio frequency (RF) signal comprising at least two RF signals of different communication standards; separate the input RF signal into at least two signals based on their different communication standards; amplify the at least two signals by applying configurable gain levels to equalize their magnitudes; combine the amplified signals; and convert the combined signals to a digital signal. 9. The system according to claim 8, wherein said one or more circuits are operable to apply the configurable gain levels to the at least two signals using variable gain amplifiers. 10. The system according to claim 9, wherein said one or more circuits are operable to generate a null at the input of at least one of the variable gain amplifiers utilizing a mixer and a filter, both configured to a desired frequency. 11. The system according to claim 10, wherein the desired frequency corresponds to an interferer signal. 12. The system according to claim 8, wherein said one or more circuits are operable to separate the input RF signal into two signals utilizing a diplexer. 13. The system according to claim 8, wherein said one or more circuits are operable to receive said input RF signal from a wired connection. 14. The system according to claim 8, wherein said one or more circuits are operable to receive said input RF signal from an antenna. 15. 
A system, comprising: a radio frequency (RF) receiver implemented on a single chip, the RF receiver comprising: at least two variable gain stages with inputs that are coupled to outputs of a diplexer; a combiner coupled to outputs of the at least two variable gain stages; and an analog-to-digital converter (ADC) coupled to an output of the combiner. 16. The system according to claim 15, wherein said diplexer is off-chip. 17. The system according to claim 15, wherein said diplexer is integrated on said chip. 18. The system according to claim 15, wherein said RF receiver is operable to amplify two signals received from the diplexer to equalize their magnitudes. 19. The system according to claim 18, wherein said RF receiver is operable to combine the amplified signals. 20. The system according to claim 15, wherein said RF receiver is operable to digitize the combined signals.
2,600
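The receiver record above claims a chain of separating a composite RF input, applying configurable gains so the two bands reach equal magnitudes, combining the amplified bands, and digitizing the result. A minimal numeric sketch of that gain-equalize/combine/quantize chain (all names, amplitudes, and ADC parameters are illustrative assumptions, not taken from the claims):

```python
import math

def rms(samples):
    """Root-mean-square magnitude of a sample block."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def equalize_and_combine(band_a, band_b, target_rms=1.0):
    """Apply a per-band configurable gain so each separated band
    reaches the same RMS magnitude, then sum the amplified bands
    (the claimed combiner stage)."""
    gained_a = [s * target_rms / rms(band_a) for s in band_a]
    gained_b = [s * target_rms / rms(band_b) for s in band_b]
    return [a + b for a, b in zip(gained_a, gained_b)]

def adc(samples, bits=8, full_scale=4.0):
    """Uniform quantizer standing in for the analog-to-digital
    converter at the end of the chain."""
    step = 2.0 * full_scale / (1 << bits)
    return [round(s / step) for s in samples]

# Two tones of very different strength, e.g. the two bands a
# diplexer would separate from a composite wired/antenna input.
n = 256
strong = [2.0 * math.sin(2 * math.pi * 10 * i / n) for i in range(n)]
weak = [0.01 * math.sin(2 * math.pi * 25 * i / n) for i in range(n)]
combined = equalize_and_combine(strong, weak)
digital = adc(combined)
```

After equalization each band contributes unit RMS, so the 200:1 amplitude disparity between the inputs no longer dominates the combined signal presented to the ADC.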
9,853
9,853
14,161,048
2,621
An aspect provides a method, including: accepting, at a writing input surface of an information handling device, user handwriting inputs to a note taking application; determining, using a processor, contextual information related to the user handwriting inputs to the note taking application; creating, using a processor, an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and storing, in a memory accessible to the information handling device, the association. Other aspects are described and claimed.
1. A method, comprising: accepting, at a writing input surface of an information handling device, user handwriting inputs to a note taking application; determining, using a processor, contextual information related to the user handwriting inputs to the note taking application; creating, using a processor, an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and storing, in a memory accessible to the information handling device, the association. 2. The method of claim 1, wherein the at least one content portion of the user handwriting inputs comprises one or more keywords. 3. The method of claim 1, wherein the contextual information is selected from the group consisting of audio data and device calendar data. 4. The method of claim 1, wherein the creating an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming an association between one or more keywords of the user handwriting input and one or more keywords derived from contextual information. 5. The method of claim 1, wherein the creating an association between at least a content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming a plurality of associations between keywords of the user handwriting input and keywords derived from contextual information. 6. The method of claim 5, wherein the plurality of associations are organized as a timeline. 7. 
The method of claim 1, further comprising: accepting a user search query; accessing a store of associations between at least a content portion of the user handwriting inputs and at least a portion of the contextual information; searching the store of associations using one or more keywords derived from the query; identifying one or more user handwriting inputs associated with contextual information matching the one or more keywords of the query; and returning a query result based on said identifying. 8. The method of claim 1, wherein the determining contextual information related to the user handwriting inputs to the note taking application comprises accessing audio data of a speaker associated in time with the user handwriting inputs to the note taking application. 9. The method of claim 8, further comprising: utilizing the audio data of the speaker associated in time with the user handwriting inputs to the note taking application to identify the speaker; and providing a representation of the speaker on a display. 10. The method of claim 9, wherein the representation comprises a graphical illustration provided in a note taking application while user input is accepted by the note taking application. 11. An information handling device, comprising: a writing input surface; a processor operatively coupled to the writing input surface; a memory device that stores instructions accessible to the processor, the instructions being executable by the processor to: accept, at the writing input surface, user handwriting inputs to a note taking application; determine contextual information related to the user handwriting inputs to the note taking application; create an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and store the association. 12. 
The information handling device of claim 11, wherein the at least one content portion of the user handwriting inputs comprises one or more keywords. 13. The information handling device of claim 11, wherein the contextual information is selected from the group consisting of audio data and device calendar data. 14. The information handling device of claim 11, wherein to create an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming an association between one or more keywords of the user handwriting input and one or more keywords derived from contextual information. 15. The information handling device of claim 11, wherein to create an association between at least a content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming a plurality of associations between keywords of the user handwriting input and keywords derived from contextual information. 16. The information handling device of claim 15, wherein the plurality of associations are organized as a timeline. 17. The information handling device of claim 11, wherein the instructions are further executable by the processor to: accept a user search query; access a store of associations between at least a content portion of the user handwriting inputs and at least a portion of the contextual information; search the store of associations using one or more keywords derived from the query; identify one or more user handwriting inputs associated with contextual information matching the one or more keywords of the query; and return a query result based on said identifying. 18. The information handling device of claim 11, wherein to determine contextual information related to the user handwriting inputs to the note taking application comprises accessing audio data of a speaker associated in time with the user handwriting inputs to the note taking application. 19. 
The information handling device of claim 18, wherein the instructions are further executable by the processor to: utilize the audio data of the speaker associated in time with the user handwriting inputs to the note taking application to identify the speaker; and provide a representation of the speaker on a display. 20. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that accepts, at a writing input surface of an information handling device, user handwriting inputs to a note taking application; code that determines contextual information related to the user handwriting inputs to the note taking application; code that creates an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and code that stores the association.
An aspect provides a method, including: accepting, at a writing input surface of an information handling device, user handwriting inputs to a note taking application; determining, using a processor, contextual information related to the user handwriting inputs to the note taking application; creating, using a processor, an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and storing, in a memory accessible to the information handling device, the association. Other aspects are described and claimed.1. A method, comprising: accepting, at a writing input surface of an information handling device, user handwriting inputs to a note taking application; determining, using a processor, contextual information related to the user handwriting inputs to the note taking application; creating, using a processor, an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and storing, in a memory accessible to the information handling device, the association. 2. The method of claim 1, wherein the at least one content portion of the user handwriting inputs comprises one or more keywords. 3. The method of claim 1, wherein the contextual information is selected from the group consisting of audio data and device calendar data. 4. The method of claim 1, wherein the creating an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming an association between one or more keywords of the user handwriting input and one or more keywords derived from contextual information. 5. 
The method of claim 1, wherein the creating an association between at least a content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming a plurality of associations between keywords of the user handwriting input and keywords derived from contextual information. 6. The method of claim 5, wherein the plurality of associations are organized as a timeline. 7. The method of claim 1, further comprising: accepting a user search query; accessing a store of associations between at least a content portion of the user handwriting inputs and at least a portion of the contextual information; searching the store of associations using one or more keywords derived from the query; identifying one or more user handwriting inputs associated with contextual information matching the one or more keywords of the query; and returning a query result based on said identifying. 8. The method of claim 1, wherein the determining contextual information related to the user handwriting inputs to the note taking application comprises accessing audio data of a speaker associated in time with the user handwriting inputs to the note taking application. 9. The method of claim 8, further comprising: utilizing the audio data of the speaker associated in time with the user handwriting inputs to the note taking application to identify the speaker; and providing a representation of the speaker on a display. 10. The method of claim 9, wherein the representation comprises a graphical illustration provided in a note taking application while user input is accepted by the note taking application. 11. 
An information handling device, comprising: a writing input surface; a processor operatively coupled to the writing input surface; a memory device that stores instructions accessible to the processor, the instructions being executable by the processor to: accept, at the writing input surface, user handwriting inputs to a note taking application; determine contextual information related to the user handwriting inputs to the note taking application; create an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and store the association. 12. The information handling device of claim 11, wherein the at least one content portion of the user handwriting inputs comprises one or more keywords. 13. The information handling device of claim 11, wherein the contextual information is selected from the group consisting of audio data and device calendar data. 14. The information handling device of claim 11, wherein to create an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming an association between one or more keywords of the user handwriting input and one or more keywords derived from contextual information. 15. The information handling device of claim 11, wherein to create an association between at least a content portion of the user handwriting inputs and at least a portion of the contextual information comprises forming a plurality of associations between keywords of the user handwriting input and keywords derived from contextual information. 16. The information handling device of claim 15, wherein the plurality of associations are organized as a timeline. 17. 
The information handling device of claim 11, wherein the instructions are further executable by the processor to: accept a user search query; access a store of associations between at least a content portion of the user handwriting inputs and at least a portion of the contextual information; search the store of associations using one or more keywords derived from the query; identify one or more user handwriting inputs associated with contextual information matching the one or more keywords of the query; and return a query result based on said identifying. 18. The information handling device of claim 11, wherein to determine contextual information related to the user handwriting inputs to the note taking application comprises accessing audio data of a speaker associated in time with the user handwriting inputs to the note taking application. 19. The information handling device of claim 18, wherein the instructions are further executable by the processor to: utilize the audio data of the speaker associated in time with the user handwriting inputs to the note taking application to identify the speaker; and provide a representation of the speaker on a display. 20. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that accepts, at a writing input surface of an information handling device, user handwriting inputs to a note taking application; code that determines contextual information related to the user handwriting inputs to the note taking application; code that creates an association between at least one content portion of the user handwriting inputs and at least a portion of the contextual information; and code that stores the association.
2,600
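The note-taking record above claims associating keywords from handwriting input with keywords derived from contextual information (audio or device calendar data), then answering a later search query against those associations. A minimal sketch of such an association store (the class name, keyword examples, and case-folding behavior are illustrative assumptions, not from the claims):

```python
from collections import defaultdict

class AssociationStore:
    """Links keywords taken from handwriting input to keywords
    derived from contextual information, and answers keyword
    queries by matching against the contextual keywords."""

    def __init__(self):
        self._notes_by_context = defaultdict(set)

    def associate(self, note_keywords, context_keywords):
        """Create an association between content portions of the
        handwriting input and portions of the context."""
        for ctx in context_keywords:
            self._notes_by_context[ctx.lower()].update(
                k.lower() for k in note_keywords)

    def search(self, query_keywords):
        """Return handwriting keywords whose associated contextual
        information matches any keyword derived from the query."""
        hits = set()
        for q in query_keywords:
            hits |= self._notes_by_context.get(q.lower(), set())
        return sorted(hits)

# Notes jotted during a meeting, associated with calendar context.
store = AssociationStore()
store.associate(["budget", "roadmap"], ["Q3 planning", "meeting"])
```

A query such as "meeting" would then recover the handwriting keywords captured while that calendar entry was active, which is the retrieval path the search-query claims describe.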
9,854
9,854
15,255,476
2,672
An information terminal device which is an image receiving device receiving image data from a multifunction device which is an image processing device transmits setting information such as resolution, color/monochrome setting and data format for generating image data to be received, and destination information of the information terminal device itself to the multifunction device, as a profile. The multifunction device receives the profile, generates image data of an image read based on the setting information of the received profile when scanning is executed, and automatically transmits the image data to the information terminal device based on the destination information of the profile.
1-6. (canceled) 7. An image processing system comprising: one or more image processing devices having a communication unit and an image reading unit that generates image data, and transmitting the generated image data through the communication unit; and an image receiving device having a communication unit, and receiving the generated image data transmitted by the communication unit of the one or more image processing devices, wherein the image receiving device includes: a control unit that executes: a step of obtaining destination information of the image receiving device; a step of obtaining setting information settable when image data to be received is generated; a step of transmitting, through the communication unit of the image receiving device, the destination information and the setting information to any one of the image processing devices; and a step of transmitting to the any one of the image processing devices, through the communication unit of the image receiving device, a request for deleting the transmitted destination information, and the one or more image processing devices including a control unit that executes: a step of receiving, through the communication unit of the one or more image processing devices, the destination information and setting information transmitted from the image receiving device; a step of generating, by the image reading unit, image data based on the received setting information; and a step of transmitting, through the communication unit of the one or more image processing devices, the generated image data to a destination indicated by the received destination information, and a storage unit that stores the received destination information, and wherein the control unit of the image processing device further executes: a step of receiving, through the communication unit, the request transmitted from the image receiving device; and a step of deleting or prohibiting reading of the destination information from the storage unit 
when receiving the request; and wherein the functions performed by the communication unit, image reading unit, control unit, and storage unit are achieved using a CPU. 8. The image processing system according to claim 7, comprising a plurality of the image processing devices, wherein the control unit of the image receiving device further executes: a step of searching for an image processing device of the one or more image processing devices capable of communicating through the communication unit and of generating image data based on the setting information; a step of outputting, by an input/output unit, a result of searching; and a step of accepting a selection of the outputted image processing device from the result of searching. 9. (canceled) 10. The image processing system according to claim 7, wherein the control unit of the one or more image processing devices further executes a step of deleting or prohibiting reading of the destination information from the storage unit after a period of time has elapsed since the image data is transmitted to a destination indicated by the destination information. 11. An image receiving device that includes a communication unit and receives image data through the communication unit, comprising: a control unit that executes a step of obtaining destination information of the image receiving device; a step of obtaining setting information settable when image data to be received by the image receiving device is generated; a step of transmitting, through the communication unit, the destination information and the setting information; and a step of transmitting, through the communication unit, a request for deleting the transmitted destination information; and wherein the functions performed by the communication unit and control unit are achieved using a CPU. 12. 
The image receiving device according to claim 11, wherein the control unit further executes: a step of searching for a device capable of communicating through the communication unit and of generating image data based on the obtained setting information; a step of outputting, by an input/output unit, a result of searching; and a step of accepting a selection of a device from the output result of searching. 13. An image processing device that includes a communication unit and an image reading unit that generates image data, and transmits the generated image data through the communication unit, comprising a control unit that executes: a step of receiving, through the communication unit, destination information for image data and setting information settable when the image data is generated; a step of generating, by the image reading unit, image data based on the received setting information; and a step of transmitting, through the communication unit, the generated image data to a destination indicated by the received destination information, and a storage unit that stores the received destination information, and wherein the control unit further executes: a step of receiving, through the communication unit, a request for deleting the received destination information; and a step of deleting or prohibiting reading of the destination information from the storage unit when receiving the request; and wherein the functions performed by the communication unit, image reading unit, control unit, and storage unit are achieved using a CPU. 14. 
An image processing method in which one or more image processing devices having a communication unit and an image reading unit that generates image data transmits the generated image data through the communication unit, and an image receiving device having a communication unit receives the generated image data transmitted from the image processing device through the communication unit, comprising: a step of obtaining, by the image receiving device, destination information of the image receiving device; a step of obtaining, by the image receiving device, setting information settable when image data to be received is generated; a step of transmitting, through the communication unit of the image receiving device, the destination information and the setting information to any one of the image processing devices; a step of receiving, through the communication unit of the any one of the image processing devices, the destination information and setting information transmitted from the image receiving device, a step of generating, by the image reading unit of the any one of the image processing devices, image data based on the received setting information; a step of transmitting, through the communication unit of the image processing device, the generated image data to a destination indicated by the received destination information; a step of transmitting to the any one of the image processing devices, through the communication unit of the image receiving device, a request for deleting the destination information transmitted to the image processing device; a step of storing, by the any one of the image processing devices, the received destination information in a storage unit; a step of receiving, through the communication unit of the any one of the image processing devices, the request transmitted from the image receiving device; and a step of deleting or prohibiting reading of the destination information from the storage unit when receiving the request by the any one of 
the image processing devices.
An information terminal device which is an image receiving device receiving image data from a multifunction device which is an image processing device transmits setting information such as resolution, color/monochrome setting and data format for generating image data to be received, and destination information of the information terminal device itself to the multifunction device, as a profile. The multifunction device receives the profile, generates image data of an image read based on the setting information of the received profile when scanning is executed, and automatically transmits the image data to the information terminal device based on the destination information of the profile.1-6. (canceled) 7. An image processing system comprising: one or more image processing devices having a communication unit and an image reading unit that generates image data, and transmitting the generated image data through the communication unit; and an image receiving device having a communication unit, and receiving the generated image data transmitted by the communication unit of the one or more image processing devices, wherein the image receiving device includes: a control unit that executes: a step of obtaining destination information of the image receiving device; a step of obtaining setting information settable when image data to be received is generated; a step of transmitting, through the communication unit of the image receiving device, the destination information and the setting information to any one of the image processing devices; and a step of transmitting to the any one of the image processing devices, through the communication unit of the image receiving device, a request for deleting the transmitted destination information, and the one or more image processing devices including a control unit that executes: a step of receiving, through the communication unit of the one or more image processing devices, the destination information and setting information 
transmitted from the image receiving device; a step of generating, by the image reading unit, image data based on the received setting information; and a step of transmitting, through the communication unit of the one or more image processing devices, the generated image data to a destination indicated by the received destination information, and a storage unit that stores the received destination information, and wherein the control unit of the image processing device further executes: a step of receiving, through the communication unit, the request transmitted from the image receiving device; and a step of deleting or prohibiting reading of the destination information from the storage unit when receiving the request; and wherein the functions performed by the communication unit, image reading unit, control unit, and storage unit are achieved using a CPU. 8. The image processing system according to claim 7, comprising a plurality of the image processing devices, wherein the control unit of the image receiving device further executes: a step of searching for an image processing device of the one or more image processing devices capable of communicating through the communication unit and of generating image data based on the setting information; a step of outputting, by an input/output unit, a result of searching; and a step of accepting a selection of the outputted image processing device from the result of searching. 9. (canceled) 10. The image processing system according to claim 7, wherein the control unit of the one or more image processing devices further executes a step of deleting or prohibiting reading of the destination information from the storage unit after a period of time has elapsed since the image data is transmitted to a destination indicated by the destination information. 11. 
An image receiving device that includes a communication unit and receives image data through the communication unit, comprising: a control unit that executes a step of obtaining destination information of the image receiving device; a step of obtaining setting information settable when image data to be received by the image receiving device is generated; a step of transmitting, through the communication unit, the destination information and the setting information; and a step of transmitting, through the communication unit, a request for deleting the transmitted destination information; and wherein the functions performed by the communication unit and control unit are achieved using a CPU. 12. The image receiving device according to claim 11, wherein the control unit further executes: a step of searching for a device capable of communicating through the communication unit and of generating image data based on the obtained setting information; a step of outputting, by an input/output unit, a result of searching; and a step of accepting a selection of a device from the output result of searching. 13. 
An image processing device that includes a communication unit and an image reading unit that generates image data, and transmits the generated image data through the communication unit, comprising a control unit that executes: a step of receiving, through the communication unit, destination information for image data and setting information settable when the image data is generated; a step of generating, by the image reading unit, image data based on the received setting information; and a step of transmitting, through the communication unit, the generated image data to a destination indicated by the received destination information, and a storage unit that stores the received destination information, and wherein the control unit further executes: a step of receiving, through the communication unit, a request for deleting the received destination information; and a step of deleting or prohibiting reading of the destination information from the storage unit when receiving the request; and wherein the functions performed by the communication unit, image reading unit, control unit, and storage unit are achieved using a CPU. 14. 
An image processing method in which one or more image processing devices having a communication unit and an image reading unit that generates image data transmits the generated image data through the communication unit, and an image receiving device having a communication unit receives the generated image data transmitted from the image processing device through the communication unit, comprising: a step of obtaining, by the image receiving device, destination information of the image receiving device; a step of obtaining, by the image receiving device, setting information settable when image data to be received is generated; a step of transmitting, through the communication unit of the image receiving device, the destination information and the setting information to any one of the image processing devices; a step of receiving, through the communication unit of the any one of the image processing devices, the destination information and setting information transmitted from the image receiving device, a step of generating, by the image reading unit of the any one of the image processing devices, image data based on the received setting information; a step of transmitting, through the communication unit of the image processing device, the generated image data to a destination indicated by the received destination information; a step of transmitting to the any one of the image processing devices, through the communication unit of the image receiving device, a request for deleting the destination information transmitted to the image processing device; a step of storing, by the any one of the image processing devices, the received destination information in a storage unit; a step of receiving, through the communication unit of the any one of the image processing devices, the request transmitted from the image receiving device; and a step of deleting or prohibiting reading of the destination information from the storage unit when receiving the request by the any one of 
the image processing devices.
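The profile workflow in the claims above (an image receiving device registers destination and setting information with an image processing device, which scans using those settings, transmits the image data to the stored destination, and deletes the destination on request) can be sketched as a minimal simulation. All class, method, and field names below are illustrative assumptions, not terms from the claims.

```python
# Minimal sketch of the claimed profile workflow: the receiving device
# registers a profile (destination + settings), the processing device
# scans using those settings, transmits the result to the stored
# destination, and deletes the destination when asked.

class ImageProcessingDevice:
    def __init__(self):
        self.profiles = {}   # destination -> setting information (storage unit)
        self.sent = []       # (destination, image data) pairs transmitted

    def receive_profile(self, destination, settings):
        self.profiles[destination] = settings

    def execute_scan(self, destination):
        settings = self.profiles[destination]
        # "Generate image data based on the received setting information."
        image_data = f"scan@{settings['resolution']}dpi/{settings['color']}"
        self.sent.append((destination, image_data))
        return image_data

    def delete_destination(self, destination):
        # Delete (or prohibit reading of) the stored destination information.
        self.profiles.pop(destination, None)


class ImageReceivingDevice:
    def __init__(self, address):
        self.address = address

    def register(self, device, settings):
        device.receive_profile(self.address, settings)

    def request_deletion(self, device):
        device.delete_destination(self.address)


mfp = ImageProcessingDevice()
terminal = ImageReceivingDevice("terminal-01")
terminal.register(mfp, {"resolution": 300, "color": "monochrome"})
data = mfp.execute_scan("terminal-01")
terminal.request_deletion(mfp)
```

After the deletion request, the processing device no longer holds the terminal's destination, which mirrors the deletion/prohibition step recited in claims 7, 13, and 14.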
2,600
9,855
9,855
14,771,998
2,636
Embodiments disclosed herein provide a hybrid fiber-copper access network in which a main OLT sends data to the DSLAMs via a plurality of point-to-point optical fiber connections. A standby OLT is provided which has a plurality of point-to-multi-point optical fiber connections to the DSLAMs. In the event of a failure, data can be sent to some of the DSLAMs via the standby OLT and the point-to-multi-point optical fiber connections. Following the rectification of the fault, the network can revert to its normal state and transmit data to the DSLAMs via the main OLT and the plurality of point-to-point optical fiber connections.
1. A communications network comprising: a main primary network node connected to a plurality of secondary network nodes via a plurality of point to point optical fiber connections; and a standby primary network node connected to the plurality of secondary network nodes via a plurality of point to multi-point optical fiber connections. 2. A communications network according to claim 1, wherein the plurality of point to multi-point optical fiber connections comprises a passive optical network (PON). 3. A communications network according to claim 2, wherein the PON comprises a primary optical splitter co-located with a PON optical line terminal (OLT). 4. A communications network according to claim 2, wherein the PON comprises a primary optical splitter co-located with one of the plurality of secondary network nodes. 5. A communications network according to claim 3, wherein the primary optical splitter and the PON OLT comprise a mode coupling receiver OLT. 6. A communications network according to claim 2, wherein the PON further comprises one or more secondary optical splitters. 7. A communications network according to claim 1, wherein the plurality of secondary network nodes are further connected to a plurality of metallic communications links. 8. A communications network according to claim 1, wherein the plurality of secondary network nodes each comprise a digital subscriber line add/drop multiplexer. 9. A communications network according to claim 1, wherein the network comprises a fiber to the cabinet network architecture. 10. A communications network according to claim 1, wherein, in use, the network is operated by transmitting data to the plurality of secondary network nodes via the main primary network node and transmitting data to one or more of the plurality of secondary network nodes via the standby primary network node in the event that a failure event is detected. 11. 
A method of operating a communications network, the method comprising: in a normal operating mode, transmitting data from a main primary network node to a plurality of secondary network nodes via a plurality of point to point optical fiber connections; and if a fault condition is detected, switching to a back-up operating mode in which data is transmitted from a standby primary network node to one or more of the plurality of secondary network nodes via a plurality of point to multi-point optical fiber connections. 12. A method according to claim 11, the method further comprising: following a rectification of the fault condition, reverting to the normal operating mode such that data is transmitted from the main primary network node to each of the plurality of secondary network nodes via the plurality of point to point optical fiber connections.
Embodiments disclosed herein provide a hybrid fiber-copper access network in which a main OLT sends data to the DSLAMs via a plurality of point-to-point optical fiber connections. A standby OLT is provided which has a plurality of point-to-multi-point optical fiber connections to the DSLAMs. In the event of a failure, data can be sent to some of the DSLAMs via the standby OLT and the point-to-multi-point optical fiber connections. Following the rectification of the fault, the network can revert to its normal state and transmit data to the DSLAMs via the main OLT and the plurality of point-to-point optical fiber connections.1. A communications network comprising: a main primary network node connected to a plurality of secondary network nodes via a plurality of point to point optical fiber connections; and a standby primary network node connected to the plurality of secondary network nodes via a plurality of point to multi-point optical fiber connections. 2. A communications network according to claim 1, wherein the plurality of point to multi-point optical fiber connections comprises a passive optical network (PON). 3. A communications network according to claim 2, wherein the PON comprises a primary optical splitter co-located with a PON optical line terminal (OLT). 4. A communications network according to claim 2, wherein the PON comprises a primary optical splitter co-located with one of the plurality of secondary network nodes. 5. A communications network according to claim 3, wherein the primary optical splitter and the PON OLT comprise a mode coupling receiver OLT. 6. A communications network according to claim 2, wherein the PON further comprises one or more secondary optical splitters. 7. A communications network according to claim 1, wherein the plurality of secondary network nodes are further connected to a plurality of metallic communications links. 8. 
A communications network according to claim 1, wherein the plurality of secondary network nodes each comprise a digital subscriber line add/drop multiplexer. 9. A communications network according to claim 1, wherein the network comprises a fiber to the cabinet network architecture. 10. A communications network according to claim 1, wherein, in use, the network is operated by transmitting data to the plurality of secondary network nodes via the main primary network node and transmitting data to one or more of the plurality of secondary network nodes via the standby primary network node in the event that a failure event is detected. 11. A method of operating a communications network, the method comprising: in a normal operating mode, transmitting data from a main primary network node to a plurality of secondary network nodes via a plurality of point to point optical fiber connections; and if a fault condition is detected, switching to a back-up operating mode in which data is transmitted from a standby primary network node to one or more of the plurality of secondary network nodes via a plurality of point to multi-point optical fiber connections. 12. A method according to claim 11, the method further comprising: following a rectification of the fault condition, reverting to the normal operating mode such that data is transmitted from the main primary network node to each of the plurality of secondary network nodes via the plurality of point to point optical fiber connections.
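The failover behaviour recited in method claims 11 and 12 (point-to-point delivery via the main primary node in normal operation, switch to the standby node's point-to-multipoint connections on a fault, and reversion after rectification) can be sketched as a small state machine. The node and link labels are illustrative assumptions.

```python
# Sketch of the failover in claims 11-12: data normally flows from the
# main primary node over point-to-point fibre; on a fault it is routed
# via the standby node's point-to-multipoint (PON) connections, and the
# network reverts to normal once the fault is rectified.

class AccessNetwork:
    def __init__(self, secondary_nodes):
        self.secondary_nodes = secondary_nodes
        self.mode = "normal"

    def detect_fault(self):
        self.mode = "backup"

    def rectify_fault(self):
        self.mode = "normal"

    def route(self, node):
        """Return (source node, link type) used to reach a secondary node."""
        if self.mode == "normal":
            return ("main-OLT", "point-to-point")
        return ("standby-OLT", "point-to-multipoint")


net = AccessNetwork(["DSLAM-1", "DSLAM-2"])
normal_path = net.route("DSLAM-1")
net.detect_fault()
backup_path = net.route("DSLAM-1")
net.rectify_fault()
restored_path = net.route("DSLAM-1")
```

The sketch keeps a single mode flag for the whole network; a real deployment would track faults per link, but the claim language only distinguishes the normal and back-up operating modes.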
2,600
9,856
9,856
14,735,960
2,619
A method for generating a user-customized computer-generated animation includes receiving digital content and determining a modifiable portion of the digital content. The digital content includes a computer-generated animation. A design template is received, where the design template includes a representation of the modifiable portion of the digital content. Template image data is generated by performing image analysis on the representation of the modifiable portion of the digital content. A revised portion of the digital content is generated based on the template image data, where the revised portion is a revised version of the modifiable portion of the digital content. Updated digital content that includes a version of the computer-generated animation with the revised portion of the digital content is generated and displayed.
1. A method for generating a user-customized computer-generated animation, the method comprising: receiving digital content including a rendered video of a computer-generated animation; determining a modifiable portion of the digital content, wherein the digital content includes texture and shading data of the modifiable portion; receiving a design template, wherein the design template includes a representation of the modifiable portion of the digital content; generating template image data by performing image analysis on the representation of the modifiable portion of the digital content; generating a revised portion of the digital content based on the texture and shading data of the modifiable portion and the template image data, wherein the revised portion is a revised version of the modifiable portion of the digital content; generating an updated version of the video of the computer-generated animation, wherein the updated video comprises a version of the computer-generated animation including the revised portion of the digital content; and causing a display of the updated video. 2. The method of claim 1, further comprising: before receiving the design template, providing the design template for editing, wherein the received design template is an edited version of the provided design template. 3. The method of claim 2, wherein providing the design template comprises providing a user interface including the design template, and wherein the method further comprises: receiving data representing a user input on the user interface; and editing the design template in accordance with the user input. 4. The method of claim 1, wherein generating the template image data comprises determining a color profile of the representation of the modifiable portion of the digital content. 5. 
The method of claim 4, wherein generating the revised portion of the digital content comprises determining a color profile of the revised portion based on the color profile of the representation of the modifiable portion. 6. The method of claim 4, wherein generating the revised portion of the digital content comprises rendering the revised portion based on the color profile of the representation of the modifiable portion of the digital content. 7. The method of claim 1, wherein generating the updated version of the video of the computer-generated animation comprises mapping the revised version onto the modifiable portion of the digital content. 8. The method of claim 1, wherein generating the updated version of the video of the computer-generated animation comprises rendering the digital content using the revised portion. 9. The method of claim 1, further comprising: generating a version of the digital content without the modifiable portion, wherein generating the updated version of the video of the computer-generated animation includes overlaying the revised portion on the version of the digital content without the modifiable portion. 10. 
A system for generating a user-customized computer-generated animation, the system comprising: a processing unit and memory, wherein the processing unit is configured to: receive digital content including a rendered video of a computer-generated animation; determine a modifiable portion of the digital content, wherein the digital content includes texture and shading data of the modifiable portion; receive a design template, wherein the design template includes a representation of the modifiable portion of the digital content; generate template image data by performing image analysis on the representation of the modifiable portion of the digital content; generate a revised portion of the digital content based on the texture and shading data of the modifiable portion and template image data, wherein the revised portion is a revised version of the modifiable portion of the digital content; generate an updated version of the video of the computer-generated animation, wherein the updated video comprises a version of the computer-generated animation including the revised portion of the digital content; and cause a display of the updated video. 11. The system of claim 10, wherein the processing unit is further configured to: before receiving the design template, provide the design template for editing, wherein the design template received by the processing unit is an edited version of the provided design template. 12. The system of claim 11, wherein providing the design template comprises providing a user interface including the design template, and wherein the processing unit is further configured to: receive data representing a user input on the user interface; and edit the design template in accordance with the user input. 13. The system of claim 10, wherein generating the template image data comprises determining a color profile of the representation of the modifiable portion of the digital content. 14. 
The system of claim 13, wherein generating the revised portion of the digital content comprises determining a color profile of the revised portion based on the color profile of the representation of the modifiable portion. 15. The system of claim 13, wherein generating the revised portion of the digital content comprises rendering the revised portion based on the color profile of the representation of the modifiable portion of the digital content. 16. The system of claim 10, wherein generating the updated version of the video of the computer-generated animation comprises mapping the revised version onto the modifiable portion of the digital content. 17. The system of claim 10, wherein generating the updated version of the video of the computer-generated animation comprises rendering the digital content using the revised portion. 18. The system of claim 10, wherein the processing unit is further configured to: generate a version of the digital content without the modifiable portion, wherein generating the updated version of the video of the computer-generated animation includes overlaying the revised portion on the version of the digital content without the modifiable portion. 19. 
A non-transitory computer-readable storage medium comprising computer-executable instructions for generating a user-customized computer-generated animation, the computer-executable instructions comprising instructions for: receiving digital content including a rendered video of a computer-generated animation; determining a modifiable portion of the digital content, wherein the digital content includes texture and shading data of the modifiable portion; receiving a design template, wherein the design template includes a representation of the modifiable portion of the digital content; generating template image data by performing image analysis on the representation of the modifiable portion of the digital content; generating a revised portion of the digital content based on the texture and shading data of the modifiable portion and template image data, wherein the revised portion is a revised version of the modifiable portion of the digital content; generating an updated version of the video of the computer-generated animation, wherein the updated video comprises a version of the computer-generated animation including the revised portion of the digital content; and causing a display of the updated video. 20. The computer-readable storage medium of claim 19, further comprising instructions for: before receiving the design template, providing the design template for editing, wherein the received design template is an edited version of the provided design template. 21. The computer-readable storage medium of claim 20, wherein providing the design template comprises providing a user interface including the design template, and wherein the computer-readable storage medium further comprises instructions for: receiving data representing a user input on the user interface; and editing the design template in accordance with the user input. 22. 
The computer-readable storage medium of claim 19, wherein generating the template image data comprises determining a color profile of the representation of the modifiable portion of the digital content. 23. The computer-readable storage medium of claim 22, wherein generating the revised portion of the digital content comprises determining a color profile of the revised portion based on the color profile of the representation of the modifiable portion. 24. The computer-readable storage medium of claim 22, wherein generating the revised portion of the digital content comprises rendering the revised portion based on the color profile of the representation of the modifiable portion of the digital content. 25. The computer-readable storage medium of claim 19, wherein generating the updated version of the video of the computer-generated animation comprises mapping the revised version onto the modifiable portion of the digital content. 26. The computer-readable storage medium of claim 19, wherein generating the updated version of the video of the computer-generated animation comprises rendering the digital content using the revised portion. 27. The computer-readable storage medium of claim 19, further comprising instructions for: generating a version of the digital content without the modifiable portion, wherein generating the updated version of the video of the computer-generated animation includes overlaying the revised portion on the version of the digital content without the modifiable portion.
A method for generating a user-customized computer-generated animation includes receiving digital content and determining a modifiable portion of the digital content. The digital content includes a computer-generated animation. A design template is received, where the design template includes a representation of the modifiable portion of the digital content. Template image data is generated by performing image analysis on the representation of the modifiable portion of the digital content. A revised portion of the digital content is generated based on the template image data, where the revised portion is a revised version of the modifiable portion of the digital content. Updated digital content that includes a version of the computer-generated animation with the revised portion of the digital content is generated and displayed.1. A method for generating a user-customized computer-generated animation, the method comprising: receiving digital content including a rendered video of a computer-generated animation; determining a modifiable portion of the digital content, wherein the digital content includes texture and shading data of the modifiable portion; receiving a design template, wherein the design template includes a representation of the modifiable portion of the digital content; generating template image data by performing image analysis on the representation of the modifiable portion of the digital content; generating a revised portion of the digital content based on the texture and shading data of the modifiable portion and the template image data, wherein the revised portion is a revised version of the modifiable portion of the digital content; generating an updated version of the video of the computer-generated animation, wherein the updated video comprises a version of the computer-generated animation including the revised portion of the digital content; and causing a display of the updated video. 2. 
The method of claim 1, further comprising: before receiving the design template, providing the design template for editing, wherein the received design template is an edited version of the provided design template. 3. The method of claim 2, wherein providing the design template comprises providing a user interface including the design template, and wherein the method further comprises: receiving data representing a user input on the user interface; and editing the design template in accordance with the user input. 4. The method of claim 1, wherein generating the template image data comprises determining a color profile of the representation of the modifiable portion of the digital content. 5. The method of claim 4, wherein generating the revised portion of the digital content comprises determining a color profile of the revised portion based on the color profile of the representation of the modifiable portion. 6. The method of claim 4, wherein generating the revised portion of the digital content comprises rendering the revised portion based on the color profile of the representation of the modifiable portion of the digital content. 7. The method of claim 1, wherein generating the updated version of the video of the computer-generated animation comprises mapping the revised version onto the modifiable portion of the digital content. 8. The method of claim 1, wherein generating the updated version of the video of the computer-generated animation comprises rendering the digital content using the revised portion. 9. The method of claim 1, further comprising: generating a version of the digital content without the modifiable portion, wherein generating the updated version of the video of the computer-generated animation includes overlaying the revised portion on the version of the digital content without the modifiable portion. 10. 
A system for generating a user-customized computer-generated animation, the system comprising: a processing unit and memory, wherein the processing unit is configured to: receive digital content including a rendered video of a computer-generated animation; determine a modifiable portion of the digital content, wherein the digital content includes texture and shading data of the modifiable portion; receive a design template, wherein the design template includes a representation of the modifiable portion of the digital content; generate template image data by performing image analysis on the representation of the modifiable portion of the digital content; generate a revised portion of the digital content based on the texture and shading data of the modifiable portion and template image data, wherein the revised portion is a revised version of the modifiable portion of the digital content; generate an updated version of the video of the computer-generated animation, wherein the updated video comprises a version of the computer-generated animation including the revised portion of the digital content; and cause a display of the updated video. 11. The system of claim 10, wherein the processing unit is further configured to: before receiving the design template, provide the design template for editing, wherein the design template received by the processing unit is an edited version of the provided design template. 12. The system of claim 11, wherein providing the design template comprises providing a user interface including the design template, and wherein the processing unit is further configured to: receive data representing a user input on the user interface; and edit the design template in accordance with the user input. 13. The system of claim 10, wherein generating the template image data comprises determining a color profile of the representation of the modifiable portion of the digital content. 14. 
The system of claim 13, wherein generating the revised portion of the digital content comprises determining a color profile of the revised portion based on the color profile of the representation of the modifiable portion. 15. The system of claim 13, wherein generating the revised portion of the digital content comprises rendering the revised portion based on the color profile of the representation of the modifiable portion of the digital content. 16. The system of claim 10, wherein generating the updated version of the video of the computer-generated animation comprises mapping the revised version onto the modifiable portion of the digital content. 17. The system of claim 10, wherein generating the updated version of the video of the computer-generated animation comprises rendering the digital content using the revised portion. 18. The system of claim 10, wherein the processing unit is further configured to: generate a version of the digital content without the modifiable portion, wherein generating the updated version of the video of the computer-generated animation includes overlaying the revised portion on the version of the digital content without the modifiable portion. 19. 
A non-transitory computer-readable storage medium comprising computer-executable instructions for generating a user-customized computer-generated animation, the computer-executable instructions comprising instructions for: receiving digital content including a rendered video of a computer-generated animation; determining a modifiable portion of the digital content, wherein the digital content includes texture and shading data of the modifiable portion; receiving a design template, wherein the design template includes a representation of the modifiable portion of the digital content; generating template image data by performing image analysis on the representation of the modifiable portion of the digital content; generating a revised portion of the digital content based on the texture and shading data of the modifiable portion and template image data, wherein the revised portion is a revised version of the modifiable portion of the digital content; generating an updated version of the video of the computer-generated animation, wherein the updated video comprises a version of the computer-generated animation including the revised portion of the digital content; and causing a display of the updated video. 20. The computer-readable storage medium of claim 19, further comprising instructions for: before receiving the design template, providing the design template for editing, wherein the received design template is an edited version of the provided design template. 21. The computer-readable storage medium of claim 20, wherein providing the design template comprises providing a user interface including the design template, and wherein the computer-readable storage medium further comprises instructions for: receiving data representing a user input on the user interface; and editing the design template in accordance with the user input. 22. 
The computer-readable storage medium of claim 19, wherein generating the template image data comprises determining a color profile of the representation of the modifiable portion of the digital content. 23. The computer-readable storage medium of claim 22, wherein generating the revised portion of the digital content comprises determining a color profile of the revised portion based on the color profile of the representation of the modifiable portion. 24. The computer-readable storage medium of claim 22, wherein generating the revised portion of the digital content comprises rendering the revised portion based on the color profile of the representation of the modifiable portion of the digital content. 25. The computer-readable storage medium of claim 19, wherein generating the updated version of the video of the computer-generated animation comprises mapping the revised version onto the modifiable portion of the digital content. 26. The computer-readable storage medium of claim 19, wherein generating the updated version of the video of the computer-generated animation comprises rendering the digital content using the revised portion. 27. The computer-readable storage medium of claim 19, further comprising instructions for: generating a version of the digital content without the modifiable portion, wherein generating the updated version of the video of the computer-generated animation includes overlaying the revised portion on the version of the digital content without the modifiable portion.
2,600
9,857
9,857
15,004,344
2,657
Systems and methods are provided for gathering audience measurement data relating to receipt of and/or exposure to audio data by an audience member. Audio data is monitored to detect a monitoring code. Based on detection of the monitoring code, a signature characterizing the audio data is extracted.
1. (canceled) 2. A method of monitoring receipt or exposure to audio data, the method comprising: receiving audio data in a monitoring device, the audio data including a monitoring code indicating that the audio data is to be monitored; processing the audio data with a processor of the monitoring device to detect the monitoring code; and generating, with the processor, a signature characterizing the audio data in response to detecting the monitoring code, the signature being generated from at least a portion of the audio data including the monitoring code. 3. The method of claim 2, wherein the monitoring code is encoded in the audio data. 4. The method of claim 2, wherein the signature is generated based on at least one of: (1) time-domain variations in the audio data, (2) frequency-domain variations in the audio data, (3) signal-to-noise ratios for frequency components of the audio data, or (4) time-domain variations of the signal-to-noise ratios in a plurality of frequency sub-bands of the audio data. 5. The method of claim 2, wherein the audio data includes a first data set and a second data set, the first data set includes the monitoring code, the signature characterizes the first data set, the second data set includes a second monitoring code, and further including: detecting the second monitoring code; and in response to detection of the second monitoring code, generating a second signature characterizing the second data set, the second signature being generated from at least a portion of the audio data including the second monitoring code. 6. The method of claim 2, wherein the portion of the audio used to generate the signature further includes a source identification code. 7. The method of claim 6, wherein the monitoring code and the source identification code occur simultaneously in the audio data. 8. The method of claim 6, wherein the monitoring code and the source identification code occur in different time segments of the audio data. 9. 
A tangible article of manufacture comprising computer readable instructions which, when executed, cause a processor of a monitoring device to at least: access audio data at the monitoring device; process the audio data to detect a monitoring code included in the audio data, the monitoring code indicating that the audio data is to be monitored; and generate a signature characterizing the audio data in response to detection of the monitoring code, the signature being generated from at least a portion of the audio data including the monitoring code. 10. The article of manufacture of claim 9, wherein the monitoring code is encoded in the audio data. 11. The article of manufacture of claim 9, wherein the signature is generated based on at least one of: (1) time-domain variations in the audio data, (2) frequency-domain variations in the audio data, (3) signal-to-noise ratios for frequency components of the audio data, or (4) time-domain variations of the signal-to-noise ratios in a plurality of frequency sub-bands of the audio data. 12. The article of manufacture of claim 9, wherein the audio data includes a first data set and a second data set, the first data set includes the monitoring code, the signature characterizes the first data set, the second data set includes a second monitoring code, and the instructions, when executed, further cause the processor to: detect the second monitoring code; and in response to detection of the second monitoring code, generate a second signature characterizing the second data set, the second signature being generated from at least a portion of the audio data including the second monitoring code. 13. The article of manufacture of claim 9, wherein the portion of the audio used to generate the signature further includes a source identification code. 14. The article of manufacture of claim 13, wherein the monitoring code and the source identification code occur simultaneously in the audio data. 15. 
The article of manufacture of claim 13, wherein the monitoring code and the source identification code occur in different time segments of the audio data. 16. A monitoring device comprising: an input device to receive audio data; a detector to detect a monitoring code encoded in the audio data, the monitoring code indicating that the audio data is to be monitored; and a processor to generate a signature characterizing the audio data in response to detection of the monitoring code by the detector, the signature being generated from at least a portion of the audio data in which the monitoring code is encoded. 17. The monitoring device of claim 16, wherein the processor is to generate the signature based on at least one of: (1) time-domain variations in the audio data, (2) frequency-domain variations in the audio data, (3) signal-to-noise ratios for frequency components of the audio data, or (4) time-domain variations of the signal-to-noise ratios in a plurality of frequency sub-bands of the audio data. 18. The monitoring device of claim 16, wherein the audio data includes a first data set and a second data set, the first data set includes the monitoring code, the signature characterizes the first data set, the second data set includes a second monitoring code, the detector is further to detect the second monitoring code, and the processor is further to generate a second signature characterizing the second data set in response to the second monitoring code being detected by the detector, the second signature being generated from at least a portion of the audio data including the second monitoring code. 19. The monitoring device of claim 16, wherein the portion of the audio used to generate the signature further includes a source identification code. 20. The monitoring device of claim 19, wherein the monitoring code and the source identification code occur simultaneously in the audio data. 21. 
The monitoring device of claim 16, wherein the monitoring code occurs at least one of (1) continuously throughout a time base of the audio data, or (2) repeatedly in at least one interval of the audio data.
Systems and methods are provided for gathering audience measurement data relating to receipt of and/or exposure to audio data by an audience member. Audio data is monitored to detect a monitoring code. Based on detection of the monitoring code, a signature characterizing the audio data is extracted.1. (canceled) 2. A method of monitoring receipt or exposure to audio data, the method comprising: receiving audio data in a monitoring device, the audio data including a monitoring code indicating that the audio data is to be monitored; processing the audio data with a processor of the monitoring device to detect the monitoring code; and generating, with the processor, a signature characterizing the audio data in response to detecting the monitoring code, the signature being generated from at least a portion of the audio data including the monitoring code. 3. The method of claim 2, wherein the monitoring code is encoded in the audio data. 4. The method of claim 2, wherein the signature is generated based on at least one of: (1) time-domain variations in the audio data, (2) frequency-domain variations in the audio data, (3) signal-to-noise ratios for frequency components of the audio data, or (4) time-domain variations of the signal-to-noise ratios in a plurality of frequency sub-bands of the audio data. 5. The method of claim 2, wherein the audio data includes a first data set and a second data set, the first data set includes the monitoring code, the signature characterizes the first data set, the second data set includes a second monitoring code, and further including: detecting the second monitoring code; and in response to detection of the second monitoring code, generating a second signature characterizing the second data set, the second signature being generated from at least a portion of the audio data including the second monitoring code. 6. 
The method of claim 2, wherein the portion of the audio used to generate the signature further includes a source identification code. 7. The method of claim 6, wherein the monitoring code and the source identification code occur simultaneously in the audio data. 8. The method of claim 6, wherein the monitoring code and the source identification code occur in different time segments of the audio data. 9. A tangible article of manufacture comprising computer readable instructions which, when executed, cause a processor of a monitoring device to at least: access audio data at the monitoring device; process the audio data to detect a monitoring code included in the audio data, the monitoring code indicating that the audio data is to be monitored; and generate a signature characterizing the audio data in response to detection of the monitoring code, the signature being generated from at least a portion of the audio data including the monitoring code. 10. The article of manufacture of claim 9, wherein the monitoring code is encoded in the audio data. 11. The article of manufacture of claim 9, wherein the signature is generated based on at least one of: (1) time-domain variations in the audio data, (2) frequency-domain variations in the audio data, (3) signal-to-noise ratios for frequency components of the audio data, or (4) time-domain variations of the signal-to-noise ratios in a plurality of frequency sub-bands of the audio data. 12. 
The article of manufacture of claim 9, wherein the audio data includes a first data set and a second data set, the first data set includes the monitoring code, the signature characterizes the first data set, the second data set includes a second monitoring code, and the instructions, when executed, further cause the processor to: detect the second monitoring code; and in response to detection of the second monitoring code, generate a second signature characterizing the second data set, the second signature being generated from at least a portion of the audio data including the second monitoring code. 13. The article of manufacture of claim 9, wherein the portion of the audio used to generate the signature further includes a source identification code. 14. The article of manufacture of claim 13, wherein the monitoring code and the source identification code occur simultaneously in the audio data. 15. The article of manufacture of claim 13, wherein the monitoring code and the source identification code occur in different time segments of the audio data. 16. A monitoring device comprising: an input device to receive audio data; a detector to detect a monitoring code encoded in the audio data, the monitoring code indicating that the audio data is to be monitored; and a processor to generate a signature characterizing the audio data in response to detection of the monitoring code by the detector, the signature being generated from at least a portion of the audio data in which the monitoring code is encoded. 17. The monitoring device of claim 16, wherein the processor is to generate the signature based on at least one of: (1) time-domain variations in the audio data, (2) frequency-domain variations in the audio data, (3) signal-to-noise ratios for frequency components of the audio data, or (4) time-domain variations of the signal-to-noise ratios in a plurality of frequency sub-bands of the audio data. 18. 
The monitoring device of claim 16, wherein the audio data includes a first data set and a second data set, the first data set includes the monitoring code, the signature characterizes the first data set, the second data set includes a second monitoring code, the detector is further to detect the second monitoring code, and the processor is further to generate a second signature characterizing the second data set in response to the second monitoring code being detected by the detector, the second signature being generated from at least a portion of the audio data including the second monitoring code. 19. The monitoring device of claim 16, wherein the portion of the audio used to generate the signature further includes a source identification code. 20. The monitoring device of claim 19, wherein the monitoring code and the source identification code occur simultaneously in the audio data. 21. The monitoring device of claim 16, wherein the monitoring code occurs at least one of (1) continuously throughout a time base of the audio data, or (2) repeatedly in at least one interval of the audio data.
2,600
9,858
9,858
15,564,877
2,645
Various communication systems may benefit from network location reporting. For example, such reporting may be beneficial in the establishment of a multimedia broadcast/multicast service bearer for media traffic delivery from a group communication service application server or other similar device or system. A method can include reporting at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node for broadcast bearer management. The first set of user equipment location information can include a serving cell identity of the user equipment. The second set of user equipment location information can include at least one identity of at least one network service area for media broadcasting.
1. A method, comprising: reporting at least a first set of user equipment location information and at least a second set of user equipment location information from a user equipment to at least one network node for broadcast bearer management, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 2. The method of claim 1, wherein the identity of the network service area comprises a multimedia broadcast/multicast service (MBMS) service area identity and/or tracking area identity. 3. The method of claim 1, wherein the serving cell identity of the user equipment comprises a cell global identifier. 4. The method of claim 1, wherein the reporting comprises reporting from the user equipment to a group communication service application server, any other application server or content provider, or to an intermediate node that can be accessed by the group communication service application server or other application server or content provider. 5. (canceled) 6. A method comprising: receiving at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node; and performing broadcast bearer management based on the received first set of user equipment location information and the received second set of user equipment location information, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 7.-9. (canceled) 10. 
The method of claim 6, further comprising: mapping from a cell identity to one or more multimedia broadcast multicast (MBMS) service areas, wherein the broadcast bearer management is based on the mapping. 11. An apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to report at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node for broadcast bearer management, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 12. The apparatus of claim 11, wherein the identity of the network service area comprises a multimedia broadcast/multicast service (MBMS) service area identity and/or tracking area identity. 13. The apparatus of claim 11, wherein the serving cell identity of the user equipment comprises a cell global identifier. 14. The apparatus of claim 11, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to report from the user equipment to a group communication service application server, to any other application server or content provider, or to an intermediate node that can be accessed by the group communication service application server or other application server or content provider. 15. (canceled) 16. 
An apparatus comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to receive at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node; and perform broadcast bearer management based on the received first set of user equipment location information and the received second set of user equipment location information, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 17. (canceled) 18. (canceled) 19. The apparatus of claim 16, wherein the apparatus comprises at least one of a group communication service application server, any other application server, or a broadcast multicast service center. 20. The apparatus of claim 16, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to map from a cell identity to one or more multimedia broadcast multicast (MBMS) service areas, wherein the broadcast bearer management is based on the mapping. 21.-31. (canceled) 32. A non-transitory computer-readable medium encoded with instructions that, when executed in hardware, perform a process, the process comprising the method according to claim 1.
Various communication systems may benefit from network location reporting. For example, such reporting may be beneficial in the establishment of a multimedia broadcast/multicast service bearer for media traffic delivery from a group communication service application server or other similar device or system. A method can include reporting at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node for broadcast bearer management. The first set of user equipment location information can include a serving cell identity of the user equipment. The second set of user equipment location information can include at least one identity of at least one network service area for media broadcasting.1. A method, comprising: reporting at least a first set of user equipment location information and at least a second set of user equipment location information from a user equipment to at least one network node for broadcast bearer management, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 2. The method of claim 1, wherein the identity of the network service area comprises a multimedia broadcast/multicast service (MBMS) service area identity and/or tracking area identity. 3. The method of claim 1, wherein the serving cell identity of the user equipment comprises a cell global identifier. 4. 
The method of claim 1, wherein the reporting comprises reporting from the user equipment to a group communication service application server, any other application server or content provider, or to an intermediate node that can be accessed by the group communication service application server or other application server or content provider. 5. (canceled) 6. A method comprising: receiving at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node; and performing broadcast bearer management based on the received first set of user equipment location information and the received second set of user equipment location information, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 7.-9. (canceled) 10. The method of claim 6, further comprising: mapping from a cell identity to one or more multimedia broadcast multicast (MBMS) service areas, wherein the broadcast bearer management is based on the mapping. 11. 
An apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to report at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node for broadcast bearer management, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 12. The apparatus of claim 11, wherein the identity of the network service area comprises a multimedia broadcast/multicast service (MBMS) service area identity and/or tracking area identity. 13. The apparatus of claim 11, wherein the serving cell identity of the user equipment comprises a cell global identifier. 14. The apparatus of claim 11, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to report from the user equipment to a group communication service application server, to any other application server or content provider, or to an intermediate node that can be accessed by the group communication service application server or other application server or content provider. 15. (canceled) 16. 
An apparatus comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to receive at least a first set of user equipment location information and at least a second set of user equipment location information from the user equipment to at least one network node; and perform broadcast bearer management based on the received first set of user equipment location information and the received second set of user equipment location information, wherein the first set of user equipment location information comprises a serving cell identity of the user equipment, and wherein the second set of user equipment location information comprises at least one identity of at least one network service area for media broadcasting. 17. (canceled) 18. (canceled) 19. The apparatus of claim 16, wherein the apparatus comprises at least one of a group communication service application server, any other application server, or a broadcast multicast service center. 20. The apparatus of claim 16, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to map from a cell identity to one or more multimedia broadcast multicast (MBMS) service areas, wherein the broadcast bearer management is based on the mapping. 21.-31. (canceled) 32. A non-transitory computer-readable medium encoded with instructions that, when executed in hardware, perform a process, the process comprising the method according to claim 1.
2,600
9,859
9,859
15,128,363
2,622
At least one display area is predefined in a touch-sensitive and flexible display of a display device. When the display is deformed in such a way that the predefined display area is distorted, an operator control element of a graphical user interface is displayed in the predefined display area.
1-10. (canceled) 11. A method for operating a display device of a motor vehicle, comprising: predefining at least one display area within a display of the display device, having a touch-sensitive surface and a flexible structure; deforming the display so that the at least one display area is distorted; and displaying an operator control element of a graphical user interface in the at least one display area of the display. 12. A method according to claim 11, wherein said deforming includes bending a first part of the display towards a second part of the display, so as to form a curved part in the at least one display area between the first and second parts of the display. 13. A method according to claim 11, wherein said deforming includes forming a depression in the at least one display area of the display. 14. A method according to claim 11, further comprising detecting a user touch of the touch-sensitive surface of the display, and wherein the at least one display area is not distorted until the user touch of the display with a finger is detected. 15. A method according to claim 14, wherein said detecting detects the user touch in the at least one display area, and wherein said method further comprises reversing said deforming of the at least one display area when said detecting detects that the finger of the user has been removed from the at least one display area. 16. A method according to claim 11, wherein said deforming of the display is performed contemporaneously with said displaying of the operator control element. 17. A method according to claim 11, wherein said predefining includes a position of the at least one display area and a shape of distortion of the at least one display area in dependence upon the graphical user interface displayed on the display. 18. 
A method according to claim 11, wherein the at least one display area includes a plurality of display areas, predefined and distorted, and wherein said predefining includes a number, respective positions and a shape of respective distortions of the display areas, as a function of the graphical user interface displayed on the display. 19. A display device for a motor vehicle, comprising: a display with a touch-sensitive surface and a flexible structure; an adjustment device configured to deform the display so that at least one display area within the display is distorted; a controller configured to predefine the at least one display area within the display and to actuate the display to display an operator control element of a graphical user interface in the at least one display area. 20. A display device according to claim 19, wherein said adjustment device bends a first part of the display towards a second part of the display, so as to form a curved part in the at least one display area between the first and second parts of the display. 21. A display device according to claim 19, further comprising a detector configured to detect a touch by a finger of a user on the touch-sensitive surface of the display, and wherein said adjustment device distorts the at least one display area after the touch on the touch-sensitive surface of the display is detected. 22. A display device according to claim 21, wherein said detector detects the touch in the at least one display area, and wherein said adjustment device reverses distortion of the at least one display area when said detector detects that the finger of the user has been removed from the at least one display area. 23. A display device according to claim 19, wherein said controller predefines a position of the at least one display area and a shape of distortion of the at least one display area in dependence upon the graphical user interface displayed on the display. 24. 
A display device according to claim 19, wherein the at least one display area includes a plurality of display areas, predefined and distorted, and wherein said controller predefines a number, respective positions and a shape of respective distortions of the display areas, as a function of the graphical user interface displayed on the display. 25. A motor vehicle, comprising: a chassis; and a display device including a touch-sensitive and flexible display; an adjustment device configured to deform the display so that at least one display area within the display is distorted; controller configured to predefine the at least one display area within the display and to actuate the display to display an operator control element of a graphical user interface in the at least one display area. 26. A motor vehicle according to claim 25, wherein said adjustment device bends a first part of the display towards a second part of the display, so as to form a curved part in the at least one display area between the first and second parts of the display. 27. A motor vehicle according to claim 25, further comprising a detector configured to detect a touch by a finger of a user on the touch-sensitive surface of the display, and wherein said adjustment device distorts the at least one display area after the touch on the touch-sensitive surface of the display is detected. 28. A motor vehicle according to claim 27, wherein said detector detects the touch in the at least one display area, and wherein said adjustment device reverses distortion of the at least one display area when said detector detects that the finger of the user has been removed from the at least one display area. 29. A motor vehicle according to claim 25, wherein said controller predefines a position of the at least one display area and a shape of distortion of the at least one display area in dependence upon the graphical user interface displayed on the display. 30. 
A motor vehicle according to claim 25, wherein the at least one display area includes a plurality of display areas, predefined and distorted, and wherein said controller predefines a number, respective positions and a shape of respective distortions of the display areas, as a function of the graphical user interface displayed on the display.
At least one display area is predefined in a touch-sensitive and flexible display of a display device. When the display is deformed in such a way that the predefined display area is distorted, an operator control element of a graphical user interface is displayed in the predefined display area. 1-10. (canceled) 11. A method for operating a display device of a motor vehicle, comprising: predefining at least one display area within a display of the display device, having a touch-sensitive surface and a flexible structure; deforming the display so that the at least one display area is distorted; and displaying an operator control element of a graphical user interface in the at least one display area of the display. 12. A method according to claim 11, wherein said deforming includes bending a first part of the display towards a second part of the display, so as to form a curved part in the at least one display area between the first and second parts of the display. 13. A method according to claim 11, wherein said deforming includes forming a depression in the at least one display area of the display. 14. A method according to claim 11, further comprising detecting a user touch of the touch-sensitive surface of the display, and wherein the at least one display area is not distorted until the user touch of the display with a finger is detected. 15. A method according to claim 14, wherein said detecting detects the user touch in the at least one display area, and wherein said method further comprises reversing said deforming of the at least one display area when said detecting detects that the finger of the user has been removed from the at least one display area. 16. A method according to claim 11, wherein said deforming of the display is performed contemporaneously with said displaying of the operator control element. 17. 
A method according to claim 11, wherein said predefining includes a position of the at least one display area and a shape of distortion of the at least one display area in dependence upon the graphical user interface displayed on the display. 18. A method according to claim 11, wherein the at least one display area includes a plurality of display areas, predefined and distorted, and wherein said predefining includes a number, respective positions and a shape of respective distortions of the display areas, as a function of the graphical user interface displayed on the display. 19. A display device for a motor vehicle, comprising: a display with a touch-sensitive surface and a flexible structure; an adjustment device configured to deform the display so that at least one display area within the display is distorted; a controller configured to predefine the at least one display area within the display and to actuate the display to display an operator control element of a graphical user interface in the at least one display area. 20. A display device according to claim 19, wherein said adjustment device bends a first part of the display towards a second part of the display, so as to form a curved part in the at least one display area between the first and second parts of the display. 21. A display device according to claim 19, further comprising a detector configured to detect a touch by a finger of a user on the touch-sensitive surface of the display, and wherein said adjustment device distorts the at least one display area after the touch on the touch-sensitive surface of the display is detected. 22. A display device according to claim 21, wherein said detector detects the touch in the at least one display area, and wherein said adjustment device reverses distortion of the at least one display area when said detector detects that the finger of the user has been removed from the at least one display area. 23. 
A display device according to claim 19, wherein said controller predefines a position of the at least one display area and a shape of distortion of the at least one display area in dependence upon the graphical user interface displayed on the display. 24. A display device according to claim 19, wherein the at least one display area includes a plurality of display areas, predefined and distorted, and wherein said controller predefines a number, respective positions and a shape of respective distortions of the display areas, as a function of the graphical user interface displayed on the display. 25. A motor vehicle, comprising: a chassis; and a display device including a touch-sensitive and flexible display; an adjustment device configured to deform the display so that at least one display area within the display is distorted; a controller configured to predefine the at least one display area within the display and to actuate the display to display an operator control element of a graphical user interface in the at least one display area. 26. A motor vehicle according to claim 25, wherein said adjustment device bends a first part of the display towards a second part of the display, so as to form a curved part in the at least one display area between the first and second parts of the display. 27. A motor vehicle according to claim 25, further comprising a detector configured to detect a touch by a finger of a user on the touch-sensitive surface of the display, and wherein said adjustment device distorts the at least one display area after the touch on the touch-sensitive surface of the display is detected. 28. A motor vehicle according to claim 27, wherein said detector detects the touch in the at least one display area, and wherein said adjustment device reverses distortion of the at least one display area when said detector detects that the finger of the user has been removed from the at least one display area. 29. 
A motor vehicle according to claim 25, wherein said controller predefines a position of the at least one display area and a shape of distortion of the at least one display area in dependence upon the graphical user interface displayed on the display. 30. A motor vehicle according to claim 25, wherein the at least one display area includes a plurality of display areas, predefined and distorted, and wherein said controller predefines a number, respective positions and a shape of respective distortions of the display areas, as a function of the graphical user interface displayed on the display.
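The touch-driven distort/reverse behavior recited in claims 21-22 (and 14-15 of the method claims) can be sketched in a few lines. This is a minimal illustration only: the class name, the area tuple, and the boolean actuation flag are all hypothetical stand-ins for the actual adjustment device and detector hardware.

```python
class DeformableDisplayController:
    """Toy sketch of the claimed controller: distort a predefined
    display area while a finger touches it, and reverse the
    distortion on release. Hardware actuation is reduced to a flag."""

    def __init__(self, area):
        self.area = area          # predefined (x, y, width, height) region
        self.distorted = False    # stands in for the adjustment device state

    def _inside(self, x, y):
        ax, ay, w, h = self.area
        return ax <= x < ax + w and ay <= y < ay + h

    def on_touch(self, x, y):
        # Distort only when the touch lands in the predefined area.
        if self._inside(x, y):
            self.distorted = True

    def on_release(self):
        # Reverse the distortion once the finger is removed.
        self.distorted = False
```

A GUI-dependent variant would recompute `self.area` (and the number of areas) whenever the displayed interface changes, as in claims 23-24.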
2,600
9,860
9,860
15,445,746
2,647
One or more patterns associated with a mobile device are determined. The one or more patterns are determined at least in part by identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern, converting the temporary pattern to be a pattern of the one or more patterns, adding the pattern to a list of detected patterns, generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times, analyzing the reduced set of time information and location information to determine one or more additional patterns. The mobile device can be configured to exhibit a mobile device behavior associated with the determined one or more patterns.
1. A method, comprising: determining, using a processor, one or more patterns associated with a mobile device, wherein the one or more patterns are determined at least in part by: identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern; converting the temporary pattern to be a pattern of the one or more patterns; adding the pattern to a list of detected patterns; generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times; and analyzing the reduced set of time information and location information to determine one or more additional patterns; and configuring the mobile device to exhibit a mobile device behavior associated with the determined one or more patterns. 2. The method of claim 1, further comprising determining, using the processor, the mobile device behavior associated with the determined one or more patterns. 3. The method of claim 1, wherein the processor is located on the mobile device. 4. The method of claim 1, wherein: the processor is located on a device other than the mobile device; and the method further includes receiving, at the processor, the set of time information and location information from the mobile device. 5. The method of claim 1 further comprising presenting information associated with the mobile device behavior via a device other than the mobile device. 6. The method of claim 1 further comprising presenting information associated with the mobile device behavior via one or more of the following: the mobile device and an Internet browser application. 7. The method of claim 1, wherein at least one of the one or more patterns includes a first end, a second end, and a path between the first end and the second end. 8. 
The method of claim 1, wherein the mobile device behavior is associated with configuring one or more of the following: Wi-Fi (or other wireless local area network (LAN) communication protocol), Bluetooth (or other near-field communication protocol), global positioning system (GPS), a data network associated with mobile telephony, a tone, a sound effect, a screen lock, a keypad lock, a background, a wallpaper, separate program execution, separate program termination, call handling, notifications, reminders, or deleting memory. 9. The method of claim 1, wherein the mobile device behavior is associated with one or more of the following: a safe driving profile, a homework profile, a bedtime profile, or a quiet zone profile. 10. The method of claim 1, further comprising receiving via a user interface a modification to the mobile device behavior, wherein the mobile device is configured to exhibit the modification to the mobile device behavior. 11. The method of claim 1, wherein there is a plurality of mobile device behaviors and the method further includes prioritizing the plurality of mobile device behaviors. 12. The method of claim 1, wherein prioritizing is based at least in part on one or more of the following: time spent at a pattern, a time of day, a day of a week, an acceptance rate, expert opinion, or power savings. 13. The method of claim 1, wherein the pattern is a round-trip pattern. 14. 
A system, comprising: a memory configured to store a set of time information and location information associated with a mobile device; and a processor coupled to the memory and configured to: determine one or more patterns associated with a mobile device, wherein the one or more patterns are determined at least in part by: identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern; converting the temporary pattern to be a pattern of the one or more patterns; adding the pattern to a list of detected patterns; generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times; and analyzing the reduced set of time information and location information to determine one or more additional patterns; and configure the mobile device to exhibit a mobile device behavior associated with the determined one or more patterns. 15. The system of claim 14, wherein the memory is located on the mobile device. 16. The system of claim 14, wherein the system further includes a communication interface and the set of time information and location information is received from the mobile device via the communication interface. 17. The system of claim 14, wherein the processor is further configured to present information associated with the mobile device behavior via one or more of the following: the mobile device and an Internet browser application. 18. The system of claim 14, wherein the processor is further configured to receive via a user interface a modification to the mobile device behavior, and to configure the mobile device to exhibit the modification to the mobile device behavior. 19. 
A computer program product, the computer program product being embodied in a tangible non-transitory computer readable storage medium and comprising computer instructions for: determining, using a processor, one or more patterns associated with a mobile device, wherein the one or more patterns are determined at least in part by: identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern; converting the temporary pattern to be a pattern of the one or more patterns; adding the pattern to a list of detected patterns; generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times; and analyzing the reduced set of time information and location information to determine one or more additional patterns; and configuring the mobile device to exhibit a mobile device behavior associated with the determined one or more patterns. 20. The computer program product of claim 19, further comprising instructions for determining, using the processor, the mobile device behavior associated with the determined one or more patterns.
One or more patterns associated with a mobile device are determined. The one or more patterns are determined at least in part by identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern, converting the temporary pattern to be a pattern of the one or more patterns, adding the pattern to a list of detected patterns, generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times, analyzing the reduced set of time information and location information to determine one or more additional patterns. The mobile device can be configured to exhibit a mobile device behavior associated with the determined one or more patterns.1. A method, comprising: determining, using a processor, one or more patterns associated with a mobile device, wherein the one or more patterns are determined at least in part by: identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern; converting the temporary pattern to be a pattern of the one or more patterns; adding the pattern to a list of detected patterns; generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times; and analyzing the reduced set of time information and location information to determine one or more additional patterns; and configuring the mobile device to exhibit a mobile device behavior associated with the determined one or more patterns. 2. 
The method of claim 1, further comprising determining, using the processor, the mobile device behavior associated with the determined one or more patterns. 3. The method of claim 1, wherein the processor is located on the mobile device. 4. The method of claim 1, wherein: the processor is located on a device other than the mobile device; and the method further includes receiving, at the processor, the set of time information and location information from the mobile device. 5. The method of claim 1 further comprising presenting information associated with the mobile device behavior via a device other than the mobile device. 6. The method of claim 1 further comprising presenting information associated with the mobile device behavior via one or more of the following: the mobile device and an Internet browser application. 7. The method of claim 1, wherein at least one of the one or more patterns includes a first end, a second end, and a path between the first end and the second end. 8. The method of claim 1, wherein the mobile device behavior is associated with configuring one or more of the following: Wi-Fi (or other wireless local area network (LAN) communication protocol), Bluetooth (or other near-field communication protocol), global positioning system (GPS), a data network associated with mobile telephony, a tone, a sound effect, a screen lock, a keypad lock, a background, a wallpaper, separate program execution, separate program termination, call handling, notifications, reminders, or deleting memory. 9. The method of claim 1, wherein the mobile device behavior is associated with one or more of the following: a safe driving profile, a homework profile, a bedtime profile, or a quiet zone profile. 10. The method of claim 1, further comprising receiving via a user interface a modification to the mobile device behavior, wherein the mobile device is configured to exhibit the modification to the mobile device behavior. 11. 
The method of claim 1, wherein there is a plurality of mobile device behaviors and the method further includes prioritizing the plurality of mobile device behaviors. 12. The method of claim 1, wherein prioritizing is based at least in part on one or more of the following: time spent at a pattern, a time of day, a day of a week, an acceptance rate, expert opinion, or power savings. 13. The method of claim 1, wherein the pattern is a round-trip pattern. 14. A system, comprising: a memory configured to store a set of time information and location information associated with a mobile device; and a processor coupled to the memory and configured to: determine one or more patterns associated with a mobile device, wherein the one or more patterns are determined at least in part by: identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern; converting the temporary pattern to be a pattern of the one or more patterns; adding the pattern to a list of detected patterns; generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times; and analyzing the reduced set of time information and location information to determine one or more additional patterns; and configure the mobile device to exhibit a mobile device behavior associated with the determined one or more patterns. 15. The system of claim 14, wherein the memory is located on the mobile device. 16. The system of claim 14, wherein the system further includes a communication interface and the set of time information and location information is received from the mobile device via the communication interface. 17. 
The system of claim 14, wherein the processor is further configured to present information associated with the mobile device behavior via one or more of the following: the mobile device and an Internet browser application. 18. The system of claim 14, wherein the processor is further configured to receive via a user interface a modification to the mobile device behavior, and to configure the mobile device to exhibit the modification to the mobile device behavior. 19. A computer program product, the computer program product being embodied in a tangible non-transitory computer readable storage medium and comprising computer instructions for: determining, using a processor, one or more patterns associated with a mobile device, wherein the one or more patterns are determined at least in part by: identifying within a set of time information and location information associated with the mobile device a subset comprising successive locations and times associated with a temporary pattern; converting the temporary pattern to be a pattern of the one or more patterns; adding the pattern to a list of detected patterns; generating a reduced set of time information and location information at least in part by removing from the set of time information and location information the subset comprising the successive locations and times; and analyzing the reduced set of time information and location information to determine one or more additional patterns; and configuring the mobile device to exhibit a mobile device behavior associated with the determined one or more patterns. 20. The computer program product of claim 19, further comprising instructions for determining, using the processor, the mobile device behavior associated with the determined one or more patterns.
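The iterative extraction loop recited in claim 1 — find a temporary pattern among successive records, confirm it, remove its records, then re-analyze the reduced set — can be sketched as follows. `detect_patterns` and the detector callback are hypothetical names, and the real pattern-recognition logic is deliberately left abstract behind the callback.

```python
def detect_patterns(points, find_temporary_pattern):
    """Iteratively extract patterns from (time, location) records.

    `points` is a list of (time, location) tuples.
    `find_temporary_pattern` is an assumed detector that returns the
    indices of a run of successive records forming a candidate
    (temporary) pattern, or None when no further pattern is found.
    """
    detected = []                 # list of detected patterns (claim 1)
    remaining = list(points)
    while True:
        subset = find_temporary_pattern(remaining)
        if subset is None:
            break
        # Convert the temporary pattern into a confirmed pattern
        # and add it to the list of detected patterns.
        detected.append([remaining[i] for i in subset])
        # Generate the reduced set by removing the subset's records,
        # then loop to analyze the reduced set for additional patterns.
        keep = set(range(len(remaining))) - set(subset)
        remaining = [remaining[i] for i in sorted(keep)]
    return detected, remaining
```

A separate step (not shown) would then map each detected pattern to a device behavior such as a quiet-zone or safe-driving profile.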
2,600
9,861
9,861
15,093,253
2,644
Various communication systems may benefit from appropriate selection of frequencies for measurement purposes. For example, certain wireless communication systems that can use numerous frequencies but measure a smaller number of frequencies, may benefit from enhanced frequency selection. A method can include identifying a set of possible frequencies for measurement by a user equipment. The method can also include selecting a subset of frequencies from the possible frequencies based on parameters configured by an operator. The method can additionally include causing communication of the selection to the user equipment in a list.
1. A method, comprising: identifying a set of possible frequencies for measurement by a user equipment; selecting a subset of frequencies from the possible frequencies based on parameters configured by an operator; and causing a communication of the selection to the user equipment in a list. 2. The method of claim 1, further comprising: receiving the parameters from a management system. 3. The method of claim 1, wherein the parameters comprise a group profile. 4. The method of claim 3, wherein the group profile comprises at least one of: a group priority, a group size and a frequency selection method. 5. The method of claim 4, wherein the group priority is configured to determine a position of a group of frequencies in the list relative to other groups of frequencies. 6. The method of claim 4, wherein the group size is configured to determine a number of reserved places for frequencies of a corresponding group. 7. The method of claim 4, wherein the frequency selection method is configured to determine how frequencies of a corresponding group are selected. 8. The method of claim 4, wherein the frequency selection method comprises at least one of fixed order within group, equal priority random selection, or weighted probability random selection. 9. An apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor cause the apparatus at least to identify a set of possible frequencies for measurement by a user equipment; select a subset of frequencies from the possible frequencies based on parameters configured by an operator; and cause communication of the selection to the user equipment in a list. 10. 
The apparatus of claim 9, wherein the at least one memory and the computer program code are configured to, with the at least one processor cause the apparatus at least to receive the parameters from a management system. 11. The apparatus of claim 9, wherein the parameters comprise a group profile. 12. The apparatus of claim 11, wherein the group profile comprises at least one of: a group priority, a group size and a frequency selection method. 13. The apparatus of claim 12, wherein the group priority is configured to determine a position of a group of frequencies in the list relative to other groups of frequencies. 14. The apparatus of claim 12, wherein the group size is configured to determine a number of reserved places for frequencies of a corresponding group. 15. The apparatus of claim 12, wherein the frequency selection method is configured to determine how frequencies of a corresponding group are selected. 16. The apparatus of claim 12, wherein the frequency selection method comprises at least one of fixed order within group, equal priority random selection, or weighted probability random selection. 17. An apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor cause the apparatus at least to group available frequencies for measurement by a user equipment into a plurality of groups; and configure parameters to an access node for selecting frequencies for measurement based on groups. 18. The apparatus of claim 17, wherein the parameters comprise at least one of group of a frequency, priority of a frequency, group priority, group size, or selection method. 19. The apparatus of claim 17, wherein the parameters form at least one group profile. 20. The apparatus of claim 19, wherein the group profile comprises at least one of: a group priority, a group size and a frequency selection method.
Various communication systems may benefit from appropriate selection of frequencies for measurement purposes. For example, certain wireless communication systems that can use numerous frequencies but measure a smaller number of frequencies, may benefit from enhanced frequency selection. A method can include identifying a set of possible frequencies for measurement by a user equipment. The method can also include selecting a subset of frequencies from the possible frequencies based on parameters configured by an operator. The method can additionally include causing communication of the selection to the user equipment in a list.1. A method, comprising: identifying a set of possible frequencies for measurement by a user equipment; selecting a subset of frequencies from the possible frequencies based on parameters configured by an operator; and causing a communication of the selection to the user equipment in a list. 2. The method of claim 1, further comprising: receiving the parameters from a management system. 3. The method of claim 1, wherein the parameters comprise a group profile. 4. The method of claim 3, wherein the group profile comprises at least one of: a group priority, a group size and a frequency selection method. 5. The method of claim 4, wherein the group priority is configured to determine a position of a group of frequencies in the list relative to other groups of frequencies. 6. The method of claim 4, wherein the group size is configured to determine a number of reserved places for frequencies of a corresponding group. 7. The method of claim 4, wherein the frequency selection method is configured to determine how frequencies of a corresponding group are selected. 8. The method of claim 4, wherein the frequency selection method comprises at least one of fixed order within group, equal priority random selection, or weighted probability random selection. 9. 
An apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor cause the apparatus at least to identify a set of possible frequencies for measurement by a user equipment; select a subset of frequencies from the possible frequencies based on parameters configured by an operator; and cause communication of the selection to the user equipment in a list. 10. The apparatus of claim 9, wherein the at least one memory and the computer program code are configured to, with the at least one processor cause the apparatus at least to receive the parameters from a management system. 11. The apparatus of claim 9, wherein the parameters comprise a group profile. 12. The apparatus of claim 11, wherein the group profile comprises at least one of: a group priority, a group size and a frequency selection method. 13. The apparatus of claim 12, wherein the group priority is configured to determine a position of a group of frequencies in the list relative to other groups of frequencies. 14. The apparatus of claim 12, wherein the group size is configured to determine a number of reserved places for frequencies of a corresponding group. 15. The apparatus of claim 12, wherein the frequency selection method is configured to determine how frequencies of a corresponding group are selected. 16. The apparatus of claim 12, wherein the frequency selection method comprises at least one of fixed order within group, equal priority random selection, or weighted probability random selection. 17. 
An apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor cause the apparatus at least to group available frequencies for measurement by a user equipment into a plurality of groups; and configure parameters to an access node for selecting frequencies for measurement based on groups. 18. The apparatus of claim 17, wherein the parameters comprise at least one of group of a frequency, priority of a frequency, group priority, group size, or selection method. 19. The apparatus of claim 17, wherein the parameters form at least one group profile. 20. The apparatus of claim 19, wherein the group profile comprises at least one of: a group priority, a group size and a frequency selection method.
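Claims 3-8 describe assembling the measurement list from operator-configured group profiles: a group priority that orders groups in the list, a group size that caps the reserved places per group, and a per-group frequency selection method. A minimal sketch under assumed profile keys (`priority`, `size`, and `method` are illustrative names, not taken from the claims):

```python
import random


def build_measurement_list(groups, profiles, rng=None):
    """Assemble the frequency list sent to the user equipment.

    `groups` maps a group name to its candidate frequencies.
    `profiles` maps a group name to an assumed profile dict with
    'priority' (lower = earlier in the list), 'size' (reserved
    places), and 'method' ('fixed' or 'random').
    """
    rng = rng or random.Random()
    result = []
    # Group priority determines each group's position relative to others.
    for name in sorted(profiles, key=lambda g: profiles[g]["priority"]):
        freqs = list(groups.get(name, []))
        prof = profiles[name]
        if prof["method"] == "random":
            rng.shuffle(freqs)   # equal-priority random selection
        # 'fixed' keeps the configured order within the group.
        # Group size caps the number of reserved places.
        result.extend(freqs[: prof["size"]])
    return result
```

Weighted-probability random selection (claim 8) would replace the shuffle with a draw weighted by per-frequency priorities.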
2,600
9,862
9,862
14,329,024
2,683
With regard to an instrument wherein a plurality of phase operations are carried out, the type of failure or the phase in which the failure has occurred is determined and, in accordance with the result of the determination, an alarm signal is generated. Provided that the failure details and the phase in which the failure has occurred can be determined from the generated alarm signal, it is possible, using a configuration wherein an alarm signal having a pulse number corresponding to the phase in which a failure has occurred is generated, or a configuration wherein an alarm signal having a pulse width corresponding to the phase in which a failure has occurred is generated, to determine the type of failure that has occurred and the phase in which the failure has occurred from the pulse number and pulse width.
1. An alarm signal generator circuit to generate a signal indicative of a failure of an instrument that carries out a plurality of phase operations, the alarm signal generator circuit comprising: a determination unit configured to determine a type of failure or a phase among a plurality of phases in which a failure has occurred; and an alarm signal generation unit that, in accordance with the result of the determination by the determination unit, is configured to generate an alarm signal comprising a pulse corresponding to the type of failure or to the phase among the plurality of phases in which the failure has occurred, wherein failure details and the phase in which the failure has occurred can be determined from the alarm signal generated by the alarm signal generation unit. 2. The alarm signal generator circuit according to claim 1, further comprising: a single terminal for deriving the alarm signal generated by the alarm signal generation unit, wherein the failure details and the phase in which the failure has occurred can be determined from the alarm signal derived from the single terminal. 3. The alarm signal generator circuit according to claim 1, wherein the determination unit comprises: a latch circuit configured to receive a signal output from the instrument; and a monostable multivibrator configured to output a pulse of a width corresponding to the type of failure based on the transition timing of the output of the latch circuit. 4. The alarm signal generator circuit according to claim 1, wherein the alarm signal generation unit comprises a counter configured to output a pulse of a number corresponding to the phase among the plurality of phases in which the failure has occurred. 5. The alarm signal generator circuit according to claim 1, wherein the alarm signal generation unit comprises a monostable multivibrator configured to output a pulse of a width corresponding to the phase among the plurality of phases in which the failure has occurred. 6. 
An alarm signal generation method for generating a signal indicative of a failure of an instrument that carries out a plurality of phase operations, the method comprising: determining a type of failure or a phase among a plurality of phases in which a failure has occurred; and generating, in accordance with the result of the determination, an alarm signal comprising a pulse corresponding to the type of failure or to the phase among the plurality of phases in which the failure has occurred, wherein the failure details and the phase in which the failure has occurred can be determined from the generated alarm signal. 7. The alarm signal generator circuit according to claim 2, wherein the determination unit comprises: a latch circuit configured to receive a signal output from the instrument; and a monostable multivibrator configured to output a pulse of a width corresponding to the type of failure based on the transition timing of the output of the latch circuit. 8. The alarm signal generator circuit according to claim 2, wherein the alarm signal generation unit comprises a counter configured to output a pulse of a number corresponding to the phase among the plurality of phases in which the failure has occurred. 9. The alarm signal generator circuit according to claim 3, wherein the alarm signal generation unit comprises a counter configured to output a pulse of a number corresponding to the phase among the plurality of phases in which the failure has occurred. 10. The alarm signal generator circuit according to claim 2, wherein the alarm signal generation unit comprises a monostable multivibrator configured to output a pulse of a width corresponding to the phase among the plurality of phases in which the failure has occurred. 11. 
The alarm signal generator circuit according to claim 3, wherein the alarm signal generation unit comprises a monostable multivibrator configured to output a pulse of a width corresponding to the phase among the plurality of phases in which the failure has occurred. 12. The alarm signal generator circuit according to claim 4, wherein the alarm signal generation unit comprises a monostable multivibrator configured to output a pulse of a width corresponding to the phase among the plurality of phases in which the failure has occurred.
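The claims above describe a simple two-axis encoding: the phase in which the failure occurred is carried by the pulse count, and the failure type by the pulse width, so both can be recovered from a single alarm line. A minimal sketch of that encode/decode idea, with entirely hypothetical failure names and width values (nothing here is taken from the application itself):

```python
# Hypothetical sketch: phase -> pulse COUNT, failure type -> pulse WIDTH,
# so a single pulse train carries both pieces of information.

def encode_alarm(phase: int, failure_widths: dict, failure_type: str) -> list:
    """Return a pulse train: `phase` pulses, each of the width for `failure_type`."""
    width = failure_widths[failure_type]
    return [width] * phase

def decode_alarm(pulses: list, failure_widths: dict):
    """Recover (failure_type, phase) from a pulse train."""
    phase = len(pulses)
    width = pulses[0]
    failure_type = next(k for k, w in failure_widths.items() if w == width)
    return failure_type, phase

# Illustrative assumption: two failure types, widths in arbitrary time units.
WIDTHS = {"overcurrent": 1, "overheat": 2}
```

With `WIDTHS` as above, a phase-3 overheat failure would be signalled as three pulses of width 2, and the decoder recovers both facts from the train alone.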
2,600
9,863
9,863
15,567,472
2,641
A mobile terminal ( 11 ) transmits data to a base station ( 1 ) of a first network relating to neighbour base stations ( 13 ) of a second network. The base station ( 1 ) transmits the neighbour data to a network management processor ( 12 ) controlling the mobile communications system, which maintains a mapping data store ( 27 ) associating the base station with the neighbour base stations. The mapping facilitates a dual attachment process which allows connection of terminals to both networks, thus facilitating transfer of connection between networks.
1. A mobile telecommunications system comprising a plurality of base stations forming part of a first network operating according to a first protocol, a plurality of base stations forming part of a second network operating according to a second protocol, and a mobility management entity (MME), in which the MME has a store of mappings between geographical areas managed by a set of respective first network control elements in the first network and geographical areas managed by a set of respective second network control elements in the second network, the mapping being suitable for supporting a dual attach system to allow a mobile terminal, having made an association with one of the set of first network control elements, to make an association with one of the set of second network control elements selected according to the mapping, wherein at least one of the base stations of the first network is arranged to request and receive neighbour data from a mobile terminal currently in communication with the base station, the neighbour data relating to base stations of the second network that can be detected by the mobile terminal, and to forward the neighbour data to the MME, and the MME is arranged to update the mapping between the base stations in accordance with the location updates it receives from the terminal. 2. A mobile telecommunications system according to claim 1, wherein base stations are arranged to transmit requests to mobile terminals to scan for neighbouring base stations operational in the first and second communications networks. 3. A mobile telecommunications system according to claim 1, wherein the mapping is used by the MME to select a server in the second network with which an attach procedure is to be initiated when an attachment to the first network is initiated by a user terminal. 4. 
A mobile telecommunications system according to claim 3, wherein the server in the second network is used for communication with mobile terminals when a service is required for which the first network is unsuitable. 5. A mobility management entity (MME) for a telecommunications system, comprising a store for a mapping between geographical areas managed by a set of respective first network control elements controlling base stations of a first network operating according to a first protocol and geographical areas managed by a set of respective second network control elements controlling base stations of a second network operating according to a second protocol, the mapping being suitable for supporting a dual attach system to allow a mobile terminal, having made an association with one of the first network control elements, to make an association with one of the set of second network control elements selected according to the mapping, wherein the MME is arranged to receive, from a base station of the first network, neighbour data relating to base stations of the second network, and is arranged to maintain the mapping between the respective geographical areas by updating it in accordance with the neighbour data it receives. 6. A mobility management entity according to claim 5, wherein the mapping is used by the MME to select a server in the second network with which an attach procedure is to be initiated when an attachment to the first network is initiated by a user terminal. 7. A mobility management entity according to claim 6, wherein the server in the second network is selected for communication with mobile terminals when a service is required for which the first network is unsuitable. 8. 
A base station for a first mobile communications network arranged to communicate with mobile communications terminals under the control of a mobility management entity, and arranged to receive neighbour data from such mobile communications terminals relating to base stations operating on a second mobile communications network, and arranged to respond to changes in that data by transmitting signals to the mobility management entity to communicate the neighbour data relating to the base stations operating on the second mobile communications network. 9. A base station according to claim 8, arranged to transmit a request to a mobile terminal to scan for neighbouring base stations operational in the first and second communications networks. 10. A method of operating a mobile communications system, in which: a mobile terminal in communication with a base station of the mobile communications system detects one or more neighbour base stations of a second mobile communications system, the mobile terminal transmits data to the base station relating to the neighbour base stations, the base station transmits data relating to the neighbour base stations to a network management processor controlling the mobile communications system, the network management processor maintains a mapping data store associating the base station with the neighbour base stations, the mapping being suitable for supporting a dual attach system to allow a mobile terminal, having made an association with one of the set of first network control elements, to make an association with one of the set of second network control elements selected according to the mapping. 11. A method according to claim 10, wherein the mapping data is used during an attach process of mobile terminals to a server of the first mobile communications system to select a server in the second mobile communications system and initiate an attach procedure of said mobile terminals to the server in the second network so identified. 12. 
A method according to claim 11, wherein the server in the second mobile communications system is used for communication with the mobile terminals for services for which the first mobile communications system is unsuitable. 13. A method according to claim 10, wherein the mobile terminal scans for neighbours in response to requests from the base station. 14. A method according to claim 10 in which the first mobile communications system is configured to operate according to a packet switched protocol. 15. A method according to claim 10 in which the second mobile communications system is configured to operate according to a circuit switched protocol.
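Claims 1, 5, and 10 above centre on a mapping data store that the network management entity keeps up to date from neighbour reports and consults during dual attach. A minimal sketch of such a store, with a deliberately arbitrary selection policy (all names and the min-based selection rule are illustrative assumptions, not details from the application):

```python
# Hypothetical sketch of the mapping store: each first-network base station is
# associated with the second-network base stations its terminals can detect,
# and the stored set is replaced whenever a fresh neighbour report arrives.

class MappingStore:
    def __init__(self):
        # base_station_id -> set of detectable second-network neighbour ids
        self._map = {}

    def update(self, base_station_id: str, neighbour_ids) -> None:
        """Maintain the mapping in accordance with the latest neighbour report."""
        self._map[base_station_id] = set(neighbour_ids)

    def select_second_network_element(self, base_station_id: str):
        """Pick a second-network element for the dual-attach procedure.

        The lexicographically-smallest choice here is an arbitrary stand-in
        for whatever selection policy a real MME would apply.
        """
        neighbours = self._map.get(base_station_id, set())
        return min(neighbours) if neighbours else None
```

A later report for the same base station simply overwrites the earlier set, which matches the claims' requirement that the mapping be updated, not merely accumulated.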
2,600
9,864
9,864
14,698,807
2,631
Extended fluid meter battery life apparatus and methods. In an embodiment, the fluid metering device may include a battery; a wireless communications unit powered by the battery; and a monitoring unit configured to monitor fluid usage information to determine whether the fluid usage information meets a predetermined condition. The wireless communications unit may be powered up if the fluid usage information meets the predetermined condition and the fluid usage information may be transmitted using the wireless communications unit.
1. A fluid metering device, comprising: a battery; a wireless communications unit powered by the battery; and a monitoring unit configured to monitor fluid usage information to determine whether the fluid usage information meets a predetermined condition, wherein the wireless communications unit is powered up if the fluid usage information meets the predetermined condition and the fluid usage information is transmitted using the wireless communications unit. 2. The fluid metering device of claim 1, wherein the wireless communications unit is powered down after the transmitting of the fluid usage information. 3. The fluid metering device of claim 2, wherein the wireless communications unit is powered down after the transmitting of the fluid usage information after receipt of an acknowledgement of receipt of the transmitted fluid usage information. 4. The fluid metering device of claim 1, wherein the predetermined condition is whether a change in fluid usage information meets or exceeds a predetermined threshold. 5. The fluid metering device of claim 1, further comprising: a fluid input; a fluid output; a fluid channel between the fluid input and the fluid output; and a flow measuring unit configured to measure a flow amount of fluid passing through the fluid channel, wherein the fluid usage information may be processed from the flow amount of fluid. 6. The fluid metering device of claim 5, further comprising: a memory; and a controller, wherein the controller is configured to process the flow amount of fluid into the fluid usage information, and wherein the fluid usage information may be stored in the memory. 7. The fluid metering device of claim 1, wherein the wireless communications unit is a ZigBee communications unit. 8. The fluid metering device of claim 1, wherein the battery is a lithium battery. 9. The fluid metering device of claim 1, wherein the monitoring is periodic. 10. 
A fluid metering method, comprising: monitoring, using a monitoring unit, fluid usage information to determine whether the fluid usage information meets a predetermined condition; and powering up a wireless communications unit powered by a battery if the fluid usage information meets the predetermined condition, and transmitting the fluid usage information using the wireless communications unit. 11. The fluid metering method of claim 10, further comprising powering down the wireless communications unit after the transmitting of the fluid usage information. 12. The fluid metering method of claim 11, further comprising receiving an acknowledgement of receipt of the transmitted fluid usage information before said powering down of the wireless communications unit. 13. The fluid metering method of claim 10, wherein the predetermined condition is whether a change in fluid usage information meets or exceeds a predetermined threshold. 14. The fluid metering method of claim 10, further comprising: receiving fluid in a fluid input, through a fluid channel, and out through a fluid output; and measuring, using a flow measuring unit, a flow amount of the fluid passing through the fluid channel, wherein the fluid usage information is processed from the flow amount of fluid. 15. The fluid metering method of claim 14, further comprising: processing, using a processor, the flow amount of fluid into the fluid usage information; and storing the fluid usage information in a memory. 16. The fluid metering method of claim 10, wherein the wireless communications unit is a ZigBee communications unit. 17. The fluid metering method of claim 10, wherein the battery is a lithium battery. 18. The fluid metering method of claim 10, wherein the monitoring is periodic. 19. 
A fluid metering device, comprising: a fluid input; a fluid output; a fluid channel between the fluid input and the fluid output; a flow measuring unit configured to measure a flow amount of fluid passing through the fluid channel; a battery; a wireless communications unit powered by the battery; and a monitoring unit configured to monitor the flow amount of fluid to determine whether the flow amount of fluid meets or exceeds a predetermined threshold, wherein the wireless communications unit is powered up if the flow amount of fluid meets or exceeds the predetermined threshold and the flow amount of fluid or fluid usage information is transmitted using the wireless communications unit. 20. The fluid metering device of claim 19, further comprising: a memory; and a controller, wherein the controller is configured to process the flow amount of fluid into the fluid usage information, and wherein the fluid usage information may be stored in the memory.
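The battery-saving scheme in the claims above gates the radio on a condition over the monitored usage: the wireless unit stays off until the change in fluid usage meets or exceeds a predetermined threshold, transmits, and powers down again. A minimal sketch of one monitoring step under that scheme (the function and radio interface names are illustrative assumptions, and the fake radio stands in for real hardware):

```python
# Hypothetical sketch: transmit only when the change since the last
# transmitted reading meets or exceeds the threshold, keeping the
# radio powered down the rest of the time.

def monitor_step(last_sent: float, current: float, threshold: float, radio) -> float:
    """Return the new `last_sent` value; transmit only on a big-enough change."""
    if abs(current - last_sent) >= threshold:
        radio.power_up()
        radio.transmit(current)
        radio.power_down()
        return current
    return last_sent

class FakeRadio:
    """Stand-in for the wireless communications unit, recording activity."""
    def __init__(self):
        self.sent = []
        self.powered = False
    def power_up(self):
        self.powered = True
    def transmit(self, value):
        self.sent.append(value)
    def power_down(self):
        self.powered = False
```

Called periodically (matching claim 9's periodic monitoring), small fluctuations never wake the radio; only a threshold-sized change triggers a transmission, after which the radio is powered down again.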
2,600
9,865
9,865
15,703,633
2,688
A patient monitoring system includes at least two wireless sensing devices, each configured to measure a different physiological parameter from a patient and wirelessly transmit a parameter dataset. The system further includes a receiver that receives each parameter dataset, a processor, and a monitoring regulation module executable on the processor to assign one of the at least two wireless sensing devices as a dominant wireless sensing device and at least one of the remaining wireless sensing devices as a subordinate wireless sensing device. The physiological parameter measured by the dominant wireless sensing device is a key parameter and the parameter dataset transmitted by the dominant wireless sensing device is a key parameter dataset. The key parameter dataset from the dominant wireless sensing device is processed to determine a stability indicator. The subordinate wireless sensing device is then operated based on the stability indicator for the key parameter.
1-20. (canceled) 21. A patient monitoring system comprising: at least two wireless sensing devices, each wireless sensing device configured to measure a different physiological parameter from a patient and wirelessly transmit a parameter dataset; a receiver that receives each parameter dataset from each of the at least two wireless sensing devices; a processor; a monitoring regulation module executable on the processor to: assign one of the at least two wireless sensing devices as a dominant wireless sensing device and at least one of the remaining wireless sensing devices as a subordinate wireless sensing device, wherein the physiological parameter measured by the dominant wireless sensing device is a key parameter and the parameter dataset transmitted by the dominant wireless sensing device is a key parameter dataset; determine a stability indicator for the key parameter based on the key parameter dataset from the dominant wireless sensing device; and control a measurement operation of the subordinate wireless sensing device based on the stability indicator for the key parameter. 22. The patient monitoring system of claim 21, wherein the dominant wireless sensing device is operated to continuously measure the key parameter, and a measurement interval for the subordinate wireless sensing device is assigned based on the stability indicator. 23. The patient monitoring system of claim 21, wherein the dominant wireless sensing device is assigned based on one of a diagnosis or a treatment history for the patient. 24. The patient monitoring system of claim 21, wherein the key parameter and dominant wireless sensing device are rotated amongst at least two of the two or more wireless sensing devices. 25. 
The patient monitoring system of claim 21, wherein controlling the measurement operation of the subordinate wireless sensing device includes operating the subordinate wireless sensing device to measure the respective physiological parameter upon determining that the stability indicator for the key parameter is outside of a predetermined range. 26. The patient monitoring system of claim 21, wherein controlling the measurement operation of the subordinate wireless sensing device includes operating the subordinate wireless sensing device to measure the respective physiological parameter in a low power mode when the stability indicator is within a predetermined range. 27. The patient monitoring system of claim 26, wherein the subordinate wireless sensing device in the low power mode measures the respective physiological parameter utilizing a reduced number of sensors. 28. The patient monitoring system of claim 26, wherein the subordinate wireless sensing device in the low power mode measures the respective physiological parameter at a less-frequent measurement interval. 29. The patient monitoring system of claim 21, wherein controlling the measurement operation of the subordinate wireless sensing device includes operating in a low power mode when the stability indicator for the key parameter is within a predetermined range, such that the respective subordinate wireless sensing device does not transmit any parameter dataset. 30. 
A method of monitoring a patient, the method comprising: providing two or more wireless sensing devices, each wireless sensing device configured to measure a different physiological parameter from a patient and communicatively connected to a computing system having a processor; assigning at the processor one of the at least two wireless sensing devices as a dominant wireless sensing device and at least one of the remaining wireless sensing devices as a subordinate wireless sensing device; operating the dominant wireless sensing device to measure a key parameter from a patient and wirelessly transmit a key parameter dataset; determining a stability indicator for the key parameter based on the key parameter dataset; and selectively operating the at least one subordinate wireless sensing device based on the stability indicator for the key parameter. 31. The method of claim 30, further comprising operating the dominant wireless sensing device to continuously measure the key parameter; and assigning a measurement interval for the subordinate wireless sensing device based on the stability indicator. 32. The method of claim 30, wherein the dominant wireless sensing device is assigned based on one of a diagnosis or a medical history for the patient. 33. The method of claim 30, further comprising assigning a new dominant wireless sensing device by selecting one of the subordinate wireless sensing devices to be the dominant wireless sensing device. 34. The method of claim 33, wherein the new dominant wireless sensing device is assigned based on previously transmitted parameter datasets or based on battery power constraints. 35. The method of claim 30, wherein controlling the measurement operation of the subordinate wireless sensing device includes operating the subordinate wireless sensing device to measure the respective physiological parameter upon determining that the stability indicator for the key parameter is outside of a predetermined range. 36. 
The method of claim 30, wherein selectively operating the subordinate wireless sensing devices includes operating at least one of the subordinate wireless sensing devices in a low power mode when the stability indicator for the key parameter is within a predetermined range. 37. The method of claim 36, wherein the subordinate wireless sensing device in the low power mode measures a physiological parameter from the patient at a predefined interval and stores a predetermined amount of most recent measurement data, and does not transmit any parameter dataset. 38. The method of claim 36, wherein the subordinate wireless sensing device in the low power mode measures a physiological parameter from the patient utilizing a reduced number of sensors. 39. A method of monitoring a patient, the method comprising: providing two or more wireless sensing devices, each wireless sensing device configured to measure a different physiological parameter from a patient and communicatively connected to a computing system having a processor; assigning at the processor one of the at least two wireless sensing devices as a dominant wireless sensing device and at least one of the remaining wireless sensing devices as a subordinate wireless sensing device; operating the dominant wireless sensing device to measure a key parameter from a patient and wirelessly transmit a key parameter dataset; determining a stability indicator for the key parameter based on the key parameter dataset; and selectively operating the at least one subordinate wireless sensing device based on the stability indicator for the key parameter, wherein selectively operating the subordinate wireless sensing devices includes operating at least one of the subordinate wireless sensing devices when the stability indicator for the key parameter is outside of a predetermined range. 40. 
The method of claim 38, wherein the key parameter and dominant wireless sensing device are rotated amongst at least two of the two or more wireless sensing devices.
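The monitoring-regulation logic in claims 21-22 and 30-31 amounts to: derive a stability indicator from the dominant device's key-parameter dataset, then assign the subordinate devices' measurement interval from it. The sketch below is a hypothetical reading of that logic; the use of a population standard deviation as the stability indicator and the specific threshold and interval constants are assumptions, not values from the patent.

```python
# Hypothetical sketch: stability indicator from the dominant device's recent
# key-parameter samples controls the subordinate measurement interval.
from statistics import pstdev

STABLE_RANGE = 2.0       # assumed stability threshold (std dev of key parameter)
SLOW_INTERVAL_S = 300    # assumed low-power interval when the key parameter is stable
FAST_INTERVAL_S = 10     # assumed interval when the key parameter is unstable


def stability_indicator(key_parameter_dataset):
    """Variability of the continuously-measured key parameter."""
    return pstdev(key_parameter_dataset)


def subordinate_interval(key_parameter_dataset):
    """Assign the subordinate device's measurement interval from the indicator."""
    if stability_indicator(key_parameter_dataset) <= STABLE_RANGE:
        return SLOW_INTERVAL_S   # stable: subordinate runs in low-power mode
    return FAST_INTERVAL_S       # unstable: subordinate measures frequently
```

Under this reading, claim 25's "outside of a predetermined range" branch is the `FAST_INTERVAL_S` path, and claims 26-29's low-power mode corresponds to the `SLOW_INTERVAL_S` path.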
2,600
9,866
9,866
15,293,251
2,651
A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device.
1. A method to provide binaural sound to a listener in a movie theater, the method comprising: determining different head orientations of the listener with respect to an image of a character as the image of the character moves to different locations from one side of a movie screen to an opposite side of the movie screen while being displayed to the listener on the movie screen during a movie in the movie theater; selecting, based on the head orientations of the listener with respect to the image of the character, head related transfer functions (HRTFs) so a voice of the character is heard to originate to the listener from the image of the character as the image of the character moves to the different locations from the one side of the movie screen to the opposite side of the movie screen; convolving, with a digital signal processor, the voice of the character with the HRTFs; and providing, through a wearable electronic device and to the listener, the voice of the character convolved with the HRTFs so the voice of the character localizes to the listener as originating from the image of the character as the image of the character moves to the different locations from the one side of the movie screen to the opposite side of the movie screen. 2. The method of claim 1, wherein the one side of the movie screen is a left side of the movie screen as seen from the listener and the opposite side of the movie screen is a right side of the movie screen as seen from the listener. 3. 
The method of claim 1 further comprising: determining azimuth angles and elevation angles between a forward facing head orientation of the listener and the image of the character that is located on the movie screen as the image of the character moves to the different locations from the one side of the movie screen to the opposite side of the movie screen; and selecting the HRTFs based on the azimuth angles and the elevation angles so the voice of the character localizes to the listener as originating from the image of the character as the image of the character moves to the different locations from the one side of the movie screen to the opposite side of the movie screen. 4. The method of claim 1 further comprising: determining a distance from the listener to where the character is visually represented in a scene in the movie; and adjusting a loudness of the voice of the character based on the distance. 5. The method of claim 1 further comprising: providing the listener with a selection of different characters that appear in the movie shown on the movie screen; receiving a selection of one of the characters; and providing the listener with binaural sound from the point-of-view of the one of the characters so the listener hears sounds as if the listener were the one of the characters. 6. The method of claim 1 further comprising: determining when a position of the character is no longer located on the movie screen; and switching, in response to the position of the character no longer being located on the movie screen, the voice of the character from being provided in binaural sound to being provided to the listener in stereo sound. 7. 
The method of claim 1 further comprising: determining when a position of the character is no longer located on the movie screen; and switching, in response to a position of the character no longer being located on the movie screen, a point-of-view for sound being provided to the listener to another character that is on the movie screen so the listener continues to localize an origin of sound to the movie screen. 8. A method to provide binaural sound to a listener from a point-of-view of a character in a feature length movie while the listener watches the feature length movie, the method comprising: determining an azimuth angle between the character and a source of sound in the feature length movie; selecting a head related transfer function (HRTF) that corresponds to the azimuth angle; convolving, with a digital signal processor, sound from the source of the sound with the HRTF; and providing, through a wearable electronic device and to the listener, the sound convolved with the HRTF so the listener hears the sound from the point-of-view of the character while the listener watches the feature length movie. 9. The method of claim 8 further comprising: receiving, from the listener, a selection of the character indicating that the listener desires to hear sound from the point-of-view of the character. 10. The method of claim 8 further comprising: tracking a head orientation of the listener; and switching, while the listener watches the feature length movie, the sound from being provided in binaural sound to the listener to being provided in stereo sound to the listener when the head orientation of the listener is not directed to the feature length movie. 11. The method of claim 8 further comprising: playing a signal sound to notify the listener when the point-of-view changes from the character to another character. 12. 
The method of claim 8 further comprising: providing, while the listener watches the feature length movie, a voice of the character to the listener in stereo sound while other sounds are simultaneously provided to the listener in binaural sound. 13. The method of claim 8 further comprising: providing, while the listener watches the feature length movie, a voice of the character to the listener in binaural sound so the voice of the character localizes to the listener one meter above the head of the listener while other sounds are simultaneously provided to the listener in stereo sound. 14. The method of claim 8 further comprising: providing, through the wearable electronic device and to the listener, sounds that localize to the listener as originating behind the listener while the listener watches the feature length movie. 15. The method of claim 8 further comprising: providing, through the wearable electronic device and to the listener, sounds that localize to the listener as originating between one to two meters away from the listener while the listener watches the feature length movie. 16. A method to provide binaural sound to a listener while the listener watches a movie so sounds from the movie localize behind the listener, the method comprising: obtaining head related transfer functions (HRTFs) for the listener; selecting a character in the movie as an audial point-of-view of the listener; convolving, with a digital signal processor and with the HRTFs, a sound in the movie that originates from behind the character so the sound originates from behind the listener at a location that is in empty space not occupied by a tangible object; and providing, through a wearable electronic device and to the listener, the sound convolved with the HRTFs so the listener localizes the sound to originate from behind the listener at the location that is in the empty space not occupied by a tangible object while the listener watches the movie. 17. 
The method of claim 16 further comprising: convolving, with the digital signal processor and with the HRTFs, a voice in the movie that originates in front of the character so the voice originates in front of the listener in the empty space; and providing, through the wearable electronic device and to the listener, the voice convolved with the HRTFs so the listener localizes the voice to originate in front of the listener in the empty space while the listener watches the movie. 18. The method of claim 16 further comprising: convolving, with the digital signal processor and with the HRTFs, a voice in the movie that originates above the character so the voice originates above the listener in the empty space; and providing, through the wearable electronic device and to the listener, the voice convolved with the HRTFs so the listener localizes the voice to originate above the listener in the empty space while the listener watches the movie. 19. The method of claim 16, wherein the location that is in the empty space is located one meter from a head of the listener. 20. The method of claim 16, wherein the location that is in the empty space is located at an azimuth angle with a value in a range between 135° to 225° when a line-of-sight of the listener is an azimuth angle of 0°.
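The HRTF-selection step in claims 1 and 3 can be sketched as: compute the azimuth between the listener's forward-facing head orientation and the character's on-screen position, then pick the measured HRTF nearest that angle. This is a hypothetical geometry-only illustration; the flat 2D coordinates, the nearest-neighbor lookup, and the example measurement grid are assumptions (real systems interpolate HRTFs over azimuth and elevation).

```python
# Hypothetical sketch of HRTF selection by azimuth as the character's image
# moves across the movie screen.
import math


def azimuth_deg(listener_pos, head_yaw_deg, source_pos):
    """Azimuth of the source relative to the listener's facing direction.

    0 degrees is straight ahead; positive angles are to the listener's right.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))
    # wrap into (-180, 180] relative to the current head orientation
    return (bearing - head_yaw_deg + 180) % 360 - 180


def select_hrtf(azimuth, measured_azimuths):
    """Choose the HRTF measured closest to the computed azimuth."""
    return min(measured_azimuths, key=lambda measured: abs(measured - azimuth))
```

As the image moves from one side of the screen to the other, repeated calls with the updated `source_pos` walk through successive HRTFs, which is what makes the convolved voice track the character.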
2,600
9,867
9,867
14,665,159
2,636
An apparatus in one embodiment includes a transceiver housing operable to be inserted into a port of a host system, the port comprising at least a first channel and a second channel. The transceiver housing may be a compact small form-factor (SFP) pluggable module housing. The apparatus also includes a printed circuit board mounted in the transceiver housing and an electrical interface of the printed circuit board operable to interface with the port of the host system. The electrical interface includes a first transmit pin and a first receive pin configured to interface with the first channel of the port and a second transmit pin and a second receive pin configured to interface with the second channel of the port. A first connector couples the first transmit pin and the second receive pin, and a second connector couples the second transmit pin and the first receive pin.
1-28. (canceled) 29. An apparatus comprising: a transceiver housing configured to be inserted into a port of a host system, the port comprising at least a first channel and a second channel; a printed circuit board mounted in the transceiver housing and comprising an electrical interface configured to interface with the port of the host system; and an identification module mounted on the printed circuit board, the identification module configured to: identify the apparatus to the host system; and indicate to the host system, during testing of the host system, that the apparatus is connected to a network even if the apparatus is not connected to the network. 30. The apparatus of claim 29, wherein the transceiver housing comprises a compact small form-factor (SFP) pluggable module housing. 31. The apparatus of claim 29, wherein the electrical interface comprises: a first connector physically coupling a first transmit pin of the first channel to a second receive pin of the second channel; and a second connector physically coupling a second transmit pin of the second channel to a first receive pin of the first channel. 32. The apparatus of claim 29, wherein the electrical interface is configured to physically couple the first and second channels together in order to provide loopback testing of the host system. 33. The apparatus of claim 29, further comprising an authentication module mounted on the printed circuit board, the authentication module configured to transmit an authentication signature to the host system. 34. The apparatus of claim 29, wherein the electrical interface is configured to simulate losses associated with fiber optic transmission lines. 35. 
An apparatus comprising: a transceiver housing configured to be inserted into a host system; and an identification module coupled to a printed circuit board within the transceiver housing and configured to: identify the apparatus to the host system; and indicate to the host system, during testing of the host system, that the apparatus is connected to a network even if the apparatus is not connected to the network. 36. The apparatus of claim 35, further comprising a printed circuit board coupled to the transceiver housing, the printed circuit board comprising traces to physically couple a first channel of the host system to a second channel of the host system, wherein the printed circuit board is configured to: receive first information on the first channel and transmit the first information to the second channel; and receive second information on the second channel and transmit the second information to the first channel. 37. The apparatus of claim 36, further comprising an authentication module mounted on the printed circuit board, the authentication module configured to transmit an authentication signature to the host system. 38. The apparatus of claim 35, wherein the apparatus is configured to physically couple a first channel of the host system and a second channel of the host system together in order to provide loopback testing of the host system. 39. The apparatus of claim 35, wherein the apparatus is configured to simulate losses associated with fiber optic transmission lines. 40. A method comprising: initiating a transceiver module connected to a port of a host system; querying an identification module in the transceiver module; determining the identity of the transceiver module; and receiving, during testing of the host system, an indication that the transceiver module is connected to a network even if the transceiver module is not connected to the network. 41. 
The method of claim 40, further comprising performing loopback testing of the host system using the transceiver module. 42. The method of claim 41, wherein the loopback testing comprises: transmitting first information on a first channel of the host system; determining that the first information is received on a second channel of the host system; transmitting second information on the second channel; and determining that the second information is received on the first channel. 43. The method of claim 40, wherein: the port comprises a first channel and a second channel; and the transceiver module comprises traces configured to physically couple the first channel to the second channel in order to provide loopback testing of the host system. 44. The method of claim 40, wherein the transceiver module comprises a printed circuit board (PCB) that is coupled to a compact SFP module housing, the PCB comprising traces configured to physically couple a first channel of the host system to a second channel of the host system in order to provide loopback testing of the host system. 45. The method of claim 40, wherein the transceiver module comprises a compact small form-factor (SFP) pluggable module housing. 46. The method of claim 40, further comprising: requesting that the transceiver module transmit an authentication signature to the host system; receiving the authentication signature from the transceiver module; and authenticating the transceiver module. 47. The method of claim 40, wherein the transceiver module is configured to simulate losses associated with fiber optic transmission lines. 48. The method of claim 40, wherein the transceiver module is configured to physically couple a first channel of the host system and a second channel of the host system together in order to provide loopback testing of the host system.
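The loopback behavior these claims recite, transmit pins cross-coupled to the opposite channel's receive pins, plus an identification module that reports a live link even with no network attached, can be modeled in a few lines. This is a minimal behavioral sketch, not the claimed hardware: `LoopbackModule`, `transmit`, `link_status`, and `loopback_test` are hypothetical names invented for illustration.

```python
class LoopbackModule:
    """Models the loopback transceiver: PCB traces route each channel's
    transmit pins to the other channel's receive pins, and the
    identification module reports a connected link during host testing."""

    def __init__(self):
        # Last payload seen on each channel's receive pins.
        self.rx = {1: None, 2: None}

    def transmit(self, channel, data):
        # Cross-coupled traces: TX on one channel arrives on the
        # other channel's RX pins.
        other = 2 if channel == 1 else 1
        self.rx[other] = data

    def link_status(self):
        # Identification module behavior from claim 29/35: indicate a
        # network connection even if none exists.
        return "connected"

def loopback_test(module):
    # Claim 42's test sequence: send on channel 1, verify on channel 2,
    # then send on channel 2 and verify on channel 1.
    module.transmit(1, b"first")
    module.transmit(2, b"second")
    return module.rx[2] == b"first" and module.rx[1] == b"second"
```

A host-side test harness would query the identification module first (claim 40), then run `loopback_test` and compare received payloads against what was sent.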
2,600
9,868
9,868
15,393,620
2,647
The present disclosure is directed to a method of enabling an application programmed on a mobile device to collect data. The application may operate in the background of the mobile device and be temporarily activated, and may be capable of collecting data while activated. The method may include receiving location data indicating a current location of the mobile device, generating a set of custom virtual borders around the current location, and instructing an operating system of the mobile device to temporarily activate the application when the mobile device crosses one or more of the custom virtual borders. The method is repeated upon temporary activation of the application, such that an updated set of custom virtual borders is generated when the mobile device crosses one or more of the previous custom virtual borders.
1. A method performed on a mobile device, wherein the mobile device includes an operating system configured to track a current location of the mobile device based on location data collected from hardware of the mobile device, the operating system being further configured to control permission for an application program included in the mobile device to access, collect or store data based on a set of one or more custom virtual borders, and wherein the mobile device further includes a software component external from the operating system and in communication with the operating system for providing the set of one or more custom virtual borders, wherein the method is performed by the software component and comprises: the software component receiving the location data indicating the current location of the mobile device; the software component generating the set of one or more custom virtual borders around the current location based on the received location data; the software component providing the generated set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the custom virtual borders; upon the mobile device crossing the one or more custom virtual borders, the software component receiving new location data indicating a new current location of the mobile device; the software component generating a new set of custom virtual borders around the new current location based on the new location data; and the software component providing the new set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the new custom virtual borders. 2. 
The method of claim 1, further comprising: logging the received location data; and instructing the mobile device to transmit the logged data to a remote location. 3. The method of claim 1, wherein generating the set of custom virtual borders comprises generating a main boundary enclosing an area including the current location of the mobile device, and wherein the software component instructs the operating system to permit the application program to access, collect or store data when the mobile device enters or exits the enclosed area. 4. The method of claim 1, wherein generating the set of custom virtual borders comprises generating a plurality of secondary boundaries, each secondary boundary enclosing an area not including the current location of the mobile device, and wherein the software component instructs the operating system to permit the application program to access, collect or store data when the mobile device enters one of the enclosed areas of the secondary boundaries. 5. The method of claim 1, wherein generating the set of custom virtual borders comprises generating a main boundary enclosing an area including the current location of the mobile device and a plurality of secondary boundaries, each secondary boundary enclosing an area not including the current location of the mobile device, and wherein the software component instructs the operating system to permit the application program to access, collect or store data when the mobile device exits the enclosed area of the main boundary and enters one of the enclosed areas of the secondary boundaries. 6. The method of claim 5, wherein the main boundary overlaps each of the secondary boundaries, such that the mobile device enters a secondary boundary before exiting the main boundary. 7. The method of claim 5, wherein the number of boundaries included in the set of custom virtual borders is less than the maximum number of geofence regions that the mobile device is operable to monitor. 8. 
The method of claim 5, further comprising the software component determining at least one of: a radius of the main boundary of the set of custom virtual borders; and a distance between a centerpoint of the main boundary and a centerpoint of a secondary boundary, based on the number of secondary boundaries that are generated. 9. The method of claim 5, wherein the software component generates at least one secondary boundary of the set of custom virtual borders having a radius of between about 100 meters and about 200 meters. 10. The method of claim 1, further comprising the software component instructing the operating system to temporarily activate the application, thereby enabling the application to collect location data indicating a current location of the mobile device. 11. The method of claim 10, wherein the location data comprises one or more of an identification of the mobile device, a timestamp associated with the location data, and an indication of either or both horizontal and vertical accuracy of the mobile device's position. 12. 
A non-transitory computer-readable storage medium of a mobile device, including an operating system configured to track a current location of the mobile device based on location data collected from hardware of the mobile device, the operating system being further configured to control permission for an application program included in the mobile device to access, collect or store data based on a set of one or more custom virtual borders, the storage medium having encoded thereon a software component external from the operating system and in communication with the operating system for providing the set of one or more custom virtual borders, wherein the software component comprises instructions configured to: receive the location data indicating the current location of the mobile device; generate the set of custom virtual borders around the current location of the mobile device based on the received location data; provide the generated set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the custom virtual borders; upon the mobile device crossing the one or more custom virtual borders, receive new location data indicating a new current location of the mobile device; generate a new set of custom virtual borders around the new current location based on the new location data; and provide the new set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the new custom virtual borders. 13. The non-transitory computer-readable storage medium of claim 12, wherein the set of custom virtual borders is generated only when the application program is activated. 14. 
The non-transitory computer-readable storage medium of claim 12, wherein the data collected by the application program is stored on the non-transitory computer-readable storage medium of the device, and wherein the mobile device is further configured to transmit the stored data to a remote storage medium. 15. The non-transitory computer-readable storage medium of claim 12, wherein the set of custom virtual borders comprises a main boundary enclosing an area including the current location of the mobile device, and wherein the instructions are configured to permit the application program to access, collect or store data when the mobile device exits the enclosed area. 16. The non-transitory computer-readable storage medium of claim 12, wherein the set of custom virtual borders comprises a plurality of secondary boundaries, each secondary boundary enclosing an area not including the current location of the mobile device, and wherein the instructions are configured to permit the application program to access, collect or store data when the mobile device enters one of the enclosed areas of the secondary boundaries. 17. The non-transitory computer-readable storage medium of claim 12, wherein the set of custom virtual borders comprises a main boundary enclosing an area including the current location of the mobile device and a plurality of secondary boundaries, each secondary boundary enclosing an area not including the current location of the mobile device, and wherein the instructions are configured to permit the application program to access, collect or store data when the mobile device exits the enclosed area of the main boundary and enters one of the enclosed areas of the secondary boundaries. 18. 
A mobile device comprising: a receiver for receiving location data indicating a current location of the mobile device; a processor; and a non-transitory computer-readable storage medium having encoded thereon: an operating system configured to cause the processor to control operations of the mobile device; a software component external from the operating system; and an application program interface for enabling communication between the operating system and the software component, wherein the operating system is configured to track the mobile device based on the location data, and to control permission for an application program included in the mobile device to access, collect or store data based on a set of one or more custom virtual borders, and wherein the software component comprises instructions configured to cause the processor to: receive the location data indicating the current location of the mobile device; generate the set of custom virtual borders around the current location of the mobile device based on the received location data; provide the generated set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the custom virtual borders; upon the mobile device crossing the one or more custom virtual borders, receive new location data indicating a new current location of the mobile device; generate a new set of custom virtual borders around the new current location based on the new location data; and provide the new set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the new custom virtual borders. 19. 
The mobile device of claim 18, further comprising a transmitter to transmit the stored location data to a remote storage medium, wherein the processor is further configured to erase the stored location data from the local storage medium after the stored location data is transmitted to the remote storage medium. 20. A mobile device comprising: a receiver for receiving location data indicating a current location of the mobile device; a processor for controlling operations of the mobile device; and a non-transitory computer-readable storage medium having encoded thereon: an operating system configured to cause the processor to control operations of the mobile device; a software component external from the operating system; and an application program interface for enabling communication between the operating system and the software component, wherein the software component comprises instructions configured to cause the processor to: dynamically generate a set of one or more virtual borders around a current location of the mobile device; and transmit the generated set of custom virtual borders to the operating system; wherein the operating system is programmed to detect the mobile device crossing one or more of the dynamically generated virtual borders based on the received location data; and wherein the operating system is further programmed to temporarily enable an application programmed on the mobile device to collect and store data upon detecting the mobile device crossing one or more of the dynamically generated virtual borders, and wherein the software component comprises instructions further configured to cause the processor to dynamically generate an updated set of virtual borders around the detected location of the mobile device upon the operating system detecting the mobile device crossing one or more of the dynamically generated virtual borders. 21. 
The mobile device of claim 20, further comprising a transmitter, and wherein the program instructions are further configured to cause the processor to: store location data corresponding to the location of the mobile device when one or more of the dynamically generated virtual borders is crossed; and transmit the stored location data to a remote location. 22. The method of claim 1, wherein the operating system restricts how frequently the application program is temporarily activated, whereby said restriction reduces drainage of a battery of the mobile device. 23. The computer-readable storage medium of claim 12, wherein the operating system is configured to restrict the application program from causing the processor to collect data, whereby said restriction reduces drainage of a battery of the mobile device. 24. The method of claim 1, wherein the software component is included within the application. 25. The method of claim 1, wherein the location data received by the software component is collected by data collection hardware included in the mobile device, and wherein the operating system is configured to control transmission of the location data from the data collection hardware to the software component.
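The claimed geofencing method, generating a main boundary around the current fix plus overlapping secondary boundaries and regenerating the whole set whenever any border is crossed, can be sketched as follows. This is an illustrative sketch, not the patented implementation: `make_borders`, `on_border_crossed`, and the specific radius and offset values are assumptions (the claims only require that secondary radii fall roughly in the 100 to 200 meter range and that each secondary boundary overlaps the main one).

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # rough conversion near the equator

def make_borders(lat, lon, main_radius_m=150.0, n_secondary=4):
    """Build one 'main' circular boundary centered on the current fix
    and n_secondary overlapping 'secondary' boundaries ringed around it.
    Center-to-center distance equals the radius, so each secondary
    overlaps the main boundary and is entered before the main is exited
    (claim 6)."""
    borders = [{"center": (lat, lon), "radius_m": main_radius_m,
                "role": "main"}]
    for k in range(n_secondary):
        angle = 2 * math.pi * k / n_secondary
        dlat = (main_radius_m * math.cos(angle)) / METERS_PER_DEG_LAT
        dlon = (main_radius_m * math.sin(angle)) / (
            METERS_PER_DEG_LAT * math.cos(math.radians(lat)))
        borders.append({"center": (lat + dlat, lon + dlon),
                        "radius_m": main_radius_m, "role": "secondary"})
    return borders

def on_border_crossed(new_lat, new_lon):
    # Repeat the method on each crossing: rebuild the set around the
    # new fix and hand it back to the OS geofencing API (the platform
    # registration call itself is not shown here).
    return make_borders(new_lat, new_lon)
```

The OS wakes the background application only on a border crossing, at which point the software component re-registers a fresh set, which is what lets tracking continue without the application running continuously.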
A mobile device comprising: a receiver for receiving location data indicating a current location of the mobile device; a processor; and a non-transitory computer-readable storage medium having encoded thereon: an operating system configured to cause the processor to control operations of the mobile device; a software component external from the operating system; and an application program interface for enabling communication between the operating system and the software component, wherein the operating system is configured to track the mobile device based on the location data, and to control permission for an application program included in the mobile device to access, collect or store data based on a set of one or more custom virtual borders, and wherein the software component comprises instructions configured to cause the processor to: receive the location data indicating the current location of the mobile device; generate the set of custom virtual borders around the current location of the mobile device based on the received location data; provide the generated set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the custom virtual borders; upon the mobile device crossing the one or more custom virtual borders, receive new location data indicating a new current location of the mobile device; generate a new set of custom virtual borders around the new current location based on the new location data; and provide the new set of custom virtual borders to the operating system, thereby causing the operating system to permit the application program to access, collect or store data only when the mobile device crosses one or more of the new custom virtual borders. 19. 
The mobile device of claim 18, further comprising a transmitter to transmit the stored location data to a remote storage medium, wherein the processor is further configured to erase the stored location data from the local storage medium after the stored location data is transmitted to the remote storage medium. 20. A mobile device comprising: a receiver for receiving location data indicating a current location of the mobile device; a processor for controlling operations of the mobile device; and a non-transitory computer-readable storage medium having encoded thereon: an operating system configured to cause the processor to control operations of the mobile device; a software component external from the operating system; and an application program interface for enabling communication between the operating system and the software component, wherein the software component comprises instructions configured to cause the processor to: dynamically generate a set of one or more virtual borders around a current location of the mobile device; and transmit the generated set of custom virtual borders to the operating system; wherein the operating system is programmed to detect the mobile device crossing one or more of the dynamically generated virtual borders based on the received location data; and wherein the operating system is further programmed to temporarily enable an application programmed on the mobile device to collect and store data upon detecting the mobile device crossing one or more of the dynamically generated virtual borders, and wherein the software component comprises instructions further configured to cause the processor to dynamically generate an updated set of virtual borders around the detected location of the mobile device upon the operating system detecting the mobile device crossing one or more of the dynamically generated virtual borders. 21. 
The mobile device of claim 20, further comprising a transmitter, and wherein the program instructions are further configured to cause the processor to: store location data corresponding to the location of the mobile device when one or more of the dynamically generated virtual borders is crossed; and transmit the stored location data to a remote location. 22. The method of claim 1, wherein the operating system restricts how frequently the application program is temporarily activated, whereby said restriction reduces drainage of a battery of the mobile device. 23. The computer-readable storage medium of claim 12, wherein the operating system is configured to restrict the application program from causing the processor to collect data, whereby said restriction reduces drainage of a battery of the mobile device. 24. The method of claim 1, wherein the software component is included within the application. 25. The method of claim 1, wherein the location data received by the software component is collected by data collection hardware included in the mobile device, and wherein the operating system is configured to control transmission of the location data from the data collection hardware to the software component.
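The border-regeneration loop recited in claims 1, 5, and 8 can be sketched in Python. This is a minimal illustration under stated assumptions: the names (`generate_borders`, `offset_point`, the `role`/`radius_m` fields) are invented for the example, the 300 m main radius and the center-to-center spacing are arbitrary choices made to satisfy the claim-6 overlap condition, and the 150 m secondary radius falls within the roughly 100-to-200-meter range of claim 9. Nothing here is drawn from an actual implementation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def offset_point(lat, lon, bearing_deg, distance_m):
    """Return the (lat, lon) reached by moving distance_m from (lat, lon)
    along the given compass bearing (equirectangular approximation)."""
    d_lat = (distance_m * math.cos(math.radians(bearing_deg))) / EARTH_RADIUS_M
    d_lon = (distance_m * math.sin(math.radians(bearing_deg))) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat))
    )
    return lat + math.degrees(d_lat), lon + math.degrees(d_lon)

def generate_borders(lat, lon, n_secondary=4, main_radius_m=300.0,
                     secondary_radius_m=150.0):
    """Build one main boundary around the current location plus n_secondary
    secondary boundaries spaced evenly around it (claims 3-5)."""
    # Spacing is kept below main_radius + secondary_radius so every
    # secondary boundary overlaps the main one (claim 6).
    spacing_m = main_radius_m + secondary_radius_m * 0.5
    borders = [{"role": "main", "center": (lat, lon),
                "radius_m": main_radius_m}]
    for i in range(n_secondary):
        bearing = 360.0 * i / n_secondary
        center = offset_point(lat, lon, bearing, spacing_m)
        borders.append({"role": "secondary", "center": center,
                        "radius_m": secondary_radius_m})
    return borders

def on_border_crossed(new_lat, new_lon):
    """Claim-1 loop: on a crossing, regenerate borders around the new fix
    and hand them back to the operating system."""
    return generate_borders(new_lat, new_lon)
```

The secondary count plus the main boundary (five regions here) is deliberately small, matching claim 7's requirement that the set stay below the platform's geofence-monitoring limit.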
2,600
9,869
9,869
13,046,263
2,621
Systems and methods for pre-touch and true touch are disclosed. For example, one described system for pre-touch and true touch includes a touch-sensitive interface configured to detect a user interaction and transmit a first interface signal based at least in part on the user interaction. The system also includes a processor in communication with the touch-sensitive interface and configured to receive the first interface signal and determine a haptic effect based at least in part on the first interface signal. The processor is further configured to preload a haptic signal associated with the haptic effect. The system also includes a cache in communication with the processor and configured to store the preloaded haptic signal for a time period and then transmit the haptic signal, and a haptic effect generator in communication with the cache and configured to receive the haptic signal from the cache and, in response, output a haptic effect based at least in part on the haptic signal.
1. A system comprising: a touch-sensitive interface configured to detect a user interaction and transmit a first interface signal based at least in part on the user interaction; a processor in communication with the touch-sensitive interface and configured to receive the first interface signal and determine a haptic effect based at least in part on the first interface signal, the processor further configured to preload a haptic signal associated with the haptic effect; a cache in communication with the processor and configured to store the preloaded haptic signal for a time period, then transmit the haptic signal; and a haptic effect generator in communication with the cache and configured to receive the haptic signal from the cache and output a haptic effect based at least in part on the haptic signal. 2. The system of claim 1, wherein the touch-sensitive interface comprises a touch-screen. 3. The system of claim 1, wherein the touch-sensitive interface is configured: to detect the user interaction before a physical contact; and to detect the physical contact. 4. The system of claim 3, wherein the touch-sensitive interface is configured to transmit the first interface signal based at least in part on the user interaction before the physical contact, and wherein the touch-sensitive interface is configured to transmit a second interface signal based at least in part on the physical contact. 5. The system of claim 4, wherein: the processor is configured to determine the haptic effect based at least in part on the first interface signal, and the cache is configured to output the haptic signal based at least in part on the second interface signal. 6. The system of claim 3, wherein the period of time is the time between detecting user interaction and detecting the physical contact. 7. The system of claim 3, wherein the cache transmits the haptic signal to the haptic effect generator after detecting the physical contact. 8. 
The system of claim 1, wherein the time period is predefined. 9. The system of claim 1, wherein the cache comprises a memory. 10. The system of claim 1, wherein the cache comprises one of: a capacitor, an inductor, or a battery. 11. The system of claim 1, wherein: the cache comprises a flywheel, and storing the haptic signal comprises rotating the flywheel. 12. The system of claim 11, further comprising a flywheel brake and wherein outputting the haptic effect comprises slowing the flywheel. 13. The system of claim 1, further comprising a sensor configured to detect a user interaction above the surface of the touch-sensitive interface and transmit a sensor signal to the processor corresponding to the user interaction. 14. The system of claim 13, wherein the sensor comprises one of: an optical sensor, an infrared sensor, or a motion sensor. 15. The system of claim 1, further comprising a housing configured to contain the processor, the cache, and the haptic effect generator. 16. The system of claim 15, wherein the housing comprises a handheld device housing. 17. The system of claim 15, wherein the haptic effect generator is configured to output the haptic effect onto the housing. 18. A method for generating a haptic effect comprising: receiving a first interface signal from a touch-sensitive interface configured to detect a user interaction and transmit an interface signal based at least in part on the user interaction; determining a haptic effect based at least in part on the first interface signal; preloading a haptic signal associated with the haptic effect to a cache configured to store the haptic signal for a period of time; and transmitting the haptic signal to a haptic effect generator configured to output the haptic effect. 19. The method of claim 18, wherein the touch-sensitive interface is configured to detect the user interaction before a physical contact and to detect the physical contact. 20. 
The method of claim 19, wherein the period of time is the period between detecting the user interaction and detecting physical contact. 21. The method of claim 19, further comprising receiving a second interface signal from the touch-sensitive interface, and wherein the touch-sensitive interface is configured to transmit the first interface signal based at least in part on the user interaction before the physical contact and transmit the second interface signal based at least in part on the physical contact. 22. The method of claim 21, wherein the haptic signal is transmitted to the haptic effect generator after receiving the second interface signal. 23. A computer readable medium encoded with processor executable program code, the computer readable medium comprising: program code to receive an interface signal from a touch-sensitive interface configured to detect a user interaction, the touch-sensitive interface further configured to transmit an interface signal based at least in part on the user interaction; program code to determine a haptic effect based at least in part on the interface signal; program code to preload a haptic signal associated with the haptic effect to a cache configured to store the haptic signal for a time period; and program code to transmit the stored haptic signal to a haptic effect generator configured to output the haptic effect.
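The pre-touch/true-touch flow of claims 1, 4, 5, and 7 — determine the effect from the first (hover) interface signal, preload the haptic signal into the cache, and have the cache transmit it to the generator only on the second (physical-contact) signal — can be sketched as follows. All class and method names here are hypothetical, and the `waveform:` string stands in for a real haptic drive signal; this is not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HapticCache:
    """Holds a preloaded haptic signal between pre-touch and true touch."""
    signal: Optional[str] = None

    def preload(self, signal):
        self.signal = signal

    def flush(self):
        # Transmit the stored signal and empty the cache.
        sig, self.signal = self.signal, None
        return sig

class HapticSystem:
    def __init__(self):
        self.cache = HapticCache()
        self.output_log = []  # stand-in for the haptic effect generator

    def determine_effect(self, interface_signal):
        # Map the pre-touch interface signal to a haptic effect (claim 1).
        return f"effect-for-{interface_signal}"

    def on_pre_touch(self, interface_signal):
        # First interface signal: interaction detected before contact.
        effect = self.determine_effect(interface_signal)
        self.cache.preload(f"waveform:{effect}")

    def on_true_touch(self):
        # Second interface signal: physical contact. The cache transmits
        # the stored signal to the generator (claims 5 and 7).
        sig = self.cache.flush()
        if sig is not None:
            self.output_log.append(sig)
```

The storage window here is exactly claim 6's "period of time": the interval between the hover detection and the physical contact.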
2,600
9,870
9,870
14,189,009
2,621
The present invention provides an information processing method and apparatus. The method includes detecting, at a first electronic device, whether a first operation for adjusting a first parameter of a first component of the first electronic device is received; generating an adjustment instruction based on the first operation; and executing the adjustment instruction and transmitting the adjustment instruction to a second electronic device capable of exchanging data with the first electronic device for multi-device cooperation, so as to enable the second electronic device to adjust the first parameter of a second component of the second electronic device by executing the adjustment instruction. The second component and the first component are of the same type.
1. An information processing method, comprising: detecting, at a first electronic device, whether a first operation for adjusting a first parameter of a first component of the first electronic device is received; generating an adjustment instruction based on the first operation; and executing the adjustment instruction and transmitting the adjustment instruction to a second electronic device capable of exchanging data with the first electronic device for multi-device cooperation, so as to enable the second electronic device to adjust the first parameter of a second component of the second electronic device by executing the adjustment instruction, wherein the second component and the first component are of a same type. 2. The method of claim 1, further comprising, prior to detecting whether the first operation is received: generating a control instruction upon detecting that the second electronic device is attached to the first electronic device; and executing, at the first electronic device, the control instruction, and/or transmitting the control instruction to the second electronic device to enable the second electronic device to execute the control instruction, wherein the first electronic device and/or the second electronic device execute the control instruction such that the first parameter of the first component of the first electronic device and the first parameter of the second component of the second electronic device have a same value. 3. The method of claim 1, wherein the first component is a first audio output unit of the first electronic device, and the second component is a second audio output unit of the second electronic device, said detecting whether the first operation is received comprises: detecting whether a volume adjustment operation for adjusting a volume value of the first audio output unit is received. 4. 
The method of claim 3, wherein said generating the adjustment instruction based on the first operation comprises: generating a volume adjustment instruction based on the volume adjustment operation. 5. The method of claim 4, wherein said executing the adjustment instruction and transmitting the adjustment instruction to the second electronic device so as to enable the second electronic device to adjust the first parameter of the second component of the second electronic device by executing the adjustment instruction comprises: executing the volume adjustment instruction to adjust the volume value of the first audio output unit from a first volume value to a second volume value different from the first volume value; and transmitting the volume adjustment instruction to the second electronic device, so as to enable the second electronic device to adjust a volume value of the second audio output unit from the first volume value to the second volume value by executing the volume adjustment instruction. 6. The method of claim 1, wherein the first component is a first display unit of the first electronic device, and the second component is a second display unit of the second electronic device, said detecting whether the first operation is received comprises: detecting whether a display parameter adjustment operation for adjusting a display parameter of the first display unit is received. 7. The method of claim 6, wherein said generating the adjustment instruction based on the first operation comprises: generating a display parameter adjustment instruction based on the display parameter adjustment operation. 8. 
The method of claim 7, wherein said executing the adjustment instruction and transmitting the adjustment instruction to the second electronic device so as to enable the second electronic device to adjust the first parameter of the second component of the second electronic device by executing the adjustment instruction comprises: executing the display parameter adjustment instruction to adjust the display parameter of the first display unit from a first display parameter value to a second display parameter value different from the first display parameter value; and transmitting the display parameter adjustment instruction to the second electronic device, so as to enable the second electronic device to adjust a display parameter value of the second display unit from the first display parameter value to the second display parameter value by executing the display parameter adjustment instruction. 9. An information processing method, comprising: receiving, at a second electronic device, an adjustment instruction that is generated at a first electronic device in response to receiving a first operation from a user for adjusting a first parameter of a first component of the first electronic device, the first electronic device being capable of exchanging data with the second electronic device for multi-device cooperation; and executing the adjustment instruction to adjust the first parameter of a second component of the second electronic device, wherein the first electronic device is capable of executing the adjustment instruction to adjust the first parameter of the first component of the first electronic device, and wherein the second component and the first component are of a same type. 10. 
The method of claim 9, further comprising, prior to receiving the adjustment instruction: receiving a control instruction upon detecting that the second electronic device is attached to the first electronic device; and executing the control instruction, wherein the second electronic device executes the control instruction, or the first and second electronic devices execute the control instruction simultaneously such that the first parameter of the first component of the first electronic device and the first parameter of the second component of the second electronic device have a same value. 11. The method of claim 9, wherein said receiving the adjustment instruction comprises: receiving a volume adjustment instruction; or receiving a display parameter adjustment instruction. 12. The method of claim 11, wherein the first component is a first audio output unit of the first electronic device, and the second component is a second audio output unit of the second electronic device, said executing the adjustment instruction to adjust the first parameter of the second component of the second electronic device comprises: executing the volume adjustment instruction to adjust a volume value of the second audio output unit from a first volume value to a second volume value different from the first volume value, and wherein the first electronic device is capable of adjusting a volume value of the first audio output unit from the first volume value to the second volume value by executing the volume adjustment instruction. 13. 
The method of claim 11, wherein the first component is a first display unit of the first electronic device, and the second component is a second display unit of the second electronic device, said executing the adjustment instruction to adjust the first parameter of the second component of the second electronic device comprises: executing the display parameter adjustment instruction to adjust a display parameter value of the second display unit from a first display parameter value to a second display parameter value different from the first display parameter value, and wherein the first electronic device is capable of adjusting a display parameter value of the first display unit from the first display parameter value to the second display parameter value by executing the display parameter adjustment instruction. 14. An apparatus for use in a first electronic device, the first electronic device being capable of exchanging data with the second electronic device for multi-device cooperation, the apparatus comprising: a first detection unit configured to detect whether a first operation for adjusting a first parameter of a first component of the first electronic device is received; a generation unit configured to generate an adjustment instruction based on the first operation; and a first execution unit configured to execute the adjustment instruction and transmit the adjustment instruction to a second electronic device capable of exchanging data with the first electronic device for multi-device cooperation, so as to enable the second electronic device to adjust the first parameter of a second component of the second electronic device by executing the adjustment instruction, wherein the second component and the first component are of a same type. 15. 
The apparatus of claim 14, further comprising: a second detection unit configured to generate a control instruction upon detecting that the second electronic device is attached to the first electronic device; and a second execution unit configured to execute the control instruction and/or transmit the control instruction to the second electronic device to enable the second electronic device to execute the control instruction, wherein the first electronic device and/or the second electronic device execute the control instruction such that the first parameter of the first component of the first electronic device and the first parameter of the second component of the second electronic device have a same value. 16. An apparatus for use in a second electronic device, the second electronic device being capable of exchanging data with the first electronic device for multi-device cooperation, the apparatus comprising: a first reception unit configured to receive an adjustment instruction that is generated at a first electronic device in response to receiving a first operation from a user for adjusting a first parameter of a first component of the first electronic device; and a first execution unit configured to execute the adjustment instruction to adjust the first parameter of a second component of the second electronic device, wherein the first electronic device is capable of executing the adjustment instruction to adjust the first parameter of the first component of the first electronic device. 17. 
The apparatus of claim 16, further comprising: a second reception unit configured to receive a control instruction upon detecting that the second electronic device is attached to the first electronic device; and a second execution unit configured to execute the control instruction, wherein the second electronic device executes the control instruction, or the first and second electronic devices execute the control instruction simultaneously, such that the first parameter of the first component of the first electronic device and the first parameter of the second component of the second electronic device have a same value.
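The adjustment-instruction flow in the claims above (detect a user operation on one device, generate an instruction, execute it locally, and transmit it so an attached peer adjusts its same-type component, after an attachment-time control instruction aligns both devices) can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Device` class, tuple-keyed parameter table, and method names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AdjustmentInstruction:
    component_type: str   # e.g. "audio_output" or "display" (same type on both devices)
    parameter: str        # e.g. "volume" or "brightness"
    value: int            # target parameter value after adjustment

class Device:
    """Hypothetical electronic device participating in multi-device cooperation."""

    def __init__(self, name):
        self.name = name
        self.params = {("audio_output", "volume"): 5,
                       ("display", "brightness"): 50}
        self.peer = None  # second device, set on attachment

    def attach(self, peer):
        # Control instruction on attachment: bring the peer's parameters
        # to the same values so both components start aligned.
        self.peer = peer
        peer.peer = self
        peer.params.update(self.params)

    def on_user_operation(self, component_type, parameter, value):
        # Detect the first operation, generate the adjustment instruction,
        # execute it locally, and transmit it to the attached peer.
        instr = AdjustmentInstruction(component_type, parameter, value)
        self.execute(instr)
        if self.peer is not None:
            self.peer.execute(instr)  # peer adjusts its same-type component
        return instr

    def execute(self, instr):
        self.params[(instr.component_type, instr.parameter)] = instr.value
```

Usage: after `tablet.attach(dock)`, a call such as `tablet.on_user_operation("audio_output", "volume", 8)` leaves the volume of both devices at the same new value, mirroring the "same value" condition in the control-instruction claims.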
The present invention provides an information processing method and apparatus. The method includes detecting, at a first electronic device, whether a first operation for adjusting a first parameter of a first component of the first electronic device is received; generating an adjustment instruction based on the first operation; and executing the adjustment instruction and transmitting the adjustment instruction to a second electronic device capable of exchanging data with the first electronic device for multi-device cooperation, so as to enable the second electronic device to adjust the first parameter of a second component of the second electronic device by executing the adjustment instruction. The second component and the first component are of the same type.

1. An information processing method, comprising: detecting, at a first electronic device, whether a first operation for adjusting a first parameter of a first component of the first electronic device is received; generating an adjustment instruction based on the first operation; and executing the adjustment instruction and transmitting the adjustment instruction to a second electronic device capable of exchanging data with the first electronic device for multi-device cooperation, so as to enable the second electronic device to adjust the first parameter of a second component of the second electronic device by executing the adjustment instruction, wherein the second component and the first component are of a same type. 2. 
The method of claim 1, further comprising, prior to detecting whether the first operation is received: generating a control instruction upon detecting that the second electronic device is attached to the first electronic device; and executing, at the first electronic device, the control instruction, and/or transmitting the control instruction to the second electronic device to enable the second electronic device to execute the control instruction, wherein the first electronic device and/or the second electronic device execute the control instruction such that the first parameter of the first component of the first electronic device and the first parameter of the second component of the second electronic device have a same value. 3. The method of claim 1, wherein the first component is a first audio output unit of the first electronic device, and the second component is a second audio output unit of the second electronic device, said detecting whether the first operation is received comprises: detecting whether a volume adjustment operation for adjusting a volume value of the first audio output unit is received. 4. The method of claim 3, wherein said generating the adjustment instruction based on the first operation comprises: generating a volume adjustment instruction based on the volume adjustment operation. 5. 
The method of claim 4, wherein said executing the adjustment instruction and transmitting the adjustment instruction to the second electronic device so as to enable the second electronic device to adjust the first parameter of the second component of the second electronic device by executing the adjustment instruction comprises: executing the volume adjustment instruction to adjust the volume value of the first audio output unit from a first volume value to a second volume value different from the first volume value; and transmitting the volume adjustment instruction to the second electronic device, so as to enable the second electronic device to adjust a volume value of the second audio output unit from the first volume value to the second volume value by executing the volume adjustment instruction. 6. The method of claim 1, wherein the first component is a first display unit of the first electronic device, and the second component is a second display unit of the second electronic device, said detecting whether the first operation is received comprises: detecting whether a display parameter adjustment operation for adjusting a display parameter of the first display unit is received. 7. The method of claim 6, wherein said generating the adjustment instruction based on the first operation comprises: generating a display parameter adjustment instruction based on the display parameter adjustment operation. 8. 
The method of claim 7, wherein said executing the adjustment instruction and transmitting the adjustment instruction to the second electronic device so as to enable the second electronic device to adjust the first parameter of the second component of the second electronic device by executing the adjustment instruction comprises: executing the display parameter adjustment instruction to adjust the display parameter of the first display unit from a first display parameter value to a second display parameter value different from the first display parameter value; and transmitting the display parameter adjustment instruction to the second electronic device, so as to enable the second electronic device to adjust a display parameter value of the second display unit from the first display parameter value to the second display parameter value by executing the display parameter adjustment instruction. 9. An information processing method, comprising: receiving, at a second electronic device, an adjustment instruction that is generated at a first electronic device in response to receiving a first operation from a user for adjusting a first parameter of a first component of the first electronic device, the first electronic device being capable of exchanging data with the second electronic device for multi-device cooperation; and executing the adjustment instruction to adjust the first parameter of a second component of the second electronic device, wherein the first electronic device is capable of executing the adjustment instruction to adjust the first parameter of the first component of the first electronic device, and wherein the second component and the first component are of a same type. 10. 
The method of claim 9, further comprising, prior to receiving the adjustment instruction: receiving a control instruction upon detecting that the second electronic device is attached to the first electronic device; and executing the control instruction, wherein the second electronic device executes the control instruction, or the first and second electronic devices execute the control instruction simultaneously such that the first parameter of the first component of the first electronic device and the first parameter of the second component of the second electronic device have a same value. 11. The method of claim 9, wherein said receiving the adjustment instruction comprises: receiving a volume adjustment instruction; or receiving a display parameter adjustment instruction. 12. The method of claim 11, wherein the first component is a first audio output unit of the first electronic device, and the second component is a second audio output unit of the second electronic device, said executing the adjustment instruction to adjust the first parameter of the second component of the second electronic device comprises: executing the volume adjustment instruction to adjust a volume value of the second audio output unit from a first volume value to a second volume value different from the first volume value, and wherein the first electronic device is capable of adjusting a volume value of the first audio output unit from the first volume value to the second volume value by executing the volume adjustment instruction. 13. 
TechCenter: 2,600
Unnamed: 0: 9,871
level_0: 9,871
ApplicationNumber: 15,592,397
ArtUnit: 2,683
A method for providing incident-specific information at a vehicle computer is described. In operation, the vehicle computer receives an incident assignment including information related to a current incident from a dispatch computer. The vehicle computer further receives an information request and, in response, identifies a first context parameter by co-relating the information request with information related to the current incident and a second context parameter by co-relating the information request with information not related to the current incident. The vehicle computer generates a response to the information request based on the first context parameter when the current incident status identifies that the vehicle is responding to the current incident. Otherwise, the vehicle computer generates a response to the information request based on the second context parameter.
1. A method for providing incident specific information at a vehicle computer, the method comprising: receiving, at a network interface of the vehicle computer, an incident assignment including information related to a current incident from a dispatch computer; receiving, at an input interface of the vehicle computer, a query from an occupant of a vehicle associated with the vehicle computer; identifying, at an electronic processor of the vehicle computer, a first context parameter by co-relating the query with information related to the current incident and a second context parameter by co-relating the query with information not related to the current incident; determining, at the electronic processor, in response to receiving the query, whether a current incident status of the vehicle identifies that the vehicle is responding to the current incident or not responding to the current incident; generating, at the electronic processor, a response to the query based on the first context parameter when the current incident status identifies that the vehicle is responding to the current incident; and generating, at the electronic processor, a response to the query based on the second context parameter when the current incident status identifies that the vehicle is not responding to the current incident. 2. The method of claim 1, further comprising: receiving, at the input interface, a signal indicating acceptance of the incident assignment; and updating, at a memory, the current incident status to identify that the vehicle is responding to the current incident in response to receiving the signal. 3. The method of claim 1, further comprising: determining, at the electronic processor, a location of the vehicle; and updating, at a memory, the current incident status to identify that the vehicle is responding to the current incident when the location of the vehicle indicates that the vehicle is en-route to a location of the current incident. 4. 
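The core of claim 1 is a two-way dispatch: the same query is co-related with both incident-related and non-incident information, and the current incident status selects which context drives the response. A minimal sketch of that selection logic follows; the dictionary-based contexts, the substring matching, and the `"responding"` status value are illustrative assumptions, not the claimed implementation.

```python
def generate_response(query, incident_status, incident_info, general_info):
    """Select the context used to answer a query (sketch of the claim-1 flow).

    query: free-text request from the vehicle occupant
    incident_status: e.g. "responding" when en-route to the current incident
    incident_info / general_info: key -> value maps of incident-related
    and non-incident-related information
    """
    # First context parameter: co-relate the query with incident information.
    first_ctx = {k: v for k, v in incident_info.items() if k in query.lower()}
    # Second context parameter: co-relate with non-incident information.
    second_ctx = {k: v for k, v in general_info.items() if k in query.lower()}
    # Incident status decides which context generates the response.
    return first_ctx if incident_status == "responding" else second_ctx
```

For example, a "route to location" query resolves to the incident location while responding, and to a non-incident location (say, the station) otherwise.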
The method of claim 1, wherein the information related to the current incident includes at least one of an identifier of the current incident, location of the current incident, type and severity level of the current incident, and user profiles related to the current incident. 5. The method of claim 1, wherein the first context parameter is identified based at least on one of an identifier of the current incident, location of the current incident, type and severity level of the current incident, user profiles related to the current incident, operating conditions of the vehicle, location of the vehicle, and information related to a user associated with the vehicle. 6. The method of claim 1, wherein the query includes a request to contact another user and further wherein generating the response further comprises: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtaining, based on the first context parameter, contact information of a first user profile related to the current incident that matches with the request, and configuring a radio associated with the vehicle to initiate a communication based on the contact information of the first user profile; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtaining, based on the second context parameter, contact information of a second user profile that is not related to the current incident and further matches with the request, and configuring the radio to initiate a communication based on the contact information of the second user profile. 7. 
The method of claim 1, wherein the query includes a request to join a talk group and further wherein generating the response comprises: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtaining, based on the first context parameter, first talk group information related to the current incident, and configuring a radio associated with the vehicle to join one or more talk groups of the current incident based on the first talk group information; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtaining, based on the second context parameter, second talk group information that is not related to the current incident, and configuring the radio to join one or more talk groups based on the second talk group information. 8. The method of claim 1, wherein the query includes a request to check status of one or more users and further wherein generating the response further comprises: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtaining, based on the first context parameter, status of one or more first user profiles related to the current incident; and providing, at an output interface of the vehicle computer, notification identifying the status of the one or more first user profiles; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtaining, based on the second context parameter, status of one or more second user profiles that are not related to the current incident; and providing, at the output interface, notification identifying the status of the one or more second user profiles. 9. 
The method of claim 1, wherein the query includes a request for navigation and further wherein generating the response based on the first context parameter further comprises: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtaining, based on the first context parameter, a first location that is related to the current incident, and providing, at an output interface of the vehicle computer, information identifying one or more routes to the first location of the current incident; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident; obtaining, based on the second context parameter, a second location that is not related to the current incident; and providing, at the output interface, information identifying one or more routes to the second location. 10. The method of claim 1, wherein the query includes a request to adjust an operating condition of the vehicle and further wherein generating the response further comprises: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtaining, based on the first context parameter, a first set of one or more operating parameters for the vehicle that is mapped to the current incident, and adjusting the operating condition of the vehicle based on the first set of one or more operating parameters; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtaining, based on the second context parameter, a second set of one or more operating parameters for the vehicle that is not mapped to the current incident; and adjusting the operating condition of the vehicle based on the second set of one or more operating parameters. 11. 
The method of claim 1, wherein the query is received via an audio signal, the method further comprising: processing the audio signal to extract one or more keywords; identifying the first context parameter by comparing the extracted keywords with information related to the current incident; and identifying the second context parameter by comparing the extracted keywords with information not related to the current incident. 12. A vehicle computer, comprising a network interface configured to receive an incident assignment including information related to a current incident from a dispatch computer; an input interface configured to receive a query from an occupant of a vehicle associated with the vehicle computer; and an electronic processor communicatively coupled to the network interface and the input interface, wherein the electronic processor is configured to identify a first context parameter by co-relating the query with information related to the current incident and a second context parameter by co-relating the query with information not related to the current incident, determine, in response to receiving the query from the occupant of the vehicle, whether a current incident status of the vehicle identifies that the vehicle is responding to the current incident or not responding to the current incident, generate a response to the query based on the first context parameter when the current incident status identifies that the vehicle is responding to the current incident, and generate a response to the query based on the second context parameter when the current incident status identifies that the vehicle is not responding to the current incident. 13. 
The vehicle computer of claim 12, further comprising: a memory configured to store non-transitory computer readable instructions including the current incident status, wherein the electronic processor updates the current incident status at the memory to identify that the vehicle is responding to the current incident when a signal indicating acceptance of the incident assignment is received at the input interface. 14. The vehicle computer of claim 13, wherein the electronic processor determines a location of the vehicle and updates the current incident status at the memory to identify that the vehicle is responding to the current incident when a location of the vehicle indicates that the vehicle is en-route to a location of the current incident. 15. The vehicle computer of claim 12, wherein the information related to the current incident includes at least one of an identifier of the current incident, location of the current incident, type and severity level of the current incident, and user profiles related to the current incident. 16. The vehicle computer of claim 12, wherein the first context parameter is identified based at least on one of an identifier of the current incident, location of the current incident, type and severity level of the current incident, user profiles related to the current incident, operating conditions of the vehicle, location of the vehicle, and information related to a user associated with the vehicle. 17. 
The vehicle computer of claim 12, wherein the query includes a request to contact another user and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, contact information of a first user profile related to the current incident that matches with the request, and configure a radio associated with the vehicle to initiate a communication based on the contact information; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtain, based on the second context parameter, contact information of a second user profile that is not related to the current incident and further matches with the request, and configure the radio to initiate a communication based on the contact information of the second user profile. 18. The vehicle computer of claim 12, wherein the query includes a request to join a talk group and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, first talk group information related to the current incident, and configure a radio associated with the vehicle to join one or more talk groups of the current incident based on the first talk group information; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtain, based on the second context parameter, second talk group information that is not related to the current incident, and configure the radio to join one or more talk groups based on the second talk group information. 19. 
The vehicle computer of claim 12, wherein the query includes a request for navigation and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, a first location of the current incident, and provide, at an output interface of the vehicle computer, information identifying one or more routes to the first location of the current incident; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident; obtain, based on the second context parameter, a second location that is not related to the current incident; and provide, at the output interface, information identifying one or more routes to the second location. 20. The vehicle computer of claim 12, wherein the query includes a request to adjust an operating condition of the vehicle and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, a first set of one or more operating parameters for the vehicle that is mapped to the current incident, and adjust the operating condition of the vehicle based on the first set of one or more operating parameters; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtain, based on the second context parameter, a second set of one or more operating parameters for the vehicle that is not mapped to the current incident; and adjust the operating condition of the vehicle based on the second set of one or more operating parameters.
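Claims 6 through 10 (and their apparatus counterparts 17 through 20) repeat one pattern across request types: contact, talk group, status check, navigation, and vehicle adjustment each resolve against the incident context when responding and the general context otherwise. That fan-out can be sketched as a handler registry keyed by request type; the handler names, context keys, and returned strings here are hypothetical, chosen only to show the dispatch shape.

```python
# Registry mapping a request type to a handler (sketch, not the claimed design).
HANDLERS = {}

def handler(request_type):
    """Decorator registering a handler for one request type."""
    def register(fn):
        HANDLERS[request_type] = fn
        return fn
    return register

@handler("contact")
def contact(ctx):
    # Claims 6/17: initiate a communication using the matched profile.
    return f"calling {ctx['contact']}"

@handler("navigation")
def navigate(ctx):
    # Claims 9/19: provide routes to the selected location.
    return f"routing to {ctx['location']}"

def respond(request_type, responding, incident_ctx, general_ctx):
    # Same status check as claim 1, applied uniformly to every request type.
    ctx = incident_ctx if responding else general_ctx
    return HANDLERS[request_type](ctx)
```

Adding the talk-group, status-check, or vehicle-adjustment cases is then one more registered handler each, which is why the claims read as near-identical parallel limitations.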
A vehicle computer, comprising a network interface configured to receive an incident assignment including information related to a current incident from a dispatch computer; an input interface configured to receive a query from an occupant of a vehicle associated with the vehicle computer; and an electronic processor communicatively coupled to the network interface and the input interface, wherein the electronic processor is configured to identify a first context parameter by co-relating the query with information related to the current incident and a second context parameter by co-relating the query with information not related to the current incident, determine, in response to receiving the query from the occupant of the vehicle, whether a current incident status of the vehicle identifies that the vehicle is responding to the current incident or not responding to the current incident, generate a response to the query based on the first context parameter when the current incident status identifies that the vehicle is responding to the current incident, and generate a response to the query based on the second context parameter when the current incident status identifies that the vehicle is not responding to the current incident. 13. The vehicle computer of claim 12, further comprising: a memory configured to store non-transitory computer readable instructions including the current incident status, wherein the electronic processor updates the current incident status at the memory to identify that the vehicle is responding to the current incident when a signal indicating acceptance of the incident assignment is received at the input interface. 14. 
The vehicle computer of claim 13, wherein the electronic processor determines a location of the vehicle and updates the current incident status at the memory to identify that the vehicle is responding to the current incident when a location of the vehicle indicates that the vehicle is en-route to a location of the current incident. 15. The vehicle computer of claim 12, wherein the information related to the current incident includes at least one of an identifier of the current incident, location of the current incident, type and severity level of the current incident, and user profiles related to the current incident. 16. The vehicle computer of claim 12, wherein the first context parameter is identified based at least on one of an identifier of the current incident, location of the current incident, type and severity level of the current incident, user profiles related to the current incident, operating conditions of the vehicle, location of the vehicle, and information related to a user associated with the vehicle. 17. The vehicle computer of claim 12, wherein the query includes a request to contact another user and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, contact information of a first user profile related to the current incident that matches with the request, and configure a radio associated with the vehicle to initiate a communication based on the contact information; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtain, based on the second context parameter, contact information of a second user profile that is not related to the current incident and further matches with the request, and configure the radio to initiate a communication based on the contact information of the second user profile. 18. 
The vehicle computer of claim 12, wherein the query includes a request to join a talk group and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, first talk group information related to the current incident, and configure a radio associated with the vehicle to join one or more talk groups of the current incident based on the first talk group information; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtain, based on the second context parameter, second talk group information that is not related to the current incident, and configure the radio to join one or more talk groups based on the second talk group information. 19. The vehicle computer of claim 12, wherein the query includes a request for navigation and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, a first location of the current incident, and provide, at an output interface of the vehicle computer, information identifying one or more routes to the first location of the current incident; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident; obtain, based on the second context parameter, a second location that is not related to the current incident; and provide, at the output interface, information identifying one or more routes to the second location. 20. 
The vehicle computer of claim 12, wherein the query includes a request to adjust an operating condition of the vehicle and further wherein the electronic processor is configured to: when the current incident status of the vehicle identifies that the vehicle is responding to the current incident, obtain, based on the first context parameter, a first set of one or more operating parameters for the vehicle that is mapped to the current incident, and adjust the operating condition of the vehicle based on the first set of one or more operating parameters; and when the current incident status of the vehicle identifies that the vehicle is not responding to the current incident, obtain, based on the second context parameter, a second set of one or more operating parameters for the vehicle that is not mapped to the current incident; and adjust the operating condition of the vehicle based on the second set of one or more operating parameters.
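The claim set above branches a query response on whether the vehicle is currently responding to an incident, selecting a first or second context parameter by keyword comparison (claims 1, 11, 12). A minimal sketch of that selection logic follows; all names and data shapes are illustrative, since the claims specify behavior rather than formats:

```python
def generate_response(query_keywords, responding, incident_info, general_info):
    """Pick a context parameter and response branch per the claimed logic.

    query_keywords: keywords extracted from the occupant's query (claim 11).
    responding: the current incident status of the vehicle.
    incident_info / general_info: hypothetical keyword sets for information
    related vs. not related to the current incident.
    """
    # Claim 11: identify the first/second context parameters by comparing
    # extracted keywords with incident-related vs. unrelated information.
    first_ctx = [k for k in query_keywords if k in incident_info]
    second_ctx = [k for k in query_keywords if k in general_info]
    # Claims 1 and 12: generate the response from the first context
    # parameter when responding to the incident, else from the second.
    if responding:
        return ("incident", first_ctx)
    return ("general", second_ctx)
```

The per-request behaviors (contact, talk group, navigation, operating condition) in claims 6-10 would each consume the selected context parameter in place of the tuple returned here.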
2,600
9,872
9,872
13,815,769
2,625
A method of providing visual information to a human viewer includes the steps of defining a range of distances from a surface and a range of viewing angles with respect to the surface, determining the location and viewing angle of a human viewer with respect to the surface, and providing a virtual image to the human viewer via a visual display device worn by the human viewer when the location and viewing angle of the human viewer with respect to the surface is determined to be within the defined range of distances and viewing angles, such that the virtual image is perceived to be defined on the surface by the human viewer.
1. A method of providing visual information to a human viewer, the method comprising the steps of: a) defining a range of distances from a surface and a range of viewing angles with respect to the surface, b) determining the location and viewing angle of a human viewer with respect to the surface, and c) providing a virtual image to the human viewer via a visual display device worn by the human viewer when the location and viewing angle of the human viewer with respect to the surface is determined to be within the range of distances and viewing angles selected in step a), such that the virtual image is perceived by the human viewer to be within an area defined on the surface. 2. The method of claim 1 wherein the surface is selected from the group consisting of a billboard, a wall, a static display and a hand-held item. 3. The method of claim 2 wherein the hand-held item is selected from the group consisting of a book, a magazine, a newspaper and a menu. 4. The method of claim 2 wherein at least a portion of the surface is blank. 5. The method of claim 2 wherein at least a portion of the surface is a blue surface. 6. The method of claim 1 wherein in step b) the location and viewing angle of the human viewer with respect to the surface are determined using the GPS coordinates of the human viewer and the surface. 7. The method of claim 1 wherein in step c) the visual display device comprises a heads-up display. 8. The method of claim 1 wherein in step c) the virtual image provided to the human viewer comprises an image selected from the group consisting of an advertisement, a menu and a public notice. 9. The method of claim 1 wherein in step c) the virtual image is provided to the human viewer via wireless transmission means. 10. The method of claim 1 wherein the human viewer is a member of an organization and the virtual image is provided to the human viewer from a central site affiliated with the organization. 11. 
The method of claim 10 wherein prior to step c) the human viewer selects at least one good or service for which the human viewer requests the provision of a virtual image. 12. The method of claim 10 wherein prior to step c) the human viewer selects at least one advertisement format in which the virtual image is to be displayed. 13. The method of claim 12 wherein the advertisement format is selected from the group consisting of text, still images, video images and combinations thereof. 14. The method of claim 13 wherein the advertisement format comprises images of human models. 15. The method of claim 11 wherein the virtual image comprises an image selected from the group consisting of an advertisement and a menu. 16. The method of claim 10 wherein prior to step c) the human viewer selects at least one event for which the human viewer requests the provision of a virtual image. 17. The method of claim 16 wherein the event is selected from the group consisting of a sale, a sporting event, a movie and a live performance.
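Steps a) through c) of the method above gate display of the virtual image on the viewer's distance and viewing angle relative to the surface. A minimal 2-D sketch of that gate follows; the claims leave the geometry open, so the vector math and parameter names here are assumptions:

```python
import math

def should_display(viewer_pos, surface_pos, surface_normal,
                   max_distance, max_angle_deg):
    """Return True when the viewer is within the defined ranges (steps a-c).

    viewer_pos / surface_pos: (x, y) coordinates, e.g. from GPS (claim 6).
    surface_normal: unit vector the surface faces; illustrative only.
    """
    dx = viewer_pos[0] - surface_pos[0]
    dy = viewer_pos[1] - surface_pos[1]
    dist = math.hypot(dx, dy)
    # Step a): the defined range of distances from the surface.
    if dist == 0 or dist > max_distance:
        return False
    # Step a)/b): viewing angle between the surface normal and the
    # surface-to-viewer direction.
    dot = (dx * surface_normal[0] + dy * surface_normal[1]) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= max_angle_deg
```

Step c) would then render the virtual image on the worn display only while this check holds.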
2,600
9,873
9,873
14,943,609
2,618
Augmented reality may be used to facilitate a planogram (POG) reset in a retail environment. The system may include a location benchmark positioned adjacent a retail display, and a display generator located relative to the location benchmark. The display generator may display an image corresponding to the POG reset on the retail display. A control processor may drive the display generator to display the image, and a data source may provide control signals to the control processor based on pre-stored POG data.
1. A system using augmented reality to facilitate a planogram (POG) reset in a retail environment, the system comprising: a location benchmark positioned adjacent a retail display; a display generator located relative to the location benchmark, the display generator displaying an image corresponding to the POG reset on the retail display; a control processor communicating with the display generator and driving the display generator to display the image; and a data source communicating with the control processor, the data source being a computer device that provides control signals to the control processor based on pre-stored POG data. 2. A system according to claim 1, wherein the location benchmark comprises a keystone positioned in an aisle in the retail environment. 3. A system according to claim 1, wherein the location benchmark comprises a marker in a corner of an aisle in the retail environment. 4. A system according to claim 3, wherein the control processor and the display generator modify the image based on a position of the marker relative to the display generator to fit the retail display. 5. A system according to claim 1, wherein the display generator comprises an LED projector. 6. A system according to claim 1, wherein the display generator comprises a wearable device. 7. A system according to claim 1, wherein the image on the retail display comprises peg indicators positioned over peg holes on the retail display that receive a peg. 8. A system according to claim 7, wherein the image on the retail display further comprises shelf indicators positioned over peg holes or shelf slots on the retail display that receive a shelf. 9. A system according to claim 1, further comprising a location beacon in the retail environment, the location beacon identifying a location of the display generator. 10. 
A system using augmented reality to facilitate a planogram (POG) reset for a retail display in a retail environment, the system comprising: a data source including a computer device, the data source storing POG data and outputting control signals corresponding to the POG data; a control processor in communication with the data source and receiving the control signals from the data source; a display generator in communication with the control processor, the control processor driving the display generator to display an image corresponding to the POG reset on the retail display, wherein the image on the retail display includes at least one of peg indicators positioned over peg holes on the retail display that receive a peg, shelf indicators positioned over peg holes or shelf slots on the retail display that receive a shelf, and product indicators positioned adjacent pegs or shelves that identify product locations on the pegs or shelves, respectively. 11. A system according to claim 10, further comprising a location benchmark positioned adjacent the retail display, wherein the display generator is positioned relative to the location benchmark. 12. A system according to claim 11, wherein the location benchmark comprises a marker in a corner of an aisle in the retail environment. 13. A system according to claim 12, wherein the control processor and the display generator modify the image based on a position of the marker relative to the display generator to fit the retail display. 14. A system according to claim 1, wherein the display generator comprises an LED projector. 15. 
A method of resetting a retail display in a retail environment according to a planogram (POG) reset using augmented reality, the method comprising: positioning a display generator adjacent a retail display; providing control signals from a data source to a control processor based on pre-stored POG data; driving the display generator with the control processor; the display generator displaying an image corresponding to the POG reset on the retail display; and placing at least one of pegs and shelves on the retail display in accordance with the image displayed on the retail display. 16. A method according to claim 15, wherein the positioning step comprises positioning the display generator relative to a location benchmark adjacent the retail display. 17. A method according to claim 16, further comprising modifying the image based on a position of the location benchmark relative to the display generator to fit the retail display. 18. A method according to claim 15, further comprising modifying the image to fit the retail display based on a position of the display generator relative to the retail display. 19. A method according to claim 15, wherein the displaying step comprises displaying peg indicators over peg holes on the retail display that receive a peg. 20. A method according to claim 19, wherein the displaying step comprises displaying shelf indicators positioned over peg holes or shelf slots on the retail display that receive a shelf.
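Claims 4, 13, and 17 above have the control processor modify the projected image based on the location benchmark's position so the POG image fits the retail display. A sketch of that mapping follows; the claims fix no coordinate system, so the marker-relative offset-and-scale scheme and all names here are assumptions:

```python
def project_indicators(pog_items, marker_px, px_per_cm):
    """Map planogram peg/shelf positions (cm) to display-generator pixels.

    pog_items: (name, x_cm, y_cm) tuples from the pre-stored POG data.
    marker_px: detected pixel position of the location benchmark.
    px_per_cm: scale inferred from the marker, so the image fits the display.
    """
    overlay = []
    for name, x_cm, y_cm in pog_items:
        # Offset by the detected benchmark, then scale to projector pixels.
        px = marker_px[0] + x_cm * px_per_cm
        py = marker_px[1] + y_cm * px_per_cm
        overlay.append((name, round(px), round(py)))
    return overlay
```

The resulting overlay would drive the LED projector or wearable device to place peg, shelf, and product indicators over the corresponding holes and slots.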
2,600
9,874
9,874
14,890,554
2,623
According to the teachings herein, a method and apparatus are provided for facilitating touch entries to a touchscreen of an electronic device. In particular, the teachings herein facilitate one-handed touch entry, such as where a user operates the touchscreen of the device using a digit of the same hand used to hold the device. Advantageously, an electronic device (10) detects when a user is reaching to make a touch input to the touchscreen (14) and it correspondingly adapts the visual content currently being displayed—i.e., the current screen (16)—responsive to detecting the reach. Example adaptations include any one or more of shifting, warping and rescaling the screen, to bring an estimated touch target within a defined reach extent (130) configured in the electronic device.
1-31. (canceled) 32. A method performed by an electronic device having a touchscreen, said method comprising: detecting that a user is reaching with a digit to make a touch input to the touchscreen; and temporarily adapting a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device. 33. The method of claim 32, wherein temporarily adapting the screen comprises displaying a modified version of the screen until at least one of: detecting a touch input to the touchscreen, detecting expiration of an adaptation time-out period, or detecting that the digit of the user is no longer in a reaching orientation. 34. The method of claim 32, wherein temporarily adapting the screen comprises determining a layout modification for the screen to bring the touch target within the defined reach extent, and modifying a layout of the screen according to the layout modification. 35. The method of claim 32, further comprising identifying the touch target as being a screen element or screen region that is outside of the defined reach extent and in a determined reach direction. 36. The method of claim 32, wherein temporarily adapting the screen comprises at least one of: shifting the screen, rescaling the screen, and warping the screen. 37. The method of claim 32, further comprising, in a calibration routine, prompting the user to make one or more touch inputs to the touchscreen and defining the defined reach extent based on the one or more touch inputs received during the calibration routine. 38. The method of claim 32, wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises detecting that the digit is hovering over the touchscreen in conjunction with detecting that the digit is in a reaching orientation with respect to the touchscreen. 39. 
The method of claim 32, wherein the electronic device includes a camera and wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises obtaining one or more images from the camera, and determining from image data obtained from the one or more images that the digit of the user is in a reaching orientation with respect to the touchscreen. 40. The method of claim 39, further comprising determining a reach direction from the image data, and determining the touch target based at least on the reach direction. 41. The method of claim 39, wherein the touchscreen does not lie within a field of view of the camera, wherein the camera is oriented to face the user in at least one handheld orientation of the electronic device, and wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises: extracting one or more cornea-reflected or eyewear-reflected images from the one or more images; processing the one or more reflected images, as said image data, to obtain orientation information for the digit with respect to the touchscreen; and detecting that the digit is in a reaching orientation with respect to the touchscreen and detecting a corresponding reach direction, from the orientation information obtained for the digit. 42. The method of claim 39, further comprising controlling the camera to be active in response to at least one of: determining that the screen is a certain screen or a certain type of screen, for which reach detection is to be active; determining that the screen includes one or more screen elements that are operative as touch inputs and outside of the defined reach extent; and detecting a movement or orientation of the electronic device that is characteristic of reach events, said movement or orientation being determined from inertial sensor data available within the electronic device. 43. 
The method of claim 39, wherein the one or more images comprise at least two images, and further comprising jointly processing two or more of the at least two images to obtain one or more enhanced-resolution images and using the enhanced-resolution images for determining whether the digit of the user is in a reaching orientation with respect to the touchscreen. 44. The method of claim 32, wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises detecting a movement or orientation of the electronic device that is characteristic of the user extending the digit in a reaching motion with respect to the touchscreen while holding the electronic device in the hand associated with the digit, in conjunction with detecting that the digit of the user is in a reaching orientation with respect to the touchscreen. 45. The method of claim 32, wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises processing one or more images obtained from an included camera having a field of view that encompasses at least a portion of the face of the user, to obtain one or more cornea-reflected images, and processing the one or more cornea-reflected images to determine whether the digit, as visible in the one or more cornea-reflected images, is in a reaching orientation with respect to the touchscreen. 46. An electronic device comprising: a touchscreen; and processing circuitry configured to: detect that a user is reaching with a digit to make a touch input to the touchscreen; and temporarily adapt a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device. 47. 
The electronic device of claim 46, wherein the processing circuitry is configured to temporarily adapt the screen by displaying a modified version of the screen until at least one of: detecting a touch input to the touchscreen, detecting expiration of an adaptation time-out period, or detecting that the digit of the user is no longer in a reaching orientation. 48. The electronic device of claim 46, wherein the processing circuitry is configured to temporarily adapt the screen by determining a layout modification for the screen to bring the touch target within the defined reach extent, and modifying a layout of the screen according to the layout modification. 49. The electronic device of claim 46, wherein the processing circuitry is configured to identify the touch target as being a screen element or screen region that is outside of the defined reach extent and in a determined reach direction. 50. The electronic device of claim 46, wherein the processing circuitry is configured to temporarily adapt the screen by at least one of: shifting the screen, rescaling the screen, and warping the screen. 51. The electronic device of claim 46, wherein the processing circuitry is configured to perform a calibration routine, wherein the processing circuitry prompts the user to make one or more touch inputs to the touchscreen and defines the defined reach extent based on the one or more touch inputs received during the calibration routine. 52. The electronic device of claim 46, wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by detecting that the digit is hovering over the touchscreen in conjunction with detecting that the digit is in a reaching orientation with respect to the touchscreen. 53. 
The electronic device of claim 46, wherein the electronic device includes a camera and wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by obtaining one or more images from the camera, and determining from image data obtained from the one or more images that the digit of the user is in a reaching orientation with respect to the touchscreen. 54. The electronic device of claim 53, wherein the processing circuitry is configured to determine a reach direction from the image data, and determine the touch target based at least on the reach direction. 55. The electronic device of claim 53, wherein the touchscreen does not lie within a field of view of the camera, wherein the camera is oriented to face the user in at least one handheld orientation of the electronic device, and wherein the processing circuitry is configured to determine that the user is reaching with the digit to make the touch input to the touchscreen by: extracting one or more cornea-reflected or eyewear-reflected images from the one or more images; processing the one or more reflected images, as said image data, to obtain orientation information for the digit with respect to the touchscreen; and detecting that the digit is in a reaching orientation with respect to the touchscreen and detecting a corresponding reach direction, from the orientation information obtained for the digit. 56. 
The electronic device of claim 53, wherein the processing circuitry is configured to control the camera to be active in response to at least one of: determining that that the screen is a certain screen or a certain type of screen for which reach detection is to be active; determining that the screen includes one or more screen elements that are operative as touch inputs and outside of the reach extent; and detecting a movement or orientation of the electronic device that is characteristic of reach events, said movement or orientation being determined from inertial sensor data available within the electronic device. 57. The electronic device of claim 53, wherein the one or more images comprise at least two images, and wherein the processing circuitry is configured to jointly process two or more of the at least two images to obtain one or more enhanced-resolution images and to use the enhanced-resolution images for determining whether the digit of the user is in a reaching orientation with respect to the touchscreen. 58. The electronic device of claim 46, wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by detecting a movement or orientation of the electronic device that is characteristic of the user extending the digit in a reaching motion with respect to the touchscreen while holding the electronic device in the hand associated with the digit, in conjunction with detecting that the digit of the user is in a reaching orientation with respect to the touchscreen. 59. 
The electronic device of claim 46, wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by processing one or more images obtained from a camera integrated within the electronic device and having a field of view that encompasses at least a portion of the face of the user, to obtain one or more cornea-reflected images, and processing the one or more cornea-reflected images to determine whether the digit, as visible in the one or more cornea-reflected images, is in a reaching orientation with respect to the touchscreen. 60. The electronic device of claim 46, wherein the electronic device comprises one of a mobile terminal, a mobile phone, a smartphone, or a User Equipment, UE. 61. An electronic device having a touchscreen, and further comprising: a reach detection module for detecting that a user is reaching with a digit to make a touch input to the touchscreen; and a screen adaptation module for temporarily adapting a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device. 62. A non-transitory computer-readable medium storing a computer program comprising program instructions that, when executed by processing circuitry of an electronic device having a touchscreen, configures the electronic device to: detect that a user is reaching with a digit to make a touch input to the touchscreen; and temporarily adapt a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device.
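The camera-based claims above recite "detecting that the digit of the user is in a reaching orientation with respect to the touchscreen" without fixing any algorithm. A minimal Python sketch of one plausible reading is shown here; the `(x, y, z)` landmarks are assumed to come from some hand-landmark estimator (not named in the claims), and `extend_ratio` and `toward_screen_deg_max` are invented calibration values, not figures from the application:

```python
import math

def is_reaching_orientation(knuckle, tip, baseline_len,
                            extend_ratio=1.3, toward_screen_deg_max=60.0):
    """Heuristic: the digit counts as being in a 'reaching orientation' when
    it is extended well beyond its relaxed length and points toward the screen.

    knuckle, tip: (x, y, z) digit landmarks, with +z toward the touchscreen.
    baseline_len: calibrated knuckle-to-tip distance of the relaxed digit.
    """
    vx = tip[0] - knuckle[0]
    vy = tip[1] - knuckle[1]
    vz = tip[2] - knuckle[2]
    length = math.sqrt(vx * vx + vy * vy + vz * vz)
    if length == 0.0:
        return False
    extended = length >= extend_ratio * baseline_len
    # Angle between the digit direction and the screen normal (0, 0, 1).
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, vz / length))))
    return extended and angle_deg <= toward_screen_deg_max
```

In practice such a classifier would be combined with the other recited triggers (hover detection, characteristic device motion from inertial sensors) before any screen adaptation fires.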
According to the teachings herein, a method and apparatus are provided for facilitating touch entries to a touchscreen of an electronic device. In particular, the teachings herein facilitate one-handed touch entry, such as where a user operates the touchscreen of the device using a digit of the same hand used to hold the device. Advantageously, an electronic device ( 10 ) detects when a user is reaching to make a touch input to the touchscreen ( 14 ) and it correspondingly adapts the visual content currently being displayed—i.e., the current screen ( 16 )—responsive to detecting the reach. Example adaptations include any one or more of shifting, warping and rescaling the screen, to bring an estimated touch target within a defined reach extent ( 130 ) configured in the electronic device.1-31. (canceled) 32. A method performed by an electronic device having a touchscreen, said method comprising: detecting that a user is reaching with a digit to make a touch input to the touchscreen; and temporarily adapting a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device. 33. The method of claim 32, wherein temporarily adapting the screen comprises displaying a modified version of the screen until at least one of: detecting a touch input to the touchscreen, detecting expiration of an adaptation time-out period, or detecting that the digit of the user is no longer in a reaching orientation. 34. The method of claim 32, wherein temporarily adapting the screen comprises determining a layout modification for the screen to bring the touch target within the defined reach extent, and modifying a layout of the screen according to the layout modification. 35. The method of claim 32, further comprising identifying the touch target as being a screen element or screen region that is outside of the defined reach extent and in a determined reach direction. 36.
The method of claim 32, wherein temporarily adapting the screen comprises at least one of: shifting the screen, rescaling the screen, and warping the screen. 37. The method of claim 32, further comprising, in a calibration routine, prompting the user to make one or more touch inputs to the touchscreen and defining the defined reach extent based on the one or more touch inputs received during the calibration routine. 38. The method of claim 32, wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises detecting that the digit is hovering over the touchscreen in conjunction with detecting that the digit is in a reaching orientation with respect to the touchscreen. 39. The method of claim 32, wherein the electronic device includes a camera and wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises obtaining one or more images from the camera, and determining from image data obtained from the one or more images that the digit of the user is in a reaching orientation with respect to the touchscreen. 40. The method of claim 39, further comprising determining a reach direction from the image data, and determining the touch target based at least on the reach direction. 41. 
The method of claim 39, wherein the touchscreen does not lie within a field of view of the camera, wherein the camera is oriented to face the user in at least one handheld orientation of the electronic device, and wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises: extracting one or more cornea-reflected or eyewear-reflected images from the one or more images; processing the one or more reflected images, as said image data, to obtain orientation information for the digit with respect to the touchscreen; and detecting that the digit is in a reaching orientation with respect to the touchscreen and detecting a corresponding reach direction, from the orientation information obtained for the digit. 42. The method of claim 39, further comprising controlling the camera to be active in response to at least one of: determining that the screen is a certain screen or a certain type of screen, for which reach detection is to be active; determining that the screen includes one or more screen elements that are operative as touch inputs and outside of the defined reach extent; and detecting a movement or orientation of the electronic device that is characteristic of reach events, said movement or orientation being determined from inertial sensor data available within the electronic device. 43. The method of claim 39, wherein the one or more images comprise at least two images, and further comprising jointly processing two or more of the at least two images to obtain one or more enhanced-resolution images and using the enhanced-resolution images for determining whether the digit of the user is in a reaching orientation with respect to the touchscreen. 44. 
The method of claim 32, wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises detecting a movement or orientation of the electronic device that is characteristic of the user extending the digit in a reaching motion with respect to the touchscreen while holding the electronic device in the hand associated with the digit, in conjunction with detecting that the digit of the user is in a reaching orientation with respect to the touchscreen. 45. The method of claim 32, wherein detecting that the user is reaching with the digit to make the touch input to the touchscreen comprises processing one or more images obtained from an included camera having a field of view that encompasses at least a portion of the face of the user, to obtain one or more cornea-reflected images, and processing the one or more cornea-reflected images to determine whether the digit, as visible in the one or more cornea-reflected images, is in a reaching orientation with respect to the touchscreen. 46. An electronic device comprising: a touchscreen; and processing circuitry configured to: detect that a user is reaching with a digit to make a touch input to the touchscreen; and temporarily adapt a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device. 47. The electronic device of claim 46, wherein the processing circuitry is configured to temporarily adapt the screen by displaying a modified version of the screen until at least one of: detecting a touch input to the touchscreen, detecting expiration of an adaptation time-out period, or detecting that the digit of the user is no longer in a reaching orientation. 48. 
The electronic device of claim 46, wherein the processing circuitry is configured to temporarily adapt the screen by determining a layout modification for the screen to bring the touch target within the defined reach extent, and modifying a layout of the screen according to the layout modification. 49. The electronic device of claim 46, wherein the processing circuitry is configured to identify the touch target as being a screen element or screen region that is outside of the defined reach extent and in a determined reach direction. 50. The electronic device of claim 46, wherein the processing circuitry is configured to temporarily adapt the screen by at least one of: shifting the screen, rescaling the screen, and warping the screen. 51. The electronic device of claim 46, wherein the processing circuitry is configured to perform a calibration routine, wherein the processing circuitry prompts the user to make one or more touch inputs to the touchscreen and defines the defined reach extent based on the one or more touch inputs received during the calibration routine. 52. The electronic device of claim 46, wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by detecting that the digit is hovering over the touchscreen in conjunction with detecting that the digit is in a reaching orientation with respect to the touchscreen. 53. The electronic device of claim 46, wherein the electronic device includes a camera and wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by obtaining one or more images from the camera, and determining from image data obtained from the one or more images that the digit of the user is in a reaching orientation with respect to the touchscreen. 54. 
The electronic device of claim 53, wherein the processing circuitry is configured to determine a reach direction from the image data, and determine the touch target based at least on the reach direction. 55. The electronic device of claim 53, wherein the touchscreen does not lie within a field of view of the camera, wherein the camera is oriented to face the user in at least one handheld orientation of the electronic device, and wherein the processing circuitry is configured to determine that the user is reaching with the digit to make the touch input to the touchscreen by: extracting one or more cornea-reflected or eyewear-reflected images from the one or more images; processing the one or more reflected images, as said image data, to obtain orientation information for the digit with respect to the touchscreen; and detecting that the digit is in a reaching orientation with respect to the touchscreen and detecting a corresponding reach direction, from the orientation information obtained for the digit. 56. The electronic device of claim 53, wherein the processing circuitry is configured to control the camera to be active in response to at least one of: determining that the screen is a certain screen or a certain type of screen for which reach detection is to be active; determining that the screen includes one or more screen elements that are operative as touch inputs and outside of the defined reach extent; and detecting a movement or orientation of the electronic device that is characteristic of reach events, said movement or orientation being determined from inertial sensor data available within the electronic device. 57.
The electronic device of claim 53, wherein the one or more images comprise at least two images, and wherein the processing circuitry is configured to jointly process two or more of the at least two images to obtain one or more enhanced-resolution images and to use the enhanced-resolution images for determining whether the digit of the user is in a reaching orientation with respect to the touchscreen. 58. The electronic device of claim 46, wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by detecting a movement or orientation of the electronic device that is characteristic of the user extending the digit in a reaching motion with respect to the touchscreen while holding the electronic device in the hand associated with the digit, in conjunction with detecting that the digit of the user is in a reaching orientation with respect to the touchscreen. 59. The electronic device of claim 46, wherein the processing circuitry is configured to detect that the user is reaching with the digit to make the touch input to the touchscreen by processing one or more images obtained from a camera integrated within the electronic device and having a field of view that encompasses at least a portion of the face of the user, to obtain one or more cornea-reflected images, and processing the one or more cornea-reflected images to determine whether the digit, as visible in the one or more cornea-reflected images, is in a reaching orientation with respect to the touchscreen. 60. The electronic device of claim 46, wherein the electronic device comprises one of a mobile terminal, a mobile phone, a smartphone, or a User Equipment, UE. 61. 
An electronic device having a touchscreen, and further comprising: a reach detection module for detecting that a user is reaching with a digit to make a touch input to the touchscreen; and a screen adaptation module for temporarily adapting a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device. 62. A non-transitory computer-readable medium storing a computer program comprising program instructions that, when executed by processing circuitry of an electronic device having a touchscreen, configures the electronic device to: detect that a user is reaching with a digit to make a touch input to the touchscreen; and temporarily adapt a screen currently being displayed on the touchscreen, to bring an estimated touch target within a defined reach extent that is configured in the electronic device.
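The core adaptation step recited throughout this application, temporarily shifting the screen so that an estimated touch target falls within a defined reach extent, reduces to simple geometry once the reach extent is modeled. The sketch below assumes a circular reach extent (center plus radius); the `Point`/`adaptation_shift` names and the circular model are illustrative choices, not terminology from the claims:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

def adaptation_shift(target: Point, reach_center: Point,
                     reach_radius: float) -> Point:
    """Minimum screen shift (dx, dy) that brings `target` onto or inside the
    circular reach extent centered at `reach_center`."""
    dx = target.x - reach_center.x
    dy = target.y - reach_center.y
    dist = math.hypot(dx, dy)
    if dist <= reach_radius:
        return Point(0.0, 0.0)  # target already reachable; no adaptation
    # Pull the screen toward the reach center just far enough that the
    # target lands on the boundary of the reach extent.
    scale = (dist - reach_radius) / dist
    return Point(-dx * scale, -dy * scale)
```

The shift would be applied only while the reach condition holds, then reverted per claim 33 (on touch input, time-out expiry, or loss of the reaching orientation); rescaling and warping are alternative mappings with the same goal.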
2,600
9,875
9,875
13,700,119
2,622
An operator may operate, without a feeling of strangeness, while being provided with a realistic tactile sensation matching an object on a touch sensor by a tactile sensation providing apparatus 1 including a touch sensor 11 , a load detection unit 12 for detecting a pressure load, a tactile sensation providing unit 13 , and a control unit for controlling drive of the tactile sensation providing unit 13 such that: a first tactile sensation is provided upon detecting that a position of the pressure load satisfying a predetermined standard load shifts into a predetermined area, a second tactile sensation is provided upon detecting that the position of the pressure load satisfying the predetermined standard load shifts out of the predetermined area, and a tactile sensation different from the first and the second tactile sensations is provided upon detecting that the position of the pressure load shifts out of the predetermined area and into another predetermined area.
1. A tactile sensation providing apparatus comprising: a touch sensor; a load detection unit configured to detect a pressure load on a touch face of the touch sensor; a tactile sensation providing unit configured to vibrate the touch face; and a control unit configured to control drive of the tactile sensation providing unit such that, upon detecting that a position of the pressure load satisfying a predetermined standard load shifts into a predetermined area, a first tactile sensation is provided to a pressing object pressing the touch face, wherein the control unit controls drive of the tactile sensation providing unit such that, upon detecting that the position of the pressure load satisfying the predetermined standard load shifts out of the predetermined area, a second tactile sensation is provided to the pressing object, and controls drive of the tactile sensation providing unit such that, upon detecting that the position of the pressure load shifts out of the predetermined area into another predetermined area, a tactile sensation different from the first tactile sensation and the second tactile sensation is provided.
An operator may operate, without a feeling of strangeness, while being provided with a realistic tactile sensation matching an object on a touch sensor by a tactile sensation providing apparatus 1 including a touch sensor 11 , a load detection unit 12 for detecting a pressure load, a tactile sensation providing unit 13 , and a control unit for controlling drive of the tactile sensation providing unit 13 such that: a first tactile sensation is provided upon detecting that a position of the pressure load satisfying a predetermined standard load shifts into a predetermined area, a second tactile sensation is provided upon detecting that the position of the pressure load satisfying the predetermined standard load shifts out of the predetermined area, and a tactile sensation different from the first and the second tactile sensations is provided upon detecting that the position of the pressure load shifts out of the predetermined area and into another predetermined area.1. A tactile sensation providing apparatus comprising: a touch sensor; a load detection unit configured to detect a pressure load on a touch face of the touch sensor; a tactile sensation providing unit configured to vibrate the touch face; and a control unit configured to control drive of the tactile sensation providing unit such that, upon detecting that a position of the pressure load satisfying a predetermined standard load shifts into a predetermined area, a first tactile sensation is provided to a pressing object pressing the touch face, wherein the control unit controls drive of the tactile sensation providing unit such that, upon detecting that the position of the pressure load satisfying the predetermined standard load shifts out of the predetermined area, a second tactile sensation is provided to the pressing object, and controls drive of the tactile sensation providing unit such that, upon detecting that the position of the pressure load shifts out of the predetermined area into another predetermined area, a tactile
sensation different from the first tactile sensation and the second tactile sensation is provided.
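The control logic of claim 1 is a small state machine over area transitions of the pressed position. A Python sketch follows; area identifiers and the uniform application of the standard-load threshold to all three cases are modeling choices for illustration (the claim qualifies only the first two cases with the standard load), and the returned labels stand in for whatever drive waveforms the tactile sensation providing unit would use:

```python
def tactile_event(prev_area, new_area, load, standard_load):
    """Classify an area transition of the pressed position into one of the
    claimed feedback cases; returns None when no sensation should be driven.

    prev_area / new_area: identifier of the predetermined area containing
    the load position before / after the shift, or None if outside every area.
    """
    if load < standard_load:
        return None                  # pressure load below the standard load
    if prev_area is None and new_area is not None:
        return "first"               # shifted into a predetermined area
    if prev_area is not None and new_area is None:
        return "second"              # shifted out of the predetermined area
    if prev_area is not None and prev_area != new_area:
        return "third"               # shifted directly into another area
    return None                      # no area boundary crossed
```

A driver loop would call this on each touch-position sample and map "first", "second", and "third" to three distinct vibration patterns on the touch face.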
2,600
9,876
9,876
14,809,149
2,613
A portable electronic device comprises a double-sided display including a first display side and a second display side formed on a side opposite the first display side; a direct memory access (DMA) controller configured to read first image data from a memory; at least one sensor configured to detect at least one of a position change of the double-sided display and a movement of a user's pupil and to output a detection signal; a status signal generator configured to interpret the detection signal output and to output a status signal; a transmission order determiner configured to receive the first image data from the DMA controller, to determine a transmission order of the first image data based on the status signal, and to output second image data corresponding to the determined transmission order; and a display driver integrated circuit (IC) configured to transmit the second image data to the display.
1. A system, comprising: a status signal generator configured to output a status signal to indicate one of a plurality of positions of at least one double-sided display; and an image rotator coupled to the status signal, the image rotator configured to receive image data in a read order, and to output the image data in a transmission order, the transmission order being the same as the read order if the status signal indicates a first position of the at least one double-sided display, and the transmission order being a reverse of the read order if the status signal indicates a second position of the at least one double-sided display. 2. The system according to claim 1, wherein the plurality of positions of the at least one double-sided display comprises an unfolded position and a folded position, and wherein the first position comprises the unfolded position and the second position comprises the folded position. 3. The system according to claim 1, wherein the at least one double-sided display comprises a double-sided display, a transparent double-sided display, a double-sided foldable display, a dual-sided foldable display, a double-sided foldable flexible display, or a dual-sided foldable flexible display. 4. The system according to claim 1, wherein the at least one double-sided display comprises at least two double-sided displays. 5. The system according to claim 4, wherein the status signal generator is further configured to output the status signal in response to at least one of an angle between the at least two double-sided displays, an orientation of one of the at least two double-sided displays, a rotation of one of the at least two double-sided displays or a rotation direction of one of the at least two double-sided displays. 6.
The system according to claim 1, further comprising: a direct memory access (DMA) controller comprising an output coupled to the image rotator, the DMA controller being configured to output the image data to the image rotator; and a display driver IC configured to receive the image data. 7. The system according to claim 1, wherein the system comprises part of an electronic device comprising the at least one double-sided display. 8. A System on a Chip (SOC), comprising: a direct memory access (DMA) controller configured to output image data in a read order; a status signal generator configured to output a status signal to indicate one of a plurality of positions of at least one double-sided display, the plurality of positions comprising at least an unfolded position and a folded position, and a first position comprising the unfolded position and a second position comprising the folded position; and an image rotator coupled to the status signal generator, the image rotator configured to receive the image data in the read order, and to output the image data in a transmission order, the transmission order being the same as the read order if the status signal indicates the first position of the at least one double-sided display, and the transmission order being a reverse of the read order if the status signal indicates the second position of the at least one double-sided display. 9. The SoC according to claim 8, wherein the at least one double-sided display comprises a double-sided display, a transparent double-sided display, a double-sided foldable display, a dual-sided foldable display, a double-sided foldable flexible display, or a dual-sided foldable flexible display. 10. The SoC according to claim 8, wherein the at least one double-sided display comprises at least two double-sided displays. 11. 
The SoC according to claim 10, wherein the status signal generator is further configured to output the status signal in response to at least one of an angle between the at least two double-sided displays, an orientation of one of the at least two double-sided displays, a rotation of one of the at least two double-sided displays or a rotation direction of one of the at least two double-sided displays. 12. The SoC according to claim 8, further comprising a display driver configured to receive the image data from the image rotator and to drive the at least one double-sided display with the image data received from the image rotator. 13. The SoC according to claim 8, wherein the SoC comprises part of an electronic device comprising the at least one double-sided display. 14. The SoC according to claim 13, wherein the electronic device comprises a computing device, a personal digital assistant (PDA), a laptop computer, a mobile computer, a web tablet, a wireless phone, a cell phone, a smart phone, a digital music player, or a wireline or wireless electronic device. 15. A system, comprising: at least one double-sided display; a status signal generator configured to output a first status signal to indicate an unfolded position of the at least one double-sided display and a second status signal to indicate a folded position of the at least one double-sided display; and an image rotator coupled to the status signal generator, the image rotator configured to receive image data in a read order, and to output the image data in a transmission order, the transmission order being the same as the read order if the status signal generator outputs the first status signal, and the transmission order being a reverse of the read order if the status signal generator outputs the second status signal. 16.
The system according to claim 15, wherein the at least one double-sided display comprises a double-sided display, a transparent double-sided display, a double-sided foldable display, a dual-sided foldable display, a double-sided foldable flexible display, or a dual-sided foldable flexible display. 17. The system according to claim 15, wherein the at least one double-sided display comprises at least two double-sided displays. 18. The system according to claim 15, wherein the status signal generator is further configured to output the first status signal or the second status signal in response to at least one of an angle between the at least two double-sided displays, an orientation of one of the at least two double-sided displays, a rotation of one of the at least two double-sided displays or a rotation direction of one of the at least two double-sided displays. 19. The system according to claim 15, wherein the system comprises part of an electronic device comprising the at least one double-sided display. 20. The system according to claim 19, wherein the electronic device comprises a computing device, a personal digital assistant (PDA), a laptop computer, a mobile computer, a web tablet, a wireless phone, a cell phone, a smart phone, a digital music player, or a wireline or wireless electronic device.
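The status-signal / image-rotator pair claimed above amounts to reversing the pixel transmission order when the device is folded. A minimal Python sketch is given here; the hinge-angle threshold (`FOLD_ANGLE_DEG`) and the string-valued signal are hypothetical, since the claims only recite "a status signal" derived from, among other things, the angle between displays:

```python
FOLD_ANGLE_DEG = 30.0  # hypothetical hinge-angle threshold, not from the claims

def status_signal(hinge_angle_deg: float) -> str:
    """Derive the status signal from the hinge angle between the displays."""
    return "folded" if hinge_angle_deg < FOLD_ANGLE_DEG else "unfolded"

def image_rotator(image_data, signal):
    """Emit image data in transmission order: read order when unfolded
    (first position), reversed when folded (second position)."""
    return list(image_data) if signal == "unfolded" else list(reversed(image_data))
```

In the claimed pipeline, `image_data` would be scanline data read by the DMA controller, and the rotator's output would feed the display driver IC unchanged; only the ordering differs between the two positions.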
A portable electronic device comprises a double-sided display including a first display side and a second display side formed on a side opposite the first display side; a direct memory access (DMA) controller configured to read first image data from a memory; at least one sensor configured to detect at least one of a position change of the double-sided display and a movement of a user's pupil and to output a detection signal; a status signal generator configured to interpret the detection signal output and to output a status signal; a transmission order determiner configured to receive the first image data from the DMA controller, to determine a transmission order of the first image data based on the status signal, and to output second image data corresponding to the determined transmission order; and a display driver integrated circuit (IC) configured to transmit the second image data to the display. 1. A system, comprising: a status signal generator configured to output a status signal to indicate one of a plurality of positions of at least one double-sided display; and an image rotator coupled to the status signal generator, the image rotator configured to receive image data in a read order, and to output the image data in a transmission order, the transmission order being the same as the read order if the status signal indicates a first position of the at least one double-sided display, and the transmission order being a reverse of the read order if the status signal indicates a second position of the at least one double-sided display. 2. The system according to claim 1, wherein the plurality of positions of the at least one double-sided display comprises an unfolded position and a folded position, and wherein the first position comprises the unfolded position and the second position comprises the folded position. 3. 
The system according to claim 1, wherein the at least one double-sided display comprises a double-sided display, a transparent double-sided display, a double-sided foldable display, a dual-sided foldable display, a double-sided foldable flexible display, or a dual-sided foldable flexible display. 4. The system according to claim 1, wherein the at least one double-sided display comprises at least two double-sided displays. 5. The system according to claim 4, wherein the status signal generator is further configured to output the status signal in response to at least one of an angle between the at least two double-sided displays, an orientation of one of the at least two double-sided displays, a rotation of one of the at least two double-sided displays or a rotation direction of one of the at least two double-sided displays. 6. The system according to claim 1, further comprising: a direct memory access (DMA) controller comprising an output coupled to the image rotator, the DMA controller being configured to output the image data to the image rotator; and a display driver IC configured to receive the image data. 7. The system according to claim 1, wherein the system comprises part of an electronic device comprising the at least one double-sided display. 8. 
A System on a Chip (SOC), comprising: a direct memory access (DMA) controller configured to output image data in a read order; a status signal generator configured to output a status signal to indicate one of a plurality of positions of at least one double-sided display, the plurality of positions comprising at least an unfolded position and a folded position, and a first position comprising the unfolded position and a second position comprising the folded position; and an image rotator coupled to the status signal generator, the image rotator configured to receive the image data in the read order, and to output the image data in a transmission order, the transmission order being the same as the read order if the status signal indicates the first position of the at least one double-sided display, and the transmission order being a reverse of the read order if the status signal indicates the second position of the at least one double-sided display. 9. The SoC according to claim 8, wherein the at least one double-sided display comprises a double-sided display, a transparent double-sided display, a double-sided foldable display, a dual-sided foldable display, a double-sided foldable flexible display, or a dual-sided foldable flexible display. 10. The SoC according to claim 8, wherein the at least one double-sided display comprises at least two double-sided displays. 11. The SoC according to claim 10, wherein the status signal generator is further configured to output the status signal in response to at least one of an angle between the at least two double-sided displays, an orientation of one of the at least two double-sided displays, a rotation of one of the at least two double-sided displays or a rotation direction of one of the at least two double-sided displays. 12. 
The SoC according to claim 8, further comprising a display driver configured to receive the image data from the image rotator and to drive the at least one double-sided display with the image data received from the image rotator. 13. The SoC according to claim 8, wherein the SoC comprises part of an electronic device comprising the at least one double-sided display. 14. The SoC according to claim 13, wherein the electronic device comprises a computing device, a personal digital assistant (PDA), a laptop computer, a mobile computer, a web tablet, a wireless phone, a cell phone, a smart phone, a digital music player, or a wireline or wireless electronic device. 15. A system, comprising: at least one double-sided display; a status signal generator configured to output a first status signal to indicate an unfolded position of the at least one double-sided display and a second status signal to indicate a folded position of the at least one double-sided display; and an image rotator coupled to the status signal generator, the image rotator configured to receive image data in a read order, and to output the image data in a transmission order, the transmission order being the same as the read order if the status signal generator outputs the first status signal, and the transmission order being a reverse of the read order if the status signal generator outputs the second status signal. 16. The system according to claim 15, wherein the at least one double-sided display comprises a double-sided display, a transparent double-sided display, a double-sided foldable display, a dual-sided foldable display, a double-sided foldable flexible display, or a dual-sided foldable flexible display. 17. The system according to claim 15, wherein the at least one double-sided display comprises at least two double-sided displays. 18. 
The system according to claim 17, wherein the status signal generator is further configured to output the first status signal or the second status signal in response to at least one of an angle between the at least two double-sided displays, an orientation of one of the at least two double-sided displays, a rotation of one of the at least two double-sided displays or a rotation direction of one of the at least two double-sided displays. 19. The system according to claim 15, wherein the system comprises part of an electronic device comprising the at least one double-sided display. 20. The system according to claim 19, wherein the electronic device comprises a computing device, a personal digital assistant (PDA), a laptop computer, a mobile computer, a web tablet, a wireless phone, a cell phone, a smart phone, a digital music player, or a wireline or wireless electronic device.
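The image-rotator behavior these claims recite is simple enough to sketch: image data is transmitted in its read order when the status signal indicates the unfolded position, and in the reverse of the read order when it indicates the folded position. A minimal Python sketch, not the patented implementation; the names `UNFOLDED`, `FOLDED`, `transmission_order`, and the encoding of the status signal are all assumptions for illustration.

```python
# Illustrative sketch of the claimed image rotator: the transmission order
# equals the read order for an unfolded double-sided display and is the
# reverse of the read order for a folded one. The signal encoding below
# (0 = unfolded, 1 = folded) is assumed, not taken from the patent.

UNFOLDED, FOLDED = 0, 1  # assumed status-signal values

def transmission_order(image_data, status_signal):
    """Return the image data in the order it should be transmitted."""
    if status_signal == FOLDED:
        # Folded: the second display side is viewed mirrored, so the
        # transmission order is the reverse of the read order.
        return list(reversed(image_data))
    # Unfolded: transmit in the same order the data was read.
    return list(image_data)
```

A display driver IC downstream of such a rotator would consume the reordered data unchanged, which is why the claims place the rotator between the DMA controller and the display driver.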
2,600
9,877
9,877
14,628,113
2,612
The present disclosure relates to techniques for capturing and displaying partial motion in VAR scenes. VAR scenes can include a plurality of images combined and oriented over any suitable geometry. Although VAR scenes may provide an immersive view of a static scene, current systems do not generally support VAR scenes that include dynamic content (e.g., content that varies over time). Embodiments of the present invention can capture, generate, and/or share VAR scenes, and can efficiently add dynamic content to a VAR scene, allowing VAR scenes including dynamic content to be uploaded, shared, or otherwise transmitted without prohibitive resource requirements. Dynamic content can be captured by a device and combined with a preexisting or simultaneously captured VAR scene, and the dynamic content may be played back upon selection.
1. A method comprising: capturing a first plurality of images associated with a virtual or augmented reality (VAR) scene; capturing a second plurality of images associated with a highlight of the VAR scene; determining differential data between the first plurality of images and the second plurality of images; detecting location and orientation data of a viewer device; rendering the VAR scene based on the location and orientation of the viewer device; receiving a selection of the highlight; and rendering the highlight within the VAR scene using the differential data. 2. The method of claim 1, wherein the VAR scene comprises a static base VAR scene and the highlight of the VAR scene which represents a portion of the VAR scene and varies with time. 3. The method of claim 1, wherein capturing a first plurality of images further comprises: detecting location and orientation data of a capture device; associating each image of the first plurality of images with the location and orientation data detected when that image is captured. 4. The method of claim 3, wherein capturing a second plurality of images further comprises: receiving a selection to capture the second plurality of images with the device in a starting location and orientation; and detecting location and orientation data of a capture device; associating the location and orientation data of the capture device over time during capture of the second plurality of images. 5. The method of claim 4, wherein rendering the VAR scene further comprises: assembling the first plurality of images into a spherical VAR scene based on the detected location and orientation data associated with each image of the first plurality of images; and overlaying at least one image from the second plurality of images on the spherical VAR scene based on the location and orientation data associated with the at least one image from the second plurality of images. 6. 
The method of claim 5, wherein overlaying at least one image from the second plurality of images on the spherical VAR scene further comprises: blending the at least one image from the second plurality of images with the spherical VAR scene. 7. The method of claim 6, wherein blending includes one or more of direct placement, blending using a Gaussian mask, poisson blending, and seaming. 8. A system comprising: a capture device configured to: capture a first plurality of images associated with a virtual or augmented reality (VAR) scene; capture a second plurality of images associated with a highlight of the VAR scene; and determine differential data between the first plurality of images and the second plurality of images; and a viewer device configured to: detect location and orientation data of the viewer device; render the VAR scene based on the location and orientation of the viewer device; receive a selection of the highlight; and render the highlight within the VAR scene using the differential data. 9. The system of claim 8, wherein the VAR scene comprises a static base VAR scene and the highlight of the VAR scene which represents a portion of the VAR scene and varies with time. 10. The system of claim 8, wherein capturing a first plurality of images further comprises: detecting location and orientation data of a capture device; associating each image of the first plurality of images with the location and orientation data detected when that image is captured. 11. The system of claim 10, wherein capturing a second plurality of images further comprises: receiving a selection to capture the second plurality of images with the device in a starting location and orientation; and detecting location and orientation data of a capture device; associating the location and orientation data of the capture device over time during capture of the second plurality of images. 12. 
The system of claim 11, wherein rendering the VAR scene further comprises: assembling the first plurality of images into a spherical VAR scene based on the detected location and orientation data associated with each image of the first plurality of images; and overlaying at least one image from the second plurality of images on the spherical VAR scene based on the location and orientation data associated with the at least one image from the second plurality of images. 13. The system of claim 12, wherein overlaying at least one image from the second plurality of images on the spherical VAR scene further comprises: blending the at least one image from the second plurality of images with the spherical VAR scene. 14. The system of claim 13, wherein blending includes one or more of direct placement, blending using a Gaussian mask, Poisson blending, and seaming. 15. A non-transitory computer readable storage medium including instructions stored thereon which, when executed by a processor, cause the processor to perform a method comprising: capturing a first plurality of images associated with a virtual or augmented reality (VAR) scene; capturing a second plurality of images associated with a highlight of the VAR scene; determining differential data between the first plurality of images and the second plurality of images; detecting location and orientation data of a viewer device; rendering the VAR scene based on the location and orientation of the viewer device; receiving a selection of the highlight; and rendering the highlight within the VAR scene using the differential data. 16. The non-transitory computer readable storage medium of claim 15, wherein the VAR scene comprises a static base VAR scene and the highlight of the VAR scene which represents a portion of the VAR scene and varies with time. 17. 
The non-transitory computer readable storage medium of claim 15, wherein capturing a first plurality of images further comprises: detecting location and orientation data of a capture device; associating each image of the first plurality of images with the location and orientation data detected when that image is captured. 18. The non-transitory computer readable storage medium of claim 17, wherein capturing a second plurality of images further comprises: receiving a selection to capture the second plurality of images with the device in a starting location and orientation; and detecting location and orientation data of a capture device; associating the location and orientation data of the capture device over time during capture of the second plurality of images. 19. The non-transitory computer readable storage medium of claim 18, wherein rendering the VAR scene further comprises: assembling the first plurality of images into a spherical VAR scene based on the detected location and orientation data associated with each image of the first plurality of images; and overlaying at least one image from the second plurality of images on the spherical VAR scene based on the location and orientation data associated with the at least one image from the second plurality of images. 20. The non-transitory computer readable storage medium of claim 19, wherein overlaying at least one image from the second plurality of images on the spherical VAR scene further comprises: blending the at least one image from the second plurality of images with the spherical VAR scene, wherein blending includes one or more of direct placement, blending using a Gaussian mask, poisson blending, and seaming.
The present disclosure relates to techniques for capturing and displaying partial motion in VAR scenes. VAR scenes can include a plurality of images combined and oriented over any suitable geometry. Although VAR scenes may provide an immersive view of a static scene, current systems do not generally support VAR scenes that include dynamic content (e.g., content that varies over time). Embodiments of the present invention can capture, generate, and/or share VAR scenes, and can efficiently add dynamic content to a VAR scene, allowing VAR scenes including dynamic content to be uploaded, shared, or otherwise transmitted without prohibitive resource requirements. Dynamic content can be captured by a device and combined with a preexisting or simultaneously captured VAR scene, and the dynamic content may be played back upon selection. 1. A method comprising: capturing a first plurality of images associated with a virtual or augmented reality (VAR) scene; capturing a second plurality of images associated with a highlight of the VAR scene; determining differential data between the first plurality of images and the second plurality of images; detecting location and orientation data of a viewer device; rendering the VAR scene based on the location and orientation of the viewer device; receiving a selection of the highlight; and rendering the highlight within the VAR scene using the differential data. 2. The method of claim 1, wherein the VAR scene comprises a static base VAR scene and the highlight of the VAR scene which represents a portion of the VAR scene and varies with time. 3. 
The method of claim 1, wherein capturing a first plurality of images further comprises: detecting location and orientation data of a capture device; associating each image of the first plurality of images with the location and orientation data detected when that image is captured. 4. The method of claim 3, wherein capturing a second plurality of images further comprises: receiving a selection to capture the second plurality of images with the device in a starting location and orientation; and detecting location and orientation data of a capture device; associating the location and orientation data of the capture device over time during capture of the second plurality of images. 5. The method of claim 4, wherein rendering the VAR scene further comprises: assembling the first plurality of images into a spherical VAR scene based on the detected location and orientation data associated with each image of the first plurality of images; and overlaying at least one image from the second plurality of images on the spherical VAR scene based on the location and orientation data associated with the at least one image from the second plurality of images. 6. The method of claim 5, wherein overlaying at least one image from the second plurality of images on the spherical VAR scene further comprises: blending the at least one image from the second plurality of images with the spherical VAR scene. 7. The method of claim 6, wherein blending includes one or more of direct placement, blending using a Gaussian mask, poisson blending, and seaming. 8. 
A system comprising: a capture device configured to: capture a first plurality of images associated with a virtual or augmented reality (VAR) scene; capture a second plurality of images associated with a highlight of the VAR scene; and determine differential data between the first plurality of images and the second plurality of images; and a viewer device configured to: detect location and orientation data of the viewer device; render the VAR scene based on the location and orientation of the viewer device; receive a selection of the highlight; and render the highlight within the VAR scene using the differential data. 9. The system of claim 8, wherein the VAR scene comprises a static base VAR scene and the highlight of the VAR scene which represents a portion of the VAR scene and varies with time. 10. The system of claim 8, wherein capturing a first plurality of images further comprises: detecting location and orientation data of a capture device; associating each image of the first plurality of images with the location and orientation data detected when that image is captured. 11. The system of claim 10, wherein capturing a second plurality of images further comprises: receiving a selection to capture the second plurality of images with the device in a starting location and orientation; and detecting location and orientation data of a capture device; associating the location and orientation data of the capture device over time during capture of the second plurality of images. 12. The system of claim 11, wherein rendering the VAR scene further comprises: assembling the first plurality of images into a spherical VAR scene based on the detected location and orientation data associated with each image of the first plurality of images; and overlaying at least one image from the second plurality of images on the spherical VAR scene based on the location and orientation data associated with the at least one image from the second plurality of images. 13. 
The system of claim 12, wherein overlaying at least one image from the second plurality of images on the spherical VAR scene further comprises: blending the at least one image from the second plurality of images with the spherical VAR scene. 14. The system of claim 13, wherein blending includes one or more of direct placement, blending using a Gaussian mask, Poisson blending, and seaming. 15. A non-transitory computer readable storage medium including instructions stored thereon which, when executed by a processor, cause the processor to perform a method comprising: capturing a first plurality of images associated with a virtual or augmented reality (VAR) scene; capturing a second plurality of images associated with a highlight of the VAR scene; determining differential data between the first plurality of images and the second plurality of images; detecting location and orientation data of a viewer device; rendering the VAR scene based on the location and orientation of the viewer device; receiving a selection of the highlight; and rendering the highlight within the VAR scene using the differential data. 16. The non-transitory computer readable storage medium of claim 15, wherein the VAR scene comprises a static base VAR scene and the highlight of the VAR scene which represents a portion of the VAR scene and varies with time. 17. The non-transitory computer readable storage medium of claim 15, wherein capturing a first plurality of images further comprises: detecting location and orientation data of a capture device; associating each image of the first plurality of images with the location and orientation data detected when that image is captured. 18. 
The non-transitory computer readable storage medium of claim 17, wherein capturing a second plurality of images further comprises: receiving a selection to capture the second plurality of images with the device in a starting location and orientation; and detecting location and orientation data of a capture device; associating the location and orientation data of the capture device over time during capture of the second plurality of images. 19. The non-transitory computer readable storage medium of claim 18, wherein rendering the VAR scene further comprises: assembling the first plurality of images into a spherical VAR scene based on the detected location and orientation data associated with each image of the first plurality of images; and overlaying at least one image from the second plurality of images on the spherical VAR scene based on the location and orientation data associated with the at least one image from the second plurality of images. 20. The non-transitory computer readable storage medium of claim 19, wherein overlaying at least one image from the second plurality of images on the spherical VAR scene further comprises: blending the at least one image from the second plurality of images with the spherical VAR scene, wherein blending includes one or more of direct placement, blending using a Gaussian mask, poisson blending, and seaming.
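The "differential data" step in these claims, storing only the difference between the highlight frames and the static base scene, is what lets dynamic content travel without prohibitive resource requirements, per the abstract. A minimal sketch under assumed representations: frames are flat lists of pixel values, and both function names are illustrative, not the patent's method.

```python
# Illustrative sketch of the claimed differential-data step. Frames are
# modeled as flat lists of pixel intensities; a real implementation would
# operate on image buffers. Function names are assumptions.

def differential_data(base_frame, highlight_frame):
    """Per-pixel difference between a highlight frame and the base frame."""
    return [h - b for b, h in zip(base_frame, highlight_frame)]

def render_highlight(base_frame, diff):
    """Reapply the stored differential to reconstruct the highlight frame."""
    return [b + d for b, d in zip(base_frame, diff)]
```

Since most of a highlight frame matches the static base scene, the differential is mostly zeros and compresses far better than the raw second plurality of images.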
2,600
9,878
9,878
15,515,275
2,616
Examples of techniques to display graphical representations of analyzed augmented reality (AR) consumption data are disclosed. In one example implementation according to aspects of the present disclosure, consumption data generated from an AR experience is analyzed. A graphical representation of the analyzed consumption data is then displayed.
1. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to: analyze consumption data generated from an augmented reality (AR) experience by determining which of a plurality of AR overlay sets were consumed by a user of the AR experience; and display a graphical representation of the analyzed consumption data. 2. The non-transitory computer-readable storage medium of claim 1, further storing instructions that, when executed by the processor, cause the processor to: alter at least one of the plurality of augmented reality overlay sets based at least in part on the graphical representation of the analyzed consumption data. 3. The non-transitory computer-readable storage medium of claim 1, wherein the graphical representation includes quantitative measures of the degree to which the plurality of AR overlay sets and an associated plurality of AR events were consumed by the user of the AR experience. 4. The non-transitory computer-readable storage medium of claim 1, wherein analyzing the consumption data generated from the AR experience includes determining a frequency of AR overlay consumption. 5. The non-transitory computer-readable storage medium of claim 1, wherein the consumption data generated from the AR experience is received from a plurality of user devices, the user devices displaying the AR experience. 6. A computing system comprising: an overlay analysis module to analyze consumption data generated from an augmented reality (AR) experience by determining which of a plurality of AR overlay sets and which of a plurality of AR events associated with the plurality of overlay sets were consumed by a user of the AR experience; and an overlay display module to display a graphical representation of the analyzed consumption data on a display. 7. 
The system of claim 6, further comprising: an overlay alteration module to enable alteration of at least one of the AR overlay sets based at least in part on the graphical representation. 8. The system of claim 6, further comprising: a consumption data module to receive the consumption data from a plurality of user devices. 9. The system of claim 8, wherein the graphical representation includes illustrating a user journey through the AR experience along with the analyzed consumption data. 10. The system of claim 8, wherein the graphical representation includes a plurality of nodes connected by edges, wherein the nodes comprise AR overlay sets visible at a given time, and wherein the edges comprise user actions or system events that cause the AR overlay to change visibility. 11. A method comprising: receiving, by a computing system, consumption data from a plurality of user devices based on an augmented reality (AR) experience; analyzing, by the computing system, the consumption data by determining a frequency of consumption of a plurality of AR overlay sets; and displaying, by the computing system, a graphical representation of the frequency of consumption of the plurality of AR overlay sets. 12. The method of claim 11, wherein the consumption data includes a user edge selection and an associated node viewed when the user edge selection occurred, wherein the associated node comprises an AR overlay set visible at a given time; and wherein the user edge selection comprises a user action or system event that causes the AR overlay set to change visibility. 13. The method of claim 12, wherein causing the AR overlay set to change visibility includes navigating to another AR overlay set. 14. The method of claim 12, wherein the graphical representation of the frequency of consumption of the plurality of AR overlay sets includes graphical representations of the user edge selection and the associated node viewed when the user edge selection occurred. 15. 
The method of claim 11, wherein the graphical representation is displayed as a user journey through the AR experience.
Examples of techniques to display graphical representations of analyzed augmented reality (AR) consumption data are disclosed. In one example implementation according to aspects of the present disclosure, consumption data generated from an AR experience is analyzed. A graphical representation of the analyzed consumption data is then displayed. 1. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to: analyze consumption data generated from an augmented reality (AR) experience by determining which of a plurality of AR overlay sets were consumed by a user of the AR experience; and display a graphical representation of the analyzed consumption data. 2. The non-transitory computer-readable storage medium of claim 1, further storing instructions that, when executed by the processor, cause the processor to: alter at least one of the plurality of augmented reality overlay sets based at least in part on the graphical representation of the analyzed consumption data. 3. The non-transitory computer-readable storage medium of claim 1, wherein the graphical representation includes quantitative measures of the degree to which the plurality of AR overlay sets and an associated plurality of AR events were consumed by the user of the AR experience. 4. The non-transitory computer-readable storage medium of claim 1, wherein analyzing the consumption data generated from the AR experience includes determining a frequency of AR overlay consumption. 5. The non-transitory computer-readable storage medium of claim 1, wherein the consumption data generated from the AR experience is received from a plurality of user devices, the user devices displaying the AR experience. 6. 
A computing system comprising: an overlay analysis module to analyze consumption data generated from an augmented reality (AR) experience by determining which of a plurality of AR overlay sets and which of a plurality of AR events associated with the plurality of overlay sets were consumed by a user of the AR experience; and an overlay display module to display a graphical representation of the analyzed consumption data on a display. 7. The system of claim 6, further comprising: an overlay alteration module to enable alteration of at least one of the AR overlay sets based at least in part on the graphical representation. 8. The system of claim 6, further comprising: a consumption data module to receive the consumption data from a plurality of user devices. 9. The system of claim 8, wherein the graphical representation includes illustrating a user journey through the AR experience along with the analyzed consumption data. 10. The system of claim 8, wherein the graphical representation includes a plurality of nodes connected by edges, wherein the nodes comprise AR overlay sets visible at a given time, and wherein the edges comprise user actions or system events that cause the AR overlay to change visibility. 11. A method comprising: receiving, by a computing system, consumption data from a plurality of user devices based on an augmented reality (AR) experience; analyzing, by the computing system, the consumption data by determining a frequency of consumption of a plurality of AR overlay sets; and displaying, by the computing system, a graphical representation of the frequency of consumption of the plurality of AR overlay sets. 12. 
The method of claim 11, wherein the consumption data includes a user edge selection and an associated node viewed when the user edge selection occurred, wherein the associated node comprises an AR overlay set visible at a given time; and wherein the user edge selection comprises a user action or system event that causes the AR overlay set to change visibility. 13. The method of claim 12, wherein causing the AR overlay set to change visibility includes navigating to another AR overlay set. 14. The method of claim 12, wherein the graphical representation of the frequency of consumption of the plurality of AR overlay sets includes graphical representations of the user edge selection and the associated node viewed when the user edge selection occurred. 15. The method of claim 11, wherein the graphical representation is displayed as a user journey through the AR experience.
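The graph structure described in claims 10-12 (AR overlay sets as nodes, user actions or system events as edges) and the frequency analysis of claim 11 can be sketched as follows. This is a minimal illustration with invented overlay names and an invented event-log shape; the patent does not prescribe any particular data format.

```python
from collections import Counter

# Hypothetical consumption log: each entry is (node_viewed, edge_selected),
# i.e. the AR overlay set that was visible when the user action occurred.
consumption_log = [
    ("intro_overlay", "tap_product"),
    ("product_overlay", "swipe_next"),
    ("intro_overlay", "tap_product"),
    ("product_overlay", "tap_back"),
]

def analyze_consumption(log):
    """Return per-node and per-edge consumption frequencies."""
    node_freq = Counter(node for node, _ in log)
    edge_freq = Counter(log)  # (node, edge) pairs are the graph's edges
    return node_freq, edge_freq

nodes, edges = analyze_consumption(consumption_log)
print(nodes.most_common())  # overlay sets ranked by consumption frequency
```

A graphical representation (the claimed "user journey") would then render each node sized by its count and each edge weighted by how often that transition was taken.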
2,600
9,879
9,879
14,954,597
2,632
A system for exerting forces on a user. The system includes a user-mounted device including one or more masses, one or more sensors configured to acquire sensor data, and a processor coupled to the one or more sensors. The processor is configured to determine at least one of an orientation and a position associated with the user-mounted device based on the sensor data. The processor is further configured to compute a force to be exerted on the user via the one or more masses based on a force direction associated with a force event and at least one of the orientation and the position. The processor is further configured to generate, based on the force, a control signal to change a position of the one or more masses relative to the user-mounted device.
1. A system for exerting forces on a user, the system comprising: a user-mounted device including one or more masses; one or more sensors configured to acquire sensor data; and a processor coupled to the one or more sensors and configured to: determine at least one of an orientation and a position of the user-mounted device based on the sensor data; compute a force to be exerted on the user via the one or more masses based on a force direction associated with a force event and at least one of the orientation and the position; and generate, based on the computed force, an actuator control signal to change a position of the one or more masses relative to the user-mounted device. 2. The system of claim 1, wherein the processor is further configured to determine that at least one of the orientation and the position has changed, and, in response, generate a second actuator control signal to reposition at least one mass relative to the user-mounted device. 3. The system of claim 1, wherein the processor is configured to compute the force to be exerted on the user by computing a center of gravity associated with the one or more masses. 4. The system of claim 1, wherein the user-mounted device further comprises one or more actuators coupled to the processor and configured to reposition the one or more masses relative to the user-mounted device based on the actuator control signal. 5. The system of claim 4, wherein the one or more masses comprise one or more weights, the user-mounted device further includes one or more tracks, and the one or more actuators are configured to move the one or more weights along the one or more tracks based on the actuator control signal. 6. The system of claim 4, wherein the one or more masses comprise one or more weights, the user-mounted device further includes one or more hinges, and the one or more actuators are configured to rotate the one or more weights about the one or more hinges based on the actuator control signal. 7. 
The system of claim 4, wherein the one or more masses comprise a fluid mass, the user-mounted device further includes one or more tubes, and the one or more actuators are configured to move at least a portion of the fluid mass through the one or more tubes based on the actuator control signal. 8. The system of claim 1, wherein the force is computed based on the orientation of the user-mounted device, and the processor is further configured to determine that the user-mounted device has reached a target orientation associated with the force event, and, in response, generate a second actuator control signal to reposition the one or more masses relative to the user-mounted device. 9. The system of claim 1, wherein the one or more sensors comprise at least one of a global navigation satellite system (GNSS) receiver, a magnetometer, an accelerometer, and an optical sensor. 10. A non-transitory computer-readable storage medium including instructions that, when executed by a processor, configure the processor to exert forces on a user, by performing the steps of: determining at least one of an orientation and a position of a force device based on sensor data; computing a force to be exerted on the user via one or more masses included in the force device based on a force direction associated with a force event and at least one of the orientation and the position; and generating, based on the computed force, an actuator control signal to change a position of the one or more masses relative to the force device. 11. The non-transitory computer-readable storage medium of claim 10, wherein the force event is associated with a navigation instruction, and the processor is configured to generate the actuator control signal to change the position of the one or more masses relative to the force device in response to determining that the force device is approaching a street intersection. 12. 
The non-transitory computer-readable storage medium of claim 11, wherein the processor is further configured to: receive a second force event associated with a second navigation instruction; compute a second force to be exerted via the one or more masses based on a second force direction associated with the second force event and at least one of the orientation and the position of the force device; and generate a second actuator control signal to reposition the one or more masses based on the second force in response to determining that the force device is approaching a second street intersection. 13. The non-transitory computer-readable storage medium of claim 10, wherein the force device comprises a head-mounted device, and the orientation and the position of the head-mounted device comprise a head orientation and a head position, respectively. 14. The non-transitory computer-readable storage medium of claim 10, further comprising generating the force event by identifying an object in a surrounding environment, wherein the object is located in the force direction relative to the force device. 15. The non-transitory computer-readable storage medium of claim 10, further comprising generating the force event in response to determining that the orientation of the force device is outside of a threshold range, wherein the force is configured to be exerted on the user to instruct the user to return the force device to within the threshold range. 16. The non-transitory computer-readable storage medium of claim 15, wherein the sensor data is acquired via an angular sensor, and the threshold range comprises an angular range associated with the posture of the user. 17. The non-transitory computer-readable storage medium of claim 15, wherein the sensor data is acquired via a magnetometer, and the threshold range is associated with a direction towards a destination to which the user is navigating. 18. 
The non-transitory computer-readable storage medium of claim 10, further comprising determining that the orientation has changed based on the sensor data, and, in response, repositioning at least one mass based on an updated orientation. 19. A method for exerting forces on a user, the method comprising: determining at least one of an orientation and a position of a user-mounted device based on sensor data; computing a force to be exerted on the user via one or more masses included in the user-mounted device based on a force direction associated with a force event and at least one of the orientation and the position; and generating, based on the computed force, an actuator control signal to change a position of the one or more masses relative to the user-mounted device. 20. The method of claim 19, wherein the force event is associated with a navigation instruction, and generating the actuator control signal is performed in response to determining that the user-mounted device is approaching a street intersection.
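Claim 3 computes the force via the center of gravity of the movable masses, and claims 1 and 19 combine a world-frame force direction with the device orientation. A minimal sketch of both steps, under assumed simplifications (weights on a 1-D track, a 2-D world frame, and invented function names; none of these appear in the claims):

```python
import math

def center_of_gravity(masses):
    """masses: list of (mass_kg, position_m) along a 1-D track."""
    total = sum(m for m, _ in masses)
    return sum(m * x for m, x in masses) / total

def mass_shift_for_force(force_dir_world, device_heading_rad):
    """Rotate a world-frame force direction into the device frame to
    decide which way the weights should slide along the track."""
    fx, fy = force_dir_world
    c, s = math.cos(-device_heading_rad), math.sin(-device_heading_rad)
    return (c * fx - s * fy, s * fx + c * fy)

# Two 200 g weights at -0.1 m and +0.3 m put the center of gravity at 0.1 m.
print(center_of_gravity([(0.2, -0.1), (0.2, 0.3)]))  # 0.1
```

The claimed control loop would repeat this whenever the sensors report a new orientation (claim 2): recompute the device-frame direction and issue a fresh actuator control signal repositioning the masses.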
2,600
9,880
9,880
15,630,462
2,659
Various embodiments are generally directed to systems for summarizing data visualizations (i.e., images of data visualizations), such as a graph image, for instance. Some embodiments are particularly directed to a personalized graph summarizer that analyzes a data visualization, or image, to detect pre-defined patterns within the data visualization, and produces a textual summary of the data visualization based on the pre-defined patterns detected within the data visualization. In various embodiments, the personalized graph summarizer may include features to adapt to the preferences of a user for generating an automated, personalized computer-generated narrative. For instance, additional pre-defined patterns may be created for detection and/or the textual summary may be tailored based on user preferences. In some such instances, one or more of the user preferences may be automatically determined by the personalized graph summarizer without requiring the user to explicitly indicate them. Embodiments may integrate machine learning and computer vision concepts.
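The personalization step described above (tailoring the textual summary to user preferences) can be sketched as follows. The pattern names, template strings, and preference set are all invented for illustration; the abstract does not specify any concrete representation.

```python
# Hypothetical output of the pattern-detection stage: each detected
# pattern carries a sentence template and the value to splice into it.
detected = [
    {"pattern": "peak", "text": "The series peaks at {x}.", "x": "2019"},
    {"pattern": "dip",  "text": "A dip occurs at {x}.",     "x": "2020"},
]
user_prefs = {"peak"}  # patterns this user cares about (assumed learned)

# Keep only preferred patterns and fill each template with its value.
summary = " ".join(
    d["text"].format(x=d["x"]) for d in detected if d["pattern"] in user_prefs
)
print(summary)  # "The series peaks at 2019."
```

In the described system the preference set could be inferred automatically (e.g. from which summaries the user edits or keeps) rather than asked for explicitly.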
1. An apparatus comprising a processor and a storage to store instructions that, when executed by the processor, cause the processor to perform operations comprising: identify a data visualization comprising a graph image; determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types, each graph-type correlation score based on a comparison of at least a portion of the graph image with one or more graph-type models associated with each graph type of the plurality of graph types; evaluate the set of graph-type correlation scores to identify a graph type of the graph image; retrieve a set of patterns based on the graph type of the graph image, each pattern in the set of patterns to include one or more pattern examples; determine a set of region of interest correlation scores for the graph image based on matching the one or more pattern examples of each pattern in the set of patterns with at least a portion of the graph image, the set of region of interest correlation scores to include at least one region of interest correlation score for each pattern in the set of patterns; evaluate the set of region of interest correlation scores to identify one or more candidate regions of interest of the graph image, each of the one or more candidate regions of interest to include a portion of the graph image; retrieve a set of pattern models based on the set of candidate regions of interest of the graph image, each candidate region of interest in the set of candidate regions of interest associated with one pattern model in the set of pattern models, and each pattern model in the set of pattern models associated with one pattern in the set of patterns; compare each candidate region of interest in the set of candidate regions of interest to an associated pattern model in the set of pattern models to determine a set of pattern model correlation 
scores, the set of pattern model correlation scores to include a pattern model correlation score for each candidate region of interest of the one or more candidate regions of interest; identify one or more detected patterns based on the set of pattern model correlation scores; retrieve one or more text templates based on the one or more detected patterns, the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arrange the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generate a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. 2. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. 3. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: identify a component of the graph image based on the graph type; detect a portion of the graph image with potential contextual information; and determine contextual information is absent from the portion of the graph image with potential contextual information based on the component of the graph image identified based on the graph type. 4. 
The apparatus of claim 1, matching a pattern example of a pattern in the set of patterns with at least a portion of the graph image comprising: overlay at least a portion of the pattern example on the graph image in a plurality of positions; and compute a region of interest correlation score in the set of region of interest correlation scores for each of the plurality of positions. 5. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: receive an additional pattern example; and update a pattern model in the set of pattern models based on the additional pattern example. 6. The apparatus of claim 1, each pattern model correlation score to indicate a likelihood of a respective candidate region of interest of the one or more candidate regions of interest including an associated pattern. 7. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: present the one or more text templates arranged based on the priority level associated with each template sentence via a user interface; arrange the one or more text templates in an updated order based on input received via the user interface; alter a priority level of at least one of the one or more text templates based on the updated order; and generate the textual description of the graph image based on the priority level associated with each text template, the priority level associated with each text template to include the priority level of the at least one of the one or more text templates altered based on the updated order. 8. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: alter the priority level of a text template based on the input received via a user interface. 9. 
The apparatus of claim 1, at least one pattern in the set of patterns comprising a personalized pattern, wherein the processor is caused to perform operations comprising: create the personalized pattern based on one or more example graph images and one or more pattern examples identified in the example graph images based on input received via a user interface. 10. The apparatus of claim 9, wherein the processor is caused to perform operations comprising: associate one or more of a priority level, a template sentence, or a graph type with the personalized pattern based on input received via the user interface. 11. A computer-implemented method, comprising: identifying a data visualization comprising a graph image; determining a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types, each graph-type correlation score based on a comparison of at least a portion of the graph image with one or more graph-type models associated with each graph type of the plurality of graph types; evaluating the set of graph-type correlation scores to identify a graph type of the graph image; retrieving a set of patterns based on the graph type of the graph image, each pattern in the set of patterns to include one or more pattern examples; determining a set of region of interest correlation scores for the graph image based on matching the one or more pattern examples of each pattern in the set of patterns with at least a portion of the graph image, the set of region of interest correlation scores to include at least one region of interest correlation score for each pattern in the set of patterns; evaluating the set of region of interest correlation scores to identify one or more candidate regions of interest of the graph image, each of the one or more candidate regions of interest to include a portion of the graph image; retrieving a set of pattern 
models based on the set of candidate regions of interest of the graph image, each candidate region of interest in the set of candidate regions of interest associated with one pattern model in the set of pattern models, and each pattern model in the set of pattern models associated with one pattern in the set of patterns; comparing each candidate region of interest in the set of candidate regions of interest to an associated pattern model in the set of pattern models to determine a set of pattern model correlation scores, the set of pattern model correlation scores to include a pattern model correlation score for each candidate region of interest of the one or more candidate regions of interest; identifying one or more detected patterns based on the set of pattern model correlation scores; retrieving one or more text templates based on the one or more detected patterns, the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arranging the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generating a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. 12. The computer-implemented method of claim 11, comprising: detecting a portion of the graph image with contextual information; extracting a textual element from the portion of the graph image with contextual information; and inserting at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. 13. 
The computer-implemented method of claim 11, comprising: identifying a component of the graph image based on the graph type; detecting a portion of the graph image with potential contextual information; and determining contextual information is absent from the portion of the graph image with potential contextual information based on the component of the graph image identified based on the graph type. 14. The computer-implemented method of claim 11, matching a pattern example of a pattern in the set of patterns with at least a portion of the graph image comprising: overlaying at least a portion of the pattern example on the graph image in a plurality of positions; and computing a region of interest correlation score in the set of region of interest correlation scores for each of the plurality of positions. 15. The computer-implemented method of claim 11, comprising: receiving an additional pattern example; and updating a pattern model in the set of pattern models based on the additional pattern example. 16. The computer-implemented method of claim 11, each pattern model correlation score to indicate a likelihood of a respective candidate region of interest of the one or more candidate regions of interest including an associated pattern. 17. The computer-implemented method of claim 11, comprising: presenting the one or more text templates arranged based on the priority level associated with each template sentence via a user interface; arranging the one or more text templates in an updated order based on input received via the user interface; altering a priority level of at least one of the one or more text templates based on the updated order; and generating the textual description of the graph image based on the priority level associated with each text template, the priority level associated with each text template to include the priority level of the at least one of the one or more text templates altered based on the updated order. 18. 
The computer-implemented method of claim 11, comprising: altering the priority level of a text template based on input received via a user interface. 19. The computer-implemented method of claim 11, wherein at least one pattern in the set of patterns comprises a personalized pattern, and comprising creating the personalized pattern based on one or more example graph images and one or more pattern examples identified in the example graph images based on input received via a user interface. 20. The computer-implemented method of claim 19, comprising associating one or more of a priority level, a template sentence, or a graph type with the personalized pattern based on input received via the user interface. 21. A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, the computer-program product including instructions operable to cause a processor to perform operations comprising: identify a data visualization comprising a graph image; determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types, each graph-type correlation score based on a comparison of at least a portion of the graph image with one or more graph-type models associated with each graph type of the plurality of graph types; evaluate the set of graph-type correlation scores to identify a graph type of the graph image; retrieve a set of patterns based on the graph type of the graph image, each pattern in the set of patterns to include one or more pattern examples; determine a set of region of interest correlation scores for the graph image based on matching the one or more pattern examples of each pattern in the set of patterns with at least a portion of the graph image, the set of region of interest correlation scores to include at least one region of interest correlation score for each pattern in the set of patterns; 
evaluate the set of region of interest correlation scores to identify one or more candidate regions of interest of the graph image, each of the one or more candidate regions of interest to include a portion of the graph image; retrieve a set of pattern models based on the set of candidate regions of interest of the graph image, each candidate region of interest in the set of candidate regions of interest associated with one pattern model in the set of pattern models, and each pattern model in the set of pattern models associated with one pattern in the set of patterns; compare each candidate region of interest in the set of candidate regions of interest to an associated pattern model in the set of pattern models to determine a set of pattern model correlation scores, the set of pattern model correlation scores to include a pattern model correlation score for each candidate region of interest of the one or more candidate regions of interest; identify one or more detected patterns based on the set of pattern model correlation scores; retrieve one or more text templates based on the one or more detected patterns, the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arrange the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generate a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. 22. 
The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. 23. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: identify a component of the graph image based on the graph type; detect a portion of the graph image with potential contextual information; and determine contextual information is absent from the portion of the graph image with potential contextual information based on the component of the graph image identified based on the graph type. 24. The computer-program product of claim 21, wherein to match a pattern example of a pattern in the set of patterns with at least a portion of the graph image the computer-program product includes instructions operable to cause the processor to perform operations comprising: overlay at least a portion of the pattern example on the graph image in a plurality of positions; and compute a region of interest correlation score in the set of region of interest correlation scores for each of the plurality of positions. 25. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: receive an additional pattern example; and update a pattern model in the set of pattern models based on the additional pattern example. 26. 
The computer-program product of claim 21, each pattern model correlation score to indicate a likelihood of a respective candidate region of interest of the one or more candidate regions of interest including an associated pattern. 27. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: present the one or more text templates arranged based on the priority level associated with each template sentence via a user interface; arrange the one or more text templates in an updated order based on input received via the user interface; alter a priority level of at least one of the one or more text templates based on the updated order; and generate the textual description of the graph image based on the priority level associated with each text template, the priority level associated with each text template to include the priority level of the at least one of the one or more text templates altered based on the updated order. 28. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: alter the priority level of a text template based on the input received via a user interface. 29. The computer-program product of claim 21, at least one pattern in the set of patterns comprising a personalized pattern, and the computer-program product including instructions operable to cause the processor to perform operations comprising: create the personalized pattern based on one or more example graph images and one or more pattern examples identified in the example graph images based on input received via a user interface. 30. The computer-program product of claim 29, including instructions operable to cause the processor to perform operations comprising: associate one or more of a priority level, a template sentence, or a graph type with the personalized pattern based on input received via the user interface.
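Claims 4, 14, and 24 recite overlaying a pattern example on the graph image in a plurality of positions and computing a region of interest correlation score for each position. A minimal sketch of that matching step, using normalized cross-correlation and an assumed score threshold (the patent does not specify the scoring function):

```python
import numpy as np

def region_of_interest_scores(graph_image, pattern_example):
    """Overlay the pattern example on the graph image at every valid
    position and compute a normalized cross-correlation score for each
    position. Inputs are 2-D grayscale arrays."""
    ih, iw = graph_image.shape
    ph, pw = pattern_example.shape
    pat = pattern_example - pattern_example.mean()
    pat_norm = np.linalg.norm(pat)
    scores = np.zeros((ih - ph + 1, iw - pw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            win = graph_image[y:y + ph, x:x + pw]
            win = win - win.mean()
            denom = np.linalg.norm(win) * pat_norm
            scores[y, x] = float((win * pat).sum() / denom) if denom else 0.0
    return scores

def candidate_regions(scores, threshold=0.8):
    """Positions whose score clears an assumed threshold become
    candidate regions of interest."""
    ys, xs = np.where(scores >= threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

In a full system, each candidate region would then be compared against its pattern model to produce the pattern model correlation scores of the later claim steps.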
Various embodiments are generally directed to systems for summarizing data visualizations (i.e., images of data visualizations), such as a graph image, for instance. Some embodiments are particularly directed to a personalized graph summarizer that analyzes a data visualization, or image, to detect pre-defined patterns within the data visualization, and produces a textual summary of the data visualization based on the pre-defined patterns detected within the data visualization. In various embodiments, the personalized graph summarizer may include features to adapt to the preferences of a user for generating an automated, personalized computer-generated narrative. For instance, additional pre-defined patterns may be created for detection and/or the textual summary may be tailored based on user preferences. In some such instances, one or more of the user preferences may be automatically determined by the personalized graph summarizer without requiring the user to explicitly indicate them. Embodiments may integrate machine learning and computer vision concepts. 1. 
An apparatus comprising a processor and a storage to store instructions that, when executed by the processor, cause the processor to perform operations comprising: identify a data visualization comprising a graph image; determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types, each graph-type correlation score based on a comparison of at least a portion of the graph image with one or more graph-type models associated with each graph type of the plurality of graph types; evaluate the set of graph-type correlation scores to identify a graph type of the graph image; retrieve a set of patterns based on the graph type of the graph image, each pattern in the set of patterns to include one or more pattern examples; determine a set of region of interest correlation scores for the graph image based on matching the one or more pattern examples of each pattern in the set of patterns with at least a portion of the graph image, the set of region of interest correlation scores to include at least one region of interest correlation score for each pattern in the set of patterns; evaluate the set of region of interest correlation scores to identify one or more candidate regions of interest of the graph image, each of the one or more candidate regions of interest to include a portion of the graph image; retrieve a set of pattern models based on the set of candidate regions of interest of the graph image, each candidate region of interest in the set of candidate regions of interest associated with one pattern model in the set of pattern models, and each pattern model in the set of pattern models associated with one pattern in the set of patterns; compare each candidate region of interest in the set of candidate regions of interest to an associated pattern model in the set of pattern models to determine a set of pattern model correlation 
scores, the set of pattern model correlation scores to include a pattern model correlation score for each candidate region of interest of the one or more candidate regions of interest; identify one or more detected patterns based on the set of pattern model correlation scores; retrieve one or more text templates based on the one or more detected patterns, the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arrange the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generate a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. 2. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. 3. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: identify a component of the graph image based on the graph type; detect a portion of the graph image with potential contextual information; and determine contextual information is absent from the portion of the graph image with potential contextual information based on the component of the graph image identified based on the graph type. 4. 
The apparatus of claim 1, matching a pattern example of a pattern in the set of patterns with at least a portion of the graph image comprising: overlay at least a portion of the pattern example on the graph image in a plurality of positions; and compute a region of interest correlation score in the set of region of interest correlation scores for each of the plurality of positions. 5. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: receive an additional pattern example; and update a pattern model in the set of pattern models based on the additional pattern example. 6. The apparatus of claim 1, each pattern model correlation score to indicate a likelihood of a respective candidate region of interest of the one or more candidate regions of interest including an associated pattern. 7. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: present the one or more text templates arranged based on the priority level associated with each template sentence via a user interface; arrange the one or more text templates in an updated order based on input received via the user interface; alter a priority level of at least one of the one or more text templates based on the updated order; and generate the textual description of the graph image based on the priority level associated with each text template, the priority level associated with each text template to include the priority level of the at least one of the one or more text templates altered based on the updated order. 8. The apparatus of claim 1, wherein the processor is caused to perform operations comprising: alter the priority level of a text template based on the input received via a user interface. 9. 
The apparatus of claim 1, at least one pattern in the set of patterns comprising a personalized pattern, wherein the processor is caused to perform operations comprising: create the personalized pattern based on one or more example graph images and one or more pattern examples identified in the example graph images based on input received via a user interface. 10. The apparatus of claim 9, wherein the processor is caused to perform operations comprising: associate one or more of a priority level, a template sentence, or a graph type with the personalized pattern based on input received via the user interface. 11. A computer-implemented method, comprising: identifying a data visualization comprising a graph image; determining a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types, each graph-type correlation score based on a comparison of at least a portion of the graph image with one or more graph-type models associated with each graph type of the plurality of graph types; evaluating the set of graph-type correlation scores to identify a graph type of the graph image; retrieving a set of patterns based on the graph type of the graph image, each pattern in the set of patterns to include one or more pattern examples; determining a set of region of interest correlation scores for the graph image based on matching the one or more pattern examples of each pattern in the set of patterns with at least a portion of the graph image, the set of region of interest correlation scores to include at least one region of interest correlation score for each pattern in the set of patterns; evaluating the set of region of interest correlation scores to identify one or more candidate regions of interest of the graph image, each of the one or more candidate regions of interest to include a portion of the graph image; retrieving a set of pattern 
models based on the set of candidate regions of interest of the graph image, each candidate region of interest in the set of candidate regions of interest associated with one pattern model in the set of pattern models, and each pattern model in the set of pattern models associated with one pattern in the set of patterns; comparing each candidate region of interest in the set of candidate regions of interest to an associated pattern model in the set of pattern models to determine a set of pattern model correlation scores, the set of pattern model correlation scores to include a pattern model correlation score for each candidate region of interest of the one or more candidate regions of interest; identifying one or more detected patterns based on the set of pattern model correlation scores; retrieving one or more text templates based on the one or more detected patterns, the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arranging the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generating a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. 12. The computer-implemented method of claim 11, comprising: detecting a portion of the graph image with contextual information; extracting a textual element from the portion of the graph image with contextual information; and inserting at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. 13. 
The computer-implemented method of claim 11, comprising: identifying a component of the graph image based on the graph type; detecting a portion of the graph image with potential contextual information; and determining contextual information is absent from the portion of the graph image with potential contextual information based on the component of the graph image identified based on the graph type. 14. The computer-implemented method of claim 11, matching a pattern example of a pattern in the set of patterns with at least a portion of the graph image comprising: overlaying at least a portion of the pattern example on the graph image in a plurality of positions; and computing a region of interest correlation score in the set of region of interest correlation scores for each of the plurality of positions. 15. The computer-implemented method of claim 11, comprising: receiving an additional pattern example; and updating a pattern model in the set of pattern models based on the additional pattern example. 16. The computer-implemented method of claim 11, each pattern model correlation score to indicate a likelihood of a respective candidate region of interest of the one or more candidate regions of interest including an associated pattern. 17. The computer-implemented method of claim 11, comprising: presenting the one or more text templates arranged based on the priority level associated with each template sentence via a user interface; arranging the one or more text templates in an updated order based on input received via the user interface; altering a priority level of at least one of the one or more text templates based on the updated order; and generating the textual description of the graph image based on the priority level associated with each text template, the priority level associated with each text template to include the priority level of the at least one of the one or more text templates altered based on the updated order. 18. 
The computer-implemented method of claim 11, comprising: altering the priority level of a text template based on the input received via a user interface. 19. The computer-implemented method of claim 11, wherein at least one pattern in the set of patterns comprising a personalized pattern, and comprising creating the personalized pattern based on one or more example graph images and one or more pattern examples identified in the example graph images based on input received via a user interface. 20. The computer-implemented method of claim 19, comprising associating one or more of a priority level, a template sentence, or a graph type with the personalized pattern based on input received via the user interface. 21. A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, the computer-program product including instructions operable to cause a processor to perform operations comprising: identify a data visualization comprising a graph image; determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types, each graph-type correlation score based on a comparison of at least a portion of the graph image with one or more graph-type models associated with each graph type of the plurality of graph types; evaluate the set of graph-type correlation scores to identify a graph type of the graph image; retrieve a set of patterns based on the graph type of the graph image, each pattern in the set of patterns to include one or more pattern examples; determine a set of region of interest correlation scores for the graph image based on matching the one or more pattern examples of each pattern in the set of patterns with at least a portion of the graph image, the set of region of interest correlation scores to include at least one region of interest correlation score for each pattern in the set of patterns; 
evaluate the set of region of interest correlation scores to identify one or more candidate regions of interest of the graph image, each of the one or more candidate regions of interest to include a portion of the graph image; retrieve a set of pattern models based on the set of candidate regions of interest of the graph image, each candidate region of interest in the set of candidate regions of interest associated with one pattern model in the set of pattern models, and each pattern model in the set of pattern models associated with one pattern in the set of patterns; compare each candidate region of interest in the set of candidate regions of interest to an associated pattern model in the set of pattern models to determine a set of pattern model correlation scores, the set of pattern model correlation scores to include a pattern model correlation score for each candidate region of interest of the one or more candidate regions of interest; identify one or more detected patterns based on the set of pattern model correlation scores; retrieve one or more text templates based on the one or more detected patterns, the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arrange the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generate a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. 22. 
The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. 23. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: identify a component of the graph image based on the graph type; detect a portion of the graph image with potential contextual information; and determine contextual information is absent from the portion of the graph image with potential contextual information based on the component of the graph image identified based on the graph type. 24. The computer-program product of claim 21, wherein to match a pattern example of a pattern in the set of patterns with at least a portion of the graph image the computer-program product includes instructions operable to cause the processor to perform operations comprising: overlay at least a portion of the pattern example on the graph image in a plurality of positions; and compute a region of interest correlation score in the set of region of interest correlation scores for each of the plurality of positions. 25. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: receive an additional pattern example; and update a pattern model in the set of pattern models based on the additional pattern example. 26. 
The computer-program product of claim 21, each pattern model correlation score to indicate a likelihood of a respective candidate region of interest of the one or more candidate regions of interest including an associated pattern. 27. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: present the one or more text templates arranged based on the priority level associated with each template sentence via a user interface; arrange the one or more text templates in an updated order based on input received via the user interface; alter a priority level of at least one of the one or more text templates based on the updated order; and generate the textual description of the graph image based on the priority level associated with each text template, the priority level associated with each text template to include the priority level of the at least one of the one or more text templates altered based on the updated order. 28. The computer-program product of claim 21, including instructions operable to cause the processor to perform operations comprising: alter the priority level of a text template based on the input received via a user interface. 29. The computer-program product of claim 21, at least one pattern in the set of patterns comprising a personalized pattern, and the computer-program product including instructions operable to cause the processor to perform operations comprising: create the personalized pattern based on one or more example graph images and one or more pattern examples identified in the example graph images based on input received via a user interface. 30. The computer-program product of claim 29, including instructions operable to cause the processor to perform operations comprising: associate one or more of a priority level, a template sentence, or a graph type with the personalized pattern based on input received via the user interface.
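Claims 7, 17, and 27 describe arranging text templates by an associated priority level and re-assigning those priorities from an updated order received via a user interface. An illustrative sketch, with a hypothetical dict-based template representation that is not from the patent:

```python
def arrange_templates(templates):
    """Order text templates by ascending priority level (lower value =
    earlier in the summary) and join them into one textual description."""
    ordered = sorted(templates, key=lambda t: t["priority"])
    return " ".join(t["text"] for t in ordered)

def apply_user_order(templates, new_order):
    """Re-assign priority levels so they reflect an updated order
    supplied through a user interface: new_order[i] is the index of
    the template the user placed in position i."""
    for priority, idx in enumerate(new_order):
        templates[idx]["priority"] = priority
    return templates
```

Regenerating the description after `apply_user_order` yields the personalized ordering the claims describe.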
2,600
9,881
9,881
14,488,731
2,654
A feedback suppression system for detecting a feedback peak may include a controller configured to identify at least one peak of an audio input signal that includes audio data and acoustic feedback, apply at least one signature to the at least one peak, determine a response of the at least one peak to the at least one signature, identify the at least one peak as a feedback peak in response to the determined response, and set a notch filter at the identified frequency to eliminate the acoustic feedback of the audio input signal.
1. A feedback suppression system for detecting a feedback peak, comprising: a controller configured to: identify at least one peak of an audio input signal that includes audio data and acoustic feedback; apply at least one signature to the at least one peak; determine a response of the at least one peak to the at least one signature; identify the at least one peak as a feedback peak in response to the determined response; and set a notch filter at the identified frequency to eliminate the acoustic feedback of the audio input signal. 2. The system of claim 1, wherein the signature includes at least one test filter having a predefined gain. 3. The system of claim 1, wherein the signature includes at least one of a pitch shift and frequency shift. 4. The system of claim 1, wherein the response of the at least one peak includes a change in slope of a magnitude of the peak. 5. The system of claim 4, wherein the at least one peak is identified as a feedback peak in response to the change in slope exceeding a slope threshold. 6. The system of claim 1, wherein the response of the at least one peak includes determining a correlation coefficient between gain changes in the notch filter and slope changes in the audio input signal. 7. The system of claim 6, wherein the at least one peak is identified as a feedback peak in response to the correlation coefficient exceeding a correlation threshold. 8. A feedback suppression system for detecting a feedback peak, comprising: a controller programmed to: identify at least one peak of an audio input signal that includes audio data and acoustic feedback; measure at least one feature of the at least one peak; determine a signature test classifier of the peak in response to the at least one feature; select a signature test based on the classifier; apply the signature test to the at least one peak; and identify the at least one peak as a feedback peak in response to the selected signature test. 9. 
The system of claim 8, wherein the signature test classifier selects between one of two distinct signature tests. 10. The system of claim 8, wherein the signature test classifier identifies the at least one peak as a feedback peak without applying a signature test. 11. The system of claim 8, wherein the at least one feature includes a magnitude of the at least one peak. 12. The system of claim 8, wherein the signature test includes at least one of a break point test and a correlation test. 13. The system of claim 12, wherein the at least one feature includes a magnitude slope of the at least one peak. 14. The system of claim 13, wherein the signature test classifier selects the break point test in response to the magnitude slope exceeding a slope threshold and wherein the signature test classifier selects the correlation test in response to the magnitude slope not exceeding the slope threshold. 15. The system of claim 14, wherein a response of the at least one peak includes a change in slope of a magnitude of the at least one peak during the break point test. 16. The system of claim 15, wherein the at least one peak is identified as a feedback peak in response to the change in slope exceeding a slope threshold. 17. The system of claim 14, wherein a response of the at least one peak includes determining a correlation coefficient between gain changes in a notch filter and slope changes in a magnitude peak during the correlation test. 18. A feedback suppression system, comprising: a controller configured to: identify at least one peak of an audio input signal that includes audio data and acoustic feedback; apply at least one notch filter to the at least one peak; recognize a change in a slope of the at least one peak in response to the notch filter; compare the slope to a slope threshold; and adjust a gain of the notch filter in response to the slope threshold exceeding the slope. 19. 
The system of claim 18, wherein the change in slope is a post-detection slope recognized at a point of change in response to the application of the notch filter. 20. The system of claim 19, wherein the notch filter is applied during a testing period and the point of change and post-detection slope are each recognized within the testing period.
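Claim 1 ends by setting a notch filter at the identified feedback frequency to eliminate the acoustic feedback. One conventional way to realize such a filter is a biquad notch in the RBJ audio-EQ-cookbook form; the Q value below is an assumed example, not taken from the patent:

```python
import math

def notch_coefficients(f0, fs, q=30.0):
    """Biquad notch coefficients (RBJ audio-EQ-cookbook form) centered
    on the detected feedback frequency f0 (Hz) at sample rate fs (Hz).
    q is an assumed value controlling the notch width."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
    # Normalize so that a[0] == 1.
    return [bi / a[0] for bi in b], [1.0, a[1] / a[0], a[2] / a[0]]

def apply_biquad(samples, b, a):
    """Direct-form-I filtering of a list of samples."""
    out, xh, yh = [], [0.0, 0.0], [0.0, 0.0]
    for x in samples:
        y = (b[0] * x + b[1] * xh[0] + b[2] * xh[1]
             - a[1] * yh[0] - a[2] * yh[1])
        xh, yh = [x, xh[0]], [y, yh[0]]
        out.append(y)
    return out
```

A tone at the notch frequency is driven toward zero after the transient settles, while signal content away from the notch (including DC) passes with unity gain, which is the behavior the claim relies on to remove feedback without muting the program audio.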
A feedback suppression system for detecting a feedback peak may include a controller configured to identify at least one peak of an audio input signal that includes audio data and acoustic feedback, apply at least one signature to the at least one peak, determine a response of the at least one peak to the at least one signature, identify the at least one peak as a feedback peak in response to the determined response, and set a notch filter at the identified frequency to eliminate the acoustic feedback of the audio input signal. 1. A feedback suppression system for detecting a feedback peak, comprising: a controller configured to: identify at least one peak of an audio input signal that includes audio data and acoustic feedback; apply at least one signature to the at least one peak; determine a response of the at least one peak to the at least one signature; identify the at least one peak as a feedback peak in response to the determined response; and set a notch filter at the identified frequency to eliminate the acoustic feedback of the audio input signal. 2. The system of claim 1, wherein the signature includes at least one test filter having a predefined gain. 3. The system of claim 1, wherein the signature includes at least one of a pitch shift and frequency shift. 4. The system of claim 1, wherein the response of the at least one peak includes a change in slope of a magnitude of the peak. 5. The system of claim 4, wherein the at least one peak is identified as a feedback peak in response to the change in slope exceeding a slope threshold. 6. The system of claim 1, wherein the response of the at least one peak includes determining a correlation coefficient between gain changes in the notch filter and slope changes in the audio input signal. 7. The system of claim 6, wherein the at least one peak is identified as a feedback peak in response to the correlation coefficient exceeding a correlation threshold. 8. 
A feedback suppression system for detecting a feedback peak, comprising: a controller programmed to: identify at least one peak of an audio input signal that includes audio data and acoustic feedback; measure at least one feature of the at least one peak; determine a signature test classifier of the peak in response to the at least one feature; select a signature test based on the classifier; apply the signature test to the at least one peak; and identify the at least one peak as a feedback peak in response to the selected signature test. 9. The system of claim 8, wherein the signature test classifier selects between one of two distinct signature tests. 10. The system of claim 8, wherein the signature test classifier identifies the at least one peak as a feedback peak without applying a signature test. 11. The system of claim 8, wherein the at least one feature includes a magnitude of the at least one peak. 12. The system of claim 8, wherein the signature test includes at least one of a break point test and a correlation test. 13. The system of claim 12, wherein the at least one feature includes a magnitude slope of the at least one peak. 14. The system of claim 13, wherein the signature test classifier selects the break point test in response to the magnitude slope exceeding a slope threshold and wherein the signature test classifier selects the correlation test in response to the magnitude slope not exceeding the slope threshold. 15. The system of claim 14, wherein a response of the at least one peak includes a change in slope of a magnitude of the at least one peak during the break point test. 16. The system of claim 15, wherein the at least one peak is identified as a feedback peak in response to the change in slope exceeding a slope threshold. 17. 
The system of claim 14, wherein a response of the at least one peak includes determining a correlation coefficient between gain changes in a notch filter and slope changes in a magnitude peak during the correlation test. 18. A feedback suppression system, comprising: a controller configured to: identify at least one peak of an audio input signal that includes audio data and acoustic feedback; apply at least one notch filter to the at least one peak; recognize a change in a slope of the at least one peak in response to the notch filter; compare the slope to a slope threshold; and adjust a gain of the notch filter in response to the slope threshold exceeding the slope. 19. The system of claim 18, wherein the change in slope is a post-detection slope recognized at a point of change in response to the application of the notch filter. 20. The system of claim 19, wherein the notch filter is applied during a testing period and the point of change and post-detection slope are each recognized within the testing period.
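The break point test described in these claims (claims 14-16) can be illustrated with a minimal sketch: a candidate spectral peak is flagged as feedback when the slope of its magnitude trajectory changes sharply after a notch filter is applied. The function names, the least-squares slope estimate, and the threshold value below are all illustrative assumptions, not the patented implementation.

```python
def slope(samples):
    """Least-squares slope of a magnitude trajectory (dB per frame)."""
    n = len(samples)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(samples) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, samples))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def is_feedback_peak(pre_mags, post_mags, slope_threshold=1.0):
    """Break point test: does applying the notch change the slope sharply?

    pre_mags / post_mags are peak magnitudes (dB) before and after the
    test notch filter is applied; the threshold is an assumed value.
    """
    return abs(slope(post_mags) - slope(pre_mags)) > slope_threshold

# A steadily growing peak (~2 dB/frame) that collapses (~-3 dB/frame)
# once the notch is applied: a classic feedback signature.
pre = [10, 12, 14, 16]
post = [16, 13, 10, 7]
print(is_feedback_peak(pre, post))  # True: slope changed by ~5 dB/frame
```

The companion correlation test (claims 6-7 and 17) would instead accumulate (gain change, slope change) pairs over several notch-gain steps and threshold their correlation coefficient; audio data, which does not track the notch gain, correlates weakly, while feedback tracks it closely.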
2,600
9,882
9,882
15,464,230
2,684
A remote control device may be configured to be mounted over the toggle actuator of a light switch and to control a load control device. The remote control device may include a base portion and a rotating portion supported by the base portion so as to be rotatable about the base portion. The remote control device may include a control circuit, a wireless communication circuit, and a rotary encoder circuit. The rotary encoder circuit may be configured to translate a force applied to the rotating portion into input signals, and to operate as an antenna of the remote control device. The rotary encoder circuit may be configured to provide the input signals to the control circuit. The control circuit may be configured to translate the one or more input signals into control signals for transmission to the load control device via the wireless communication circuit.
1. A remote control device configured to be mounted over an installed light switch, the light switch having a switch actuator that extends through a faceplate of the light switch, the switch actuator operable between a first position and a second position to control whether power is delivered to an electrical load, the remote control device comprising: a base having a body that is configured to be mounted over the switch actuator of the light switch; a control interface that is configured to be attached to the base such that a rotatable portion of the control interface is rotatable around the base; a printed circuit board configured to be disposed in a cavity defined by the control interface; a wireless communication circuit; and a control circuit that is responsive to the control interface and is communicatively coupled to the wireless communication circuit, the control circuit configured to, in response to receiving an input signal from the control interface, cause the wireless communication circuit to transmit a control signal that causes an adjustment of an amount of power delivered to the electrical load, wherein the body of the base is further configured to, when the remote control device is mounted over the light switch with the switch actuator in the first position, receive a battery and a portion of the switch actuator such that the battery is disposed in a space vacated by the switch actuator when the switch actuator is operated from the second position to the first position. 2. The remote control device of claim 1, wherein the control circuit is configured to translate a force applied to the rotatable portion of the control interface into the control signal. 3. The remote control device of claim 2, wherein when the force is a rotational force, the control signal is indicative of a change in the amount of power delivered to the electrical load. 4. 
The remote control device of claim 3, wherein if the rotational force causes the rotatable portion to rotate a distance that does not exceed a predetermined distance, the control signal is indicative of changing the amount of power delivered to the electrical load by a predetermined amount. 5. The remote control device of claim 3, wherein if the rotational force causes the rotatable portion to rotate a distance that exceeds a predetermined distance, the control signal is indicative of continuously changing the amount of power delivered to the electrical load. 6. The remote control device of claim 2, wherein the control interface is configured to operably attach to the base such that the rotatable portion is resiliently biasable toward the base. 7. The remote control device of claim 6, wherein when the force causes the rotatable portion to be biased toward the base, the control signal is indicative of power being applied to, or power being removed from, the electrical load. 8. The remote control device of claim 6, wherein the control circuit is further configured to, when the force causes the rotatable portion to be biased toward the base for a predetermined amount of time, initiate a configuration procedure to associate the remote control device with a load control device that is configured to control the amount of power delivered to the electrical load. 9. The remote control device of claim 1, wherein the rotatable portion of the control interface comprises a disc-shaped front wall that defines a front surface, and an annular side wall that extends rearward relative to the front wall to define the cavity, the side wall configured to encircle the printed circuit board when the printed circuit board is disposed in the cavity. 10. The remote control device of claim 9, wherein when disposed in the cavity, the printed circuit board is located between the front wall and the battery. 11. 
The remote control device of claim 1, wherein the base is configured to engage with the switch actuator when the remote control device is mounted over the light switch. 12. The remote control device of claim 11, wherein the base defines an opening that is configured to receive the portion of the switch actuator, and wherein the base includes a deflectable arm that extends into the opening, the arm configured to engage with the switch actuator, and further includes a resilient strap that abuts the arm and is configured to bias the arm against the switch actuator. 13. The remote control device of claim 1, wherein the base defines a recess that is configured to at least partially receive the battery. 14. The remote control device of claim 13, wherein the base further defines an opening that extends therethrough, the opening configured to receive the portion of the switch actuator, the opening located adjacent to the recess such that the switch actuator does not interfere with the battery when the remote control device is mounted over the light switch. 15. The remote control device of claim 1, wherein the base is configured to, when the remote control device is mounted over the light switch, deter movement of the switch actuator when force is applied to the rotatable portion. 16. The remote control device of claim 1, wherein the control interface comprises an actuator, and wherein the control interface is configured to generate the input signal in response to actuation of the actuator. 17. The remote control device of claim 16, wherein the control circuit is further configured to cause the amount of power delivered to the electrical load to be adjusted in response to actuation of the actuator. 18. The remote control device of claim 16, wherein the control circuit is further configured to cause the electrical load to turn on in response to actuation of the actuator. 19. 
The remote control device of claim 16, wherein the control circuit is further configured to cause the electrical load to turn off in response to actuation of the actuator. 20. The remote control device of claim 1, wherein the printed circuit board is configured to be disposed in the cavity of the control interface such that the switch actuator of the light switch extends through a plane of the printed circuit board when the remote control device is mounted over the light switch. 21. The remote control device of claim 20, wherein the printed circuit board comprises an opening that extends therethrough, the opening configured to receive the switch actuator when the remote control device is mounted over the light switch. 22. The remote control device of claim 1, wherein a rear side of the printed circuit board is configured to removably retain the battery. 23. A remote control device configured to be mounted over an installed light switch, the light switch having a switch actuator that extends through a faceplate of the light switch, the switch actuator operable between a first position and a second position to control whether power is delivered to an electrical load, the remote control device comprising: a control interface configured to, when the remote control device is mounted over the light switch with the switch actuator in the first position, receive a battery and a portion of the switch actuator, such that the battery is disposed in a space vacated by the switch actuator when the switch actuator is operated from the second position to the first position and such that the switch actuator does not interfere with the battery; a wireless communication circuit; and a control circuit that is responsive to the control interface and communicatively coupled to the wireless communication circuit, the control circuit configured to, in response to receiving an input signal from the control interface, cause the wireless communication circuit to transmit a control signal 
that causes an adjustment of an amount of power delivered to the electrical load. 24. The remote control device of claim 23, further comprising: a base that is configured to at least partially receive the switch actuator when the remote control device is mounted over the light switch, wherein the control interface is configured to be operably coupled to the base and moveable relative to the base, and wherein the control interface is configured to generate the input signal in response to movement of the control interface relative to the base. 25. The remote control device of claim 24, wherein the control interface comprises a rotatable portion that is rotatable around the base, and wherein the control interface is configured to generate the input signal in response to rotation of the rotatable portion. 26. The remote control device of claim 25, further comprising a printed circuit board, wherein the printed circuit board defines an opening that extends therethrough, the opening configured to receive the switch actuator when the remote control device is mounted over the light switch. 27. The remote control device of claim 24, wherein the control interface comprises a slider that is configured to slide relative to the base, and wherein the control interface is configured to generate the input signal in response to translation of the slider. 28. The remote control device of claim 23, wherein the control interface comprises an actuator, and wherein the control interface is configured to generate the input signal in response to actuation of the actuator. 29. The remote control device of claim 28, wherein the control circuit is further configured to cause the amount of power delivered to the electrical load to be adjusted in response to actuation of the actuator. 30. 
A remote control device configured to be mounted over an installed light switch, the light switch having a switch actuator that extends through a faceplate of the light switch, the switch actuator configured to control whether power is delivered to an electrical load, the remote control device comprising: a control interface having a rotatable portion that includes a front wall and an annular side wall, the front wall and side wall defining a cavity; a base to which the control interface is configured to be operably coupled such that the rotatable portion is rotatable around the base, the base configured to, when the remote control device is mounted over the light switch, receive a battery and a portion of the switch actuator such that the switch actuator does not interfere with the battery. 31. The remote control device of claim 30, further comprising a printed circuit board that is configured to be disposed in the cavity of the rotatable portion such that the printed circuit board is located between the battery and the front wall. 32. The remote control device of claim 31, further comprising a control circuit that is mounted to the printed circuit board and operably coupled to the rotatable portion of the control interface, the control circuit configured to translate a force applied to the rotatable portion into a control signal that causes an adjustment of an amount of power delivered to the electrical load. 33. The remote control device of claim 32, further comprising a wireless communication circuit that is mounted to the printed circuit board and communicatively coupled to the control circuit, wherein the control circuit is further configured to cause the wireless communication circuit to transmit the control signal. 34. The remote control device of claim 32, wherein the control circuit is further configured to cause the amount of power delivered to the electrical load to be adjusted in response to rotation of the rotatable portion. 35. 
The remote control device of claim 31, wherein the printed circuit board is configured such that when the printed circuit board is disposed in the cavity, the switch actuator of the light switch extends through a plane of the printed circuit board when the remote control device is mounted over the light switch. 36. The remote control device of claim 35, wherein the printed circuit board defines an opening that extends therethrough, the opening configured to receive the switch actuator when the remote control device is mounted over the light switch. 37. The remote control device of claim 31, wherein a rear side of the printed circuit board is configured to removably retain the battery.
A remote control device may be configured to be mounted over the toggle actuator of a light switch and to control a load control device. The remote control device may include a base portion and a rotating portion supported by the base portion so as to be rotatable about the base portion. The remote control device may include a control circuit, a wireless communication circuit, and a rotary encoder circuit. The rotary encoder circuit may be configured to translate a force applied to the rotating portion into input signals, and to operate as an antenna of the remote control device. The rotary encoder circuit may be configured to provide the input signals to the control circuit. The control circuit may be configured to translate the one or more input signals into control signals for transmission to the load control device via the wireless communication circuit.1. A remote control device configured to be mounted over an installed light switch, the light switch having a switch actuator that extends through a faceplate of the light switch, the switch actuator operable between a first position and a second position to control whether power is delivered to an electrical load, the remote control device comprising: a base having a body that is configured to be mounted over the switch actuator of the light switch; a control interface that is configured to be attached to the base such that a rotatable portion of the control interface is rotatable around the base; a printed circuit board configured to be disposed in a cavity defined by the control interface; a wireless communication circuit; and a control circuit that is responsive to the control interface and is communicatively coupled to the wireless communication circuit, the control circuit configured to, in response to receiving an input signal from the control interface, cause the wireless communication circuit to transmit a control signal that causes an adjustment of an amount of power delivered to the electrical load, 
wherein the body of the base is further configured to, when the remote control device is mounted over the light switch with the switch actuator in the first position, receive a battery and a portion of the switch actuator such that the battery is disposed in a space vacated by the switch actuator when the switch actuator is operated from the second position to the first position. 2. The remote control device of claim 1, wherein the control circuit is configured to translate a force applied to the rotatable portion of the control interface into the control signal. 3. The remote control device of claim 2, wherein when the force is a rotational force, the control signal is indicative of a change in the amount of power delivered to the electrical load. 4. The remote control device of claim 3, wherein if the rotational force causes the rotatable portion to rotate a distance that does not exceed a predetermined distance, the control signal is indicative of changing the amount of power delivered to the electrical load by a predetermined amount. 5. The remote control device of claim 3, wherein if the rotational force causes the rotatable portion to rotate a distance that exceeds a predetermined distance, the control signal is indicative of continuously changing the amount of power delivered to the electrical load. 6. The remote control device of claim 2, wherein the control interface is configured to operably attach to the base such that the rotatable portion is resiliently biasable toward the base. 7. The remote control device of claim 6, wherein when the force causes the rotatable portion to be biased toward the base, the control signal is indicative of power being applied to, or power being removed from, the electrical load. 8. 
The remote control device of claim 6, wherein the control circuit is further configured to, when the force causes the rotatable portion to be biased toward the base for a predetermined amount of time, initiate a configuration procedure to associate the remote control device with a load control device that is configured to control the amount of power delivered to the electrical load. 9. The remote control device of claim 1, wherein the rotatable portion of the control interface comprises a disc-shaped front wall that defines a front surface, and an annular side wall that extends rearward relative to the front wall to define the cavity, the side wall configured to encircle the printed circuit board when the printed circuit board is disposed in the cavity. 10. The remote control device of claim 9, wherein when disposed in the cavity, the printed circuit board is located between the front wall and the battery. 11. The remote control device of claim 1, wherein the base is configured to engage with the switch actuator when the remote control device is mounted over the light switch. 12. The remote control device of claim 11, wherein the base defines an opening that is configured to receive the portion of the switch actuator, and wherein the base includes a deflectable arm that extends into the opening, the arm configured to engage with the switch actuator, and further includes a resilient strap that abuts the arm and is configured to bias the arm against the switch actuator. 13. The remote control device of claim 1, wherein the base defines a recess that is configured to at least partially receive the battery. 14. The remote control device of claim 13, wherein the base further defines an opening that extends therethrough, the opening configured to receive the portion of the switch actuator, the opening located adjacent to the recess such that the switch actuator does not interfere with the battery when the remote control device is mounted over the light switch. 15. 
The remote control device of claim 1, wherein the base is configured to, when the remote control device is mounted over the light switch, deter movement of the switch actuator when force is applied to the rotatable portion. 16. The remote control device of claim 1, wherein the control interface comprises an actuator, and wherein the control interface is configured to generate the input signal in response to actuation of the actuator. 17. The remote control device of claim 16, wherein the control circuit is further configured to cause the amount of power delivered to the electrical load to be adjusted in response to actuation of the actuator. 18. The remote control device of claim 16, wherein the control circuit is further configured to cause the electrical load to turn on in response to actuation of the actuator. 19. The remote control device of claim 16, wherein the control circuit is further configured to cause the electrical load to turn off in response to actuation of the actuator. 20. The remote control device of claim 1, wherein the printed circuit board is configured to be disposed in the cavity of the control interface such that the switch actuator of the light switch extends through a plane of the printed circuit board when the remote control device is mounted over the light switch. 21. The remote control device of claim 20, wherein the printed circuit board comprises an opening that extends therethrough, the opening configured to receive the switch actuator when the remote control device is mounted over the light switch. 22. The remote control device of claim 1, wherein a rear side of the printed circuit board is configured to removably retain the battery. 23. 
A remote control device configured to be mounted over an installed light switch, the light switch having a switch actuator that extends through a faceplate of the light switch, the switch actuator operable between a first position and a second position to control whether power is delivered to an electrical load, the remote control device comprising: a control interface configured to, when the remote control device is mounted over the light switch with the switch actuator in the first position, receive a battery and a portion of the switch actuator, such that the battery is disposed in a space vacated by the switch actuator when the switch actuator is operated from the second position to the first position and such that the switch actuator does not interfere with the battery; a wireless communication circuit; and a control circuit that is responsive to the control interface and communicatively coupled to the wireless communication circuit, the control circuit configured to, in response to receiving an input signal from the control interface, cause the wireless communication circuit to transmit a control signal that causes an adjustment of an amount of power delivered to the electrical load. 24. The remote control device of claim 23, further comprising: a base that is configured to at least partially receive the switch actuator when the remote control device is mounted over the light switch, wherein the control interface is configured to be operably coupled to the base and moveable relative to the base, and wherein the control interface is configured to generate the input signal in response to movement of the control interface relative to the base. 25. The remote control device of claim 24, wherein the control interface comprises a rotatable portion that is rotatable around the base, and wherein the control interface is configured to generate the input signal in response to rotation of the rotatable portion. 26. 
The remote control device of claim 25, further comprising a printed circuit board, wherein the printed circuit board defines an opening that extends therethrough, the opening configured to receive the switch actuator when the remote control device is mounted over the light switch. 27. The remote control device of claim 24, wherein the control interface comprises a slider that is configured to slide relative to the base, and wherein the control interface is configured to generate the input signal in response to translation of the slider. 28. The remote control device of claim 23, wherein the control interface comprises an actuator, and wherein the control interface is configured to generate the input signal in response to actuation of the actuator. 29. The remote control device of claim 28, wherein the control circuit is further configured to cause the amount of power delivered to the electrical load to be adjusted in response to actuation of the actuator. 30. A remote control device configured to be mounted over an installed light switch, the light switch having a switch actuator that extends through a faceplate of the light switch, the switch actuator configured to control whether power is delivered to an electrical load, the remote control device comprising: a control interface having a rotatable portion that includes a front wall and an annular side wall, the front wall and side wall defining a cavity; a base to which the control interface is configured to be operably coupled such that the rotatable portion is rotatable around the base, the base configured to, when the remote control device is mounted over the light switch, receive a battery and a portion of the switch actuator such that the switch actuator does not interfere with the battery. 31. 
The remote control device of claim 30, further comprising a printed circuit board that is configured to be disposed in the cavity of the rotatable portion such that the printed circuit board is located between the battery and the front wall. 32. The remote control device of claim 31, further comprising a control circuit that is mounted to the printed circuit board and operably coupled to the rotatable portion of the control interface, the control circuit configured to translate a force applied to the rotatable portion into a control signal that causes an adjustment of an amount of power delivered to the electrical load. 33. The remote control device of claim 32, further comprising a wireless communication circuit that is mounted to the printed circuit board and communicatively coupled to the control circuit, wherein the control circuit is further configured to cause the wireless communication circuit to transmit the control signal. 34. The remote control device of claim 32, wherein the control circuit is further configured to cause the amount of power delivered to the electrical load to be adjusted in response to rotation of the rotatable portion. 35. The remote control device of claim 31, wherein the printed circuit board is configured such that when the printed circuit board is disposed in the cavity, the switch actuator of the light switch extends through a plane of the printed circuit board when the remote control device is mounted over the light switch. 36. The remote control device of claim 35, wherein the printed circuit board defines an opening that extends therethrough, the opening configured to receive the switch actuator when the remote control device is mounted over the light switch. 37. The remote control device of claim 31, wherein a rear side of the printed circuit board is configured to removably retain the battery.
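Claims 4 and 5 of this record describe two rotation-to-command modes: a rotation within a predetermined distance yields a discrete change by a predetermined amount, while a longer rotation yields a continuous change. A minimal sketch of that control-circuit logic follows; the threshold, step size, and function name are assumptions for illustration, not the claimed implementation.

```python
STEP_THRESHOLD_DEG = 15   # assumed "predetermined distance" of rotation
STEP_AMOUNT = 5           # assumed "predetermined amount" (percent dimming)

def rotation_to_command(degrees):
    """Translate a rotation of the rotatable portion into a control signal.

    Short rotation -> one fixed step (claim 4);
    long rotation  -> continuous ramp in that direction (claim 5).
    """
    direction = 1 if degrees >= 0 else -1
    if abs(degrees) <= STEP_THRESHOLD_DEG:
        return ("step", direction * STEP_AMOUNT)
    return ("continuous", direction)

print(rotation_to_command(10))    # ('step', 5)
print(rotation_to_command(-40))   # ('continuous', -1)
```

In the claimed device, the resulting tuple would be handed to the wireless communication circuit for transmission to the load control device.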
2,600
9,883
9,883
14,936,986
2,699
Aspects comprise systems implementing 3-D graphics processing functionality in a multiprocessing system. Control flow structures are used in scheduling instances of computation in the multiprocessing system, where different points in the control flow structure serve as points where deferral of some instances of computation can be performed in favor of scheduling other instances of computation. In some examples, the control flow structure identifies particular tasks, such as intersection testing of a particular portion of an acceleration structure, and a particular element of shading code. In some examples, the aspects are used in 3-D graphics processing systems that can perform ray tracing based rendering.
1. A method of rendering a plurality of images of a 3-D scene from a respective plurality of perspectives by ray tracing in a computer system, the method comprising: receiving shape data defining shapes to be rendered in the 3-D scene from said plurality of perspectives; defining rays for the plurality of perspectives to be tested for intersection in the 3-D scene; processing rays from the different perspectives together, the processing comprising at least one of: testing the rays against common geometric shapes within the 3-D scene, and performing shading operations using a common shader module. 2. The method of claim 1 further comprising collecting the rays from the different perspectives together for processing. 3. The method of claim 2 wherein the rays from the different perspectives are collected together with respect to commonality of intersection testing or shading to be performed. 4. The method of claim 1 wherein each perspective acts as an origin for at least some of the rays, and wherein the rays are processed without regard to their origins. 5. The method of claim 1 wherein the perspectives are attributed to a plurality of camera positions. 6. The method of claim 1 wherein the shape data comprises at least one of: (i) primitive data defining primitives in the 3-D scene, and (ii) elements of an acceleration structure each identifying respective selections of the primitives. 7. The method of claim 1 wherein the rays are defined by one or more shader modules, wherein a shader module can be invoked for each pixel element within an output buffer. 8. The method of claim 7 further comprising providing a shader invocation with information regarding each perspective. 9. 
A 3-D graphics processing system configured to render a plurality of images of a 3-D scene from a respective plurality of perspectives, comprising: an input configured to receive data defining shapes to be rendered in the 3-D scene from said plurality of perspectives; a memory configured to receive ray data defining rays for the plurality of perspectives to be tested for intersection in the 3-D scene; and a plurality of computation units, collectively capable of performing ray tracing operations to process rays from the different perspectives together, the ray tracing operations comprising at least one of: testing the rays against common geometric shapes within the 3-D scene, and performing shading operations using a common shader module. 10. The system of claim 9 further comprising a collection controller configured to collect the rays of the different perspectives together and to pass collections of rays to the computation units for processing. 11. The system of claim 10 wherein the collection controller is configured to collect the rays from the different perspectives together with respect to commonality of intersection testing or shading to be performed. 12. The system of claim 9 wherein the computation units are configured to apply shading by executing shading operations defined by portions of shader code, wherein portions of shader code are capable of branching. 13. The system of claim 12 further comprising a collection controller configured to collect the rays of the different perspectives together, wherein the collection controller is configured to collect rays at collection points defined based on branching within the portions of shader code. 14. The system of claim 9 further comprising an output buffer configured to store results of processing performed by the computation units. 15. 
The system of claim 9 wherein at least one of said computation units comprises a respective working memory which comprises registers configured to store data for use in ray tracing operations performed by that computation unit. 16. The system of claim 9 wherein the plurality of images are rendered for a holographic imaging system. 17. The system of claim 9 further comprising at least one shader module which can be invoked for each pixel element within an output buffer, the at least one shader module being configured to define the rays and to pass the ray data defining the rays to the memory. 18. Apparatus for rendering a plurality of images of a 3-D scene from a respective plurality of perspectives, the apparatus comprising: an input configured to receive data defining shapes to be rendered in the 3-D scene; a cache configured to receive ray data defining rays for the plurality of perspectives to be tested for intersection in the 3-D scene; and a plurality of processor cores configured to perform ray tracing operations to process rays from the different perspectives together, the ray tracing operations comprising at least one of: testing the rays against common geometric shapes within the 3-D scene, and performing shading operations using a common shader module. 19. The apparatus of claim 18 further comprising a collection controller configured to collect the rays of the different perspectives together and to pass collections of rays to the processor cores for processing. 20. The apparatus of claim 19 wherein the collection controller is configured to collect the rays from the different perspectives together with respect to commonality of intersection testing or shading to be performed.
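Claims 2-3 and 10-11 describe collecting rays from different perspectives into shared collections keyed by the commonality of the work they need next (the same acceleration-structure node to test, or the same shader to run), regardless of which camera originated them. A minimal sketch of such a collection step follows; the `Ray` type and the string task keys are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Ray:
    perspective: int   # which camera/perspective this ray originated from
    next_task: str     # e.g. an acceleration-node id or a shader module name

def collect_rays(rays):
    """Group rays by their next task, ignoring originating perspective."""
    collections = defaultdict(list)
    for ray in rays:
        collections[ray.next_task].append(ray)
    return dict(collections)

# Rays from three perspectives; those sharing a task get processed together.
rays = [Ray(0, "node_A"), Ray(1, "node_A"), Ray(0, "shade_metal"),
        Ray(1, "shade_metal"), Ray(2, "node_B")]
grouped = collect_rays(rays)
print(sorted(grouped))         # ['node_A', 'node_B', 'shade_metal']
print(len(grouped["node_A"]))  # 2 (rays from perspectives 0 and 1)
```

Each resulting collection can then be dispatched to a computation unit as a unit, so a single traversal of `node_A` or a single invocation of the shared shader serves rays from every perspective at once.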
Aspects comprise systems implementing 3-D graphics processing functionality in a multiprocessing system. Control flow structures are used in scheduling instances of computation in the multiprocessing system, where different points in the control flow structure serve as points where deferral of some instances of computation can be performed in favor of scheduling other instances of computation. In some examples, the control flow structure identifies particular tasks, such as intersection testing of a particular portion of an acceleration structure, and a particular element of shading code. In some examples, the aspects are used in 3-D graphics processing systems that can perform ray tracing based rendering.1. A method of rendering a plurality of images of a 3-D scene from a respective plurality of perspectives by ray tracing in a computer system, the method comprising: receiving shape data defining shapes to be rendered in the 3-D scene from said plurality of perspectives; defining rays for the plurality of perspectives to be tested for intersection in the 3-D scene; processing rays from the different perspectives together, the processing comprising at least one of: testing the rays against common geometric shapes within the 3-D scene, and performing shading operations using a common shader module. 2. The method of claim 1 further comprising collecting the rays from the different perspectives together for processing. 3. The method of claim 2 wherein the rays from the different perspectives are collected together with respect to commonality of intersection testing or shading to be performed. 4. The method of claim 1 wherein each perspective acts as an origin for at least some of the rays, and wherein the rays are processed without regard to their origins. 5. The method of claim 1 wherein the perspectives are attributed to a plurality of camera positions. 6. 
The method of claim 1 wherein the shape data comprises at least one of: (i) primitive data defining primitives in the 3-D scene, and (ii) elements of an acceleration structure each identifying respective selections of the primitives. 7. The method of claim 1 wherein the rays are defined by one or more shader modules, wherein a shader module can be invoked for each pixel element within an output buffer. 8. The method of claim 7 further comprising providing a shader invocation with information regarding each perspective. 9. A 3-D graphics processing system configured to render a plurality of images of a 3-D scene from a respective plurality of perspectives, comprising: an input configured to receive data defining shapes to be rendered in the 3-D scene from said plurality of perspectives; a memory configured to receive ray data defining rays for the plurality of perspectives to be tested for intersection in the 3-D scene; and a plurality of computation units, collectively capable of performing ray tracing operations to process rays from the different perspectives together, the ray tracing operations comprising at least one of: testing the rays against common geometric shapes within the 3-D scene, and performing shading operations using a common shader module. 10. The system of claim 9 further comprising a collection controller configured to collect the rays of the different perspectives together and to pass collections of rays to the computation units for processing. 11. The system of claim 10 wherein the collection controller is configured to collect the rays from the different perspectives together with respect to commonality of intersection testing or shading to be performed. 12. The system of claim 9 wherein the computation units are configured to apply shading by executing shading operations defined by portions of shader code, wherein portions of shader code are capable of branching. 13. 
The system of claim 12 further comprising a collection controller configured to collect the rays of the different perspectives together, wherein the collection controller is configured to collect rays at collection points defined based on branching within the portions of shader code. 14. The system of claim 9 further comprising an output buffer configured to store results of processing performed by the computation units. 15. The system of claim 9 wherein at least one of said computation units comprises a respective working memory which comprises registers configured to store data for use in ray tracing operations performed by that computation unit. 16. The system of claim 9 wherein the plurality of images are rendered for a holographic imaging system. 17. The system of claim 9 further comprising at least one shader module which can be invoked for each pixel element within an output buffer, the at least one shader module being configured to define the rays and to pass the ray data defining the rays to the memory. 18. Apparatus for rendering a plurality of images of a 3-D scene from a respective plurality of perspectives, the apparatus comprising: an input configured to receive data defining shapes to be rendered in the 3-D scene; a cache configured to receive ray data defining rays for the plurality of perspectives to be tested for intersection in the 3-D scene; and a plurality of processor cores configured to perform ray tracing operations to process rays from the different perspectives together, the ray tracing operations comprising at least one of: testing the rays against common geometric shapes within the 3-D scene, and performing shading operations using a common shader module. 19. The apparatus of claim 18 further comprising a collection controller configured to collect the rays of the different perspectives together and to pass collections of rays to the processor cores for processing. 20. 
The apparatus of claim 19 wherein the collection controller is configured to collect the rays from the different perspectives together with respect to commonality of intersection testing or shading to be performed.
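The claims above describe a "collection controller" that collects rays from different perspectives together with respect to commonality of the intersection testing or shading to be performed, so that batches can be dispatched to computation units regardless of which camera position originated each ray. A minimal sketch of that grouping step; all names, data shapes, and identifiers here are hypothetical illustrations, not drawn from the patent:

```python
# Hypothetical sketch of the claimed collection step: rays from different
# perspectives are grouped by the geometric shape they must next be tested
# against, so one batch covers rays from several perspectives at once.
from collections import defaultdict

def collect_rays(rays):
    """Group rays by the shape they target, ignoring their perspective of origin."""
    collections = defaultdict(list)
    for ray in rays:
        collections[ray["shape_id"]].append(ray)
    return dict(collections)

rays = [
    {"perspective": 0, "shape_id": "box_a"},
    {"perspective": 1, "shape_id": "box_a"},
    {"perspective": 0, "shape_id": "tri_7"},
    {"perspective": 2, "shape_id": "box_a"},
]
batches = collect_rays(rays)
# Rays from perspectives 0, 1, and 2 land in a single "box_a" batch,
# illustrating processing "without regard to their origins" (claim 4).
```

The same keying idea extends to shading: grouping by shader module (or by a branch point within shader code, per claims 12-13) instead of by shape.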
2,600
9,884
9,884
14,036,728
2,675
An aspect provides a method, including: receiving image data from a visual sensor of an information handling device; receiving audio data from one or more microphones of the information handling device; identifying, using one or more processors, human speech in the audio data; identifying, using the one or more processors, a pattern of visual features in the image data associated with speaking; matching, using the one or more processors, the human speech in the audio data with the pattern of visual features in the image data associated with speaking; selecting, using the one or more processors, a primary speaker from among matched human speech; assigning control to the primary speaker; and performing one or more actions based on audio input of the primary speaker. Other aspects are described and claimed.
1. A method, comprising: receiving image data from a visual sensor of an information handling device; receiving audio data from one or more microphones of the information handling device; identifying, using one or more processors, human speech in the audio data; identifying, using the one or more processors, a pattern of visual features in the image data associated with speaking; matching, using the one or more processors, the human speech in the audio data with the pattern of visual features in the image data associated with speaking; selecting, using the one or more processors, a primary speaker from among matched human speech; assigning control to the primary speaker; and performing one or more actions based on audio input of the primary speaker. 2. The method of claim 1, wherein the one or more actions based on the primary speaker identified comprise providing a visual indication of the primary speaker identified. 3. The method of claim 1, further comprising: processing the matched human speech in a virtual assistant application; wherein the one or more actions based on the primary speaker identified comprise performing an action via the virtual assistant. 4. The method of claim 3, wherein the action performed via the virtual assistant comprises execution of a command derived from processing the matched human speech. 5. The method of claim 1, further comprising: activating a virtual assistant of the information handling device responsive to identifying a primary speaker; wherein the one or more actions based on the primary speaker identified comprises thereafter performing an action via the virtual assistant. 6. The method of claim 1, further comprising: identifying, using the one or more processors, newly matched human speech as a new primary speaker; and performing one or more actions based on the new primary speaker identified. 7. 
The method of claim 1, wherein the receiving audio data from one or more microphones of the information handling device comprises receiving audio data from two or more microphones of the information handling device; and wherein the identifying a pattern of visual features in the image data associated with speaking comprises utilizing directional information in the audio data received to identify the pattern of visual features associated with speaking. 8. The method of claim 1, wherein the identifying a pattern of visual features in the image data associated with speaking comprises utilizing pattern recognition to identify the pattern of visual features associated with speaking. 9. The method of claim 8, wherein the pattern of visual features in the image data associated with speaking comprise facial movement patterns. 10. The method of claim 9, wherein the identifying a pattern of visual features in the image data associated with speaking comprises filtering out facial movement patterns not associated with speaking. 11. An information handling device, comprising: a visual sensor; one or more microphones; one or more processors; and a memory storing code executable by the one or more processors to: receive image data from the visual sensor; receive audio data from the one or more microphones; identify human speech in the audio data; identify a pattern of visual features in the image data associated with speaking; match the human speech in the audio data with the pattern of visual features in the image data associated with speaking; select a primary speaker from among matched human speech; assign control to the primary speaker; and perform one or more actions based on audio input of the primary speaker. 12. The information handling device of claim 11, wherein the one or more actions based on the primary speaker identified comprise providing a visual indication of the primary speaker identified. 13. 
The information handling device of claim 11, wherein the code is further executable by the one or more processors to: process the matched human speech in a virtual assistant application; wherein the one or more actions based on the primary speaker identified comprise performing an action via the virtual assistant. 14. The information handling device of claim 13, wherein the action performed via the virtual assistant comprises execution of a command derived from processing the matched human speech. 15. The information handling device of claim 11, wherein the code is further executable by the one or more processors to: activate a virtual assistant of the information handling device responsive to identifying a primary speaker; wherein the one or more actions based on the primary speaker identified comprises thereafter performing an action via the virtual assistant. 16. The information handling device of claim 11, wherein the code is further executable by the one or more processors to: identify newly matched human speech as a new primary speaker; and perform one or more actions based on the new primary speaker identified. 17. The information handling device of claim 11, wherein to receive audio data from one or more microphones of the information handling device comprises receiving audio data from two or more microphones of the information handling device; and wherein to identify a pattern of visual features in the image data associated with speaking comprises utilizing directional information in the audio data received to identify the pattern of visual features associated with speaking. 18. The information handling device of claim 11, wherein to identify a pattern of visual features in the image data associated with speaking comprises utilizing pattern recognition to identify the pattern of visual features associated with speaking. 19. 
The information handling device of claim 18, wherein the pattern of visual features in the image data associated with speaking comprise facial movement patterns. 20. A program product, comprising: a computer readable storage medium storing instructions executable by one or more processors, the instructions comprising: computer readable program code configured to receive image data from a visual sensor of an information handling device; computer readable program code configured to receive audio data from one or more microphones of the information handling device; computer readable program code configured to identify, using one or more processors, human speech in the audio data; computer readable program code configured to identify, using the one or more processors, a pattern of visual features in the image data associated with speaking; computer readable program code configured to match, using the one or more processors, the human speech in the audio data with the pattern of visual features in the image data associated with speaking; computer readable program code configured to select, using the one or more processors, a primary speaker from among matched human speech; computer readable program code configured to assign control to the primary speaker; and computer readable program code configured to perform one or more actions based on audio input of the primary speaker. 21. 
An information handling device, comprising: a visual sensor; two or more microphones; one or more processors; and a memory storing code executable by the one or more processors to: receive image data from the visual sensor; receive audio data from the two or more microphones; identify human speech in the audio data; identify a pattern of visual features in the image data associated with speaking utilizing directional information in the audio data received to identify the pattern of visual features associated with speaking; match the human speech in the audio data with the pattern of visual features in the video data associated with speaking; identify matched human speech as a primary speaker; and perform one or more actions based on the primary speaker identified. 22. The information handling device of claim 21, wherein the code is further executable by the one or more processors to: identify newly matched human speech as a new primary speaker; and perform one or more actions based on the new primary speaker identified.
An aspect provides a method, including: receiving image data from a visual sensor of an information handling device; receiving audio data from one or more microphones of the information handling device; identifying, using one or more processors, human speech in the audio data; identifying, using the one or more processors, a pattern of visual features in the image data associated with speaking; matching, using the one or more processors, the human speech in the audio data with the pattern of visual features in the image data associated with speaking; selecting, using the one or more processors, a primary speaker from among matched human speech; assigning control to the primary speaker; and performing one or more actions based on audio input of the primary speaker. Other aspects are described and claimed.1. A method, comprising: receiving image data from a visual sensor of an information handling device; receiving audio data from one or more microphones of the information handling device; identifying, using one or more processors, human speech in the audio data; identifying, using the one or more processors, a pattern of visual features in the image data associated with speaking; matching, using the one or more processors, the human speech in the audio data with the pattern of visual features in the image data associated with speaking; selecting, using the one or more processors, a primary speaker from among matched human speech; assigning control to the primary speaker; and performing one or more actions based on audio input of the primary speaker. 2. The method of claim 1, wherein the one or more actions based on the primary speaker identified comprise providing a visual indication of the primary speaker identified. 3. The method of claim 1, further comprising: processing the matched human speech in a virtual assistant application; wherein the one or more actions based on the primary speaker identified comprise performing an action via the virtual assistant. 4. 
The method of claim 3, wherein the action performed via the virtual assistant comprises execution of a command derived from processing the matched human speech. 5. The method of claim 1, further comprising: activating a virtual assistant of the information handling device responsive to identifying a primary speaker; wherein the one or more actions based on the primary speaker identified comprises thereafter performing an action via the virtual assistant. 6. The method of claim 1, further comprising: identifying, using the one or more processors, newly matched human speech as a new primary speaker; and performing one or more actions based on the new primary speaker identified. 7. The method of claim 1, wherein the receiving audio data from one or more microphones of the information handling device comprises receiving audio data from two or more microphones of the information handling device; and wherein the identifying a pattern of visual features in the image data associated with speaking comprises utilizing directional information in the audio data received to identify the pattern of visual features associated with speaking. 8. The method of claim 1, wherein the identifying a pattern of visual features in the image data associated with speaking comprises utilizing pattern recognition to identify the pattern of visual features associated with speaking. 9. The method of claim 8, wherein the pattern of visual features in the image data associated with speaking comprise facial movement patterns. 10. The method of claim 9, wherein the identifying a pattern of visual features in the image data associated with speaking comprises filtering out facial movement patterns not associated with speaking. 11. 
An information handling device, comprising: a visual sensor; one or more microphones; one or more processors; and a memory storing code executable by the one or more processors to: receive image data from the visual sensor; receive audio data from the one or more microphones; identify human speech in the audio data; identify a pattern of visual features in the image data associated with speaking; match the human speech in the audio data with the pattern of visual features in the image data associated with speaking; select a primary speaker from among matched human speech; assign control to the primary speaker; and perform one or more actions based on audio input of the primary speaker. 12. The information handling device of claim 11, wherein the one or more actions based on the primary speaker identified comprise providing a visual indication of the primary speaker identified. 13. The information handling device of claim 11, wherein the code is further executable by the one or more processors to: process the matched human speech in a virtual assistant application; wherein the one or more actions based on the primary speaker identified comprise performing an action via the virtual assistant. 14. The information handling device of claim 13, wherein the action performed via the virtual assistant comprises execution of a command derived from processing the matched human speech. 15. The information handling device of claim 11, wherein the code is further executable by the one or more processors to: activate a virtual assistant of the information handling device responsive to identifying a primary speaker; wherein the one or more actions based on the primary speaker identified comprises thereafter performing an action via the virtual assistant. 16. 
The information handling device of claim 11, wherein the code is further executable by the one or more processors to: identify newly matched human speech as a new primary speaker; and perform one or more actions based on the new primary speaker identified. 17. The information handling device of claim 11, wherein to receive audio data from one or more microphones of the information handling device comprises receiving audio data from two or more microphones of the information handling device; and wherein to identify a pattern of visual features in the image data associated with speaking comprises utilizing directional information in the audio data received to identify the pattern of visual features associated with speaking. 18. The information handling device of claim 11, wherein to identify a pattern of visual features in the image data associated with speaking comprises utilizing pattern recognition to identify the pattern of visual features associated with speaking. 19. The information handling device of claim 18, wherein the pattern of visual features in the image data associated with speaking comprise facial movement patterns. 20. 
A program product, comprising: a computer readable storage medium storing instructions executable by one or more processors, the instructions comprising: computer readable program code configured to receive image data from a visual sensor of an information handling device; computer readable program code configured to receive audio data from one or more microphones of the information handling device; computer readable program code configured to identify, using one or more processors, human speech in the audio data; computer readable program code configured to identify, using the one or more processors, a pattern of visual features in the image data associated with speaking; computer readable program code configured to match, using the one or more processors, the human speech in the audio data with the pattern of visual features in the image data associated with speaking; computer readable program code configured to select, using the one or more processors, a primary speaker from among matched human speech; computer readable program code configured to assign control to the primary speaker; and computer readable program code configured to perform one or more actions based on audio input of the primary speaker. 21. 
An information handling device, comprising: a visual sensor; two or more microphones; one or more processors; and a memory storing code executable by the one or more processors to: receive image data from the visual sensor; receive audio data from the two or more microphones; identify human speech in the audio data; identify a pattern of visual features in the image data associated with speaking utilizing directional information in the audio data received to identify the pattern of visual features associated with speaking; match the human speech in the audio data with the pattern of visual features in the video data associated with speaking; identify matched human speech as a primary speaker; and perform one or more actions based on the primary speaker identified. 22. The information handling device of claim 21, wherein the code is further executable by the one or more processors to: identify newly matched human speech as a new primary speaker; and perform one or more actions based on the new primary speaker identified.
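The claims above match human speech in the audio data with visual "speaking" patterns in the image data, then select a primary speaker from among the matched speech. A minimal sketch of that match-and-select flow; the scoring policy, field names, and data are invented for illustration and are not specified by the patent:

```python
# Hypothetical sketch of the claimed pipeline: speech segments are matched
# against faces that visually appear to be speaking, and one matched
# speaker is selected as primary (here, the loudest - one possible policy).

def match_speakers(speech_segments, speaking_faces):
    """Keep only speech that co-occurs with a visual speaking pattern."""
    return [s for s in speech_segments if s["face_id"] in speaking_faces]

def select_primary(matched):
    """Pick the highest-energy matched speaker as primary, if any."""
    return max(matched, key=lambda s: s["energy"])["face_id"] if matched else None

speech = [
    {"face_id": "A", "energy": 0.7},
    {"face_id": "B", "energy": 0.9},
    {"face_id": "C", "energy": 0.4},  # speech with no visibly speaking face
]
speaking_faces = {"A", "B"}  # faces whose mouth movement matched speaking
matched = match_speakers(speech, speaking_faces)
primary = select_primary(matched)
# Speaker C is filtered out by the visual match; control would then be
# assigned to the primary speaker and commands taken from that audio only.
```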
2,600
9,885
9,885
14,489,838
2,636
Systems and methods for fast restoration in a network using a control plane include detecting a failure on a link associated with the node; and providing failure information through in-band data path overhead of an affected connection, wherein the in-band data path overhead is sent over a fast path, wherein the failure information is received at an originating node of the affected connection via the fast path, prior to the originating node receiving control plane signaling via a slow path relative to the fast path.
1. A method, by a node in a network using a control plane, for fast restoration in the network, the method comprising: detecting a failure on a link associated with the node; and providing failure information through in-band data path overhead of an affected connection, wherein the in-band data path overhead is sent over a fast path, wherein the failure information is received at an originating node of the affected connection via the fast path, prior to the originating node receiving control plane signaling via a slow path relative to the fast path. 2. The method of claim 1, further comprising generating and forwarding the control plane signaling based on the failure, wherein the control plane signaling is sent over the slow path. 3. The method of claim 1, wherein a restoration procedure is initiated, at the originating node, in the control plane responsive to receipt of the failure information from the fast path, prior to the originating node receiving the control plane signaling via the slow path. 4. The method of claim 3, wherein the restoration procedure excludes a node and a link associated with the failure information, and wherein the node and the link are excluded since routing updates in the slow path are not available at the originating node, upon receiving the failure information in the fast path. 5. The method of claim 1, wherein one or more intermediate nodes of the affected connection are configured to receive the failure information via the fast path, to parse and pass the failure information to the control plane operating at each of the one or more intermediate nodes, and to perform a first action based on the received failure information. 6. The method of claim 5, wherein the first action is releasing local resources associated with the affected connection, and forwarding routing updates related to the released local resources via the slow path. 7. 
The method of claim 5, wherein the first action is releasing local resources associated with the affected connection at an expiration of a hold-off period prior to receiving information from the originating node, or performing a second action based on the information from the originating node, responsive to receiving the information from the originating node within the hold-off period. 8. The method of claim 5, wherein one or more intermediate nodes are configured to generate and forward the control plane signaling via the slow path, upon receiving the failure information, to adjacent nodes that do not support the fast path. 9. The method of claim 1, wherein the originating node is configured to squelch the failure information in the overhead. 10. The method of claim 1, wherein the fast path operates in real-time via injection of the failure information in the data path overhead upon detection of the failure and is negligibly impacted in its time delay by a number of intermediate nodes between the node and the originating node, and wherein the slow path operates in software based on processing and forwarding the control plane signaling sequentially through the intermediate nodes to the originating node and is delayed based on the number of the intermediate nodes. 11. The method of claim 1, wherein the affected connection utilizes Optical Transport Network (OTN). 12. The method of claim 1, wherein the affected connection utilizes Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. 13. The method of claim 12, wherein the failure information is inserted in either forward or backward operator-specific fields in the FTFL reporting communication channel bytes of the overhead, based on a direction of the failure. 14. 
A node, in a network using a control plane, configured for providing fast restoration in the network, the node comprising: one or more line modules configured to inject information in overhead on connections; and a controller communicatively coupled to the one or more line modules, wherein the controller is configured to operate a distributed control plane through a communications channel in the overhead; wherein, responsive to a failure on a link, the one or more line modules are configured to inject failure information identifying the failure in the overhead of each one of affected connections, over a fast path, and wherein, responsive to the failure on the link, the controller is also configured to generate and forward control plane signaling towards originating nodes of the affected connections over a slow path relative to the fast path. 15. The node of claim 14, wherein a restoration procedure is initiated in the control plane, responsive to the fast path prior to the originating node receiving the control plane signaling via the slow path. 16. The node of claim 15, wherein the restoration procedure excludes a node and a link associated with the failure information, wherein the node and the link are excluded since routing updates in the slow path are not available at the originating node upon receiving the information in the fast path. 17. The node of claim 14, wherein the affected connection utilizes Optical Transport Network (OTN). 18. The node of claim 14, wherein the affected connection utilizes Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. 19. 
A network, comprising: a plurality of nodes interconnected by a plurality of links; and a control plane operating between the plurality of nodes; wherein, responsive to detecting a failure on one link of the plurality of links, nodes associated with the link are configured to generate and forward control plane signaling based on the failure over a slow path and inject failure information based on the failure in overhead of affected connections over a fast path relative to the slow path, and wherein an originating node of the affected connection is configured to receive the failure information via the fast path prior to receiving the information via the slow path. 20. The network of claim 19, wherein the affected connection utilizes Optical Transport Network (OTN) and the information based on the failure is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead.
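Claim 10 above distinguishes the two notification paths by their delay behavior: the fast path injects failure information into in-band overhead and is negligibly affected by hop count, while the slow path processes and forwards control plane signaling sequentially through each intermediate node. A toy timing model of that distinction; the delay constants are illustrative only, not values from the patent:

```python
# Hypothetical timing sketch of the fast-path / slow-path contrast:
# in-band overhead propagates at roughly constant delay, while control
# plane signaling accrues software-processing delay at every hop.

FAST_PATH_DELAY_MS = 1.0      # in-band overhead: near-constant in hop count
SLOW_PATH_PER_HOP_MS = 50.0   # per-node control plane processing delay

def fast_path_delay(num_hops):
    """Delay to the originating node via in-band overhead (hop-independent)."""
    return FAST_PATH_DELAY_MS

def slow_path_delay(num_hops):
    """Delay via sequential control plane signaling through each hop."""
    return SLOW_PATH_PER_HOP_MS * num_hops

hops = 6
# The originating node receives the failure information via the fast path
# first, so it can start restoration before the slow-path signaling lands.
head_start_ms = slow_path_delay(hops) - fast_path_delay(hops)
```

This head start is what lets the originating node initiate restoration (claims 3-4) while excluding the failed node and link, since slow-path routing updates have not yet arrived.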
Systems and methods for fast restoration in a network using a control plane include detecting a failure on a link associated with the node; and providing failure information through in-band data path overhead of an affected connection, wherein the in-band data path overhead is sent over a fast path, wherein the failure information is received at an originating node of the affected connection via the fast path, prior to the originating node receiving control plane signaling via a slow path relative to the fast path.1. A method, by a node in a network using a control plane, for fast restoration in the network, the method comprising: detecting a failure on a link associated with the node; and providing failure information through in-band data path overhead of an affected connection, wherein the in-band data path overhead is sent over a fast path, wherein the failure information is received at an originating node of the affected connection via the fast path, prior to the originating node receiving control plane signaling via a slow path relative to the fast path. 2. The method of claim 1, further comprising generating and forwarding the control plane signaling based on the failure, wherein the control plane signaling is sent over the slow path. 3. The method of claim 1, wherein a restoration procedure is initiated, at the originating node, in the control plane responsive to receipt of the failure information from the fast path, prior to the originating node receiving the control plane signaling via the slow path. 4. The method of claim 3, wherein the restoration procedure excludes a node and a link associated with the failure information, and wherein the node and the link are excluded since routing updates in the slow path are not available at the originating node, upon receiving the failure information in the fast path. 5. 
The method of claim 1, wherein one or more intermediate nodes of the affected connection are configured to receive the failure information via the fast path, to parse and pass the failure information to the control plane operating at each of the one or more intermediate nodes, and to perform a first action based on the received failure information. 6. The method of claim 5, wherein the first action is releasing local resources associated with the affected connection, and forwarding routing updates related to the released local resources via the slow path. 7. The method of claim 5, wherein the first action is releasing local resources associated with the affected connection at an expiration of a hold-off period prior to receiving information from the originating node, or performing a second action based on the information from the originating node, responsive to receiving the information from the originating node within the hold-off period. 8. The method of claim 5, wherein one or more intermediate nodes are configured to generate and forward the control plane signaling via the slow path, upon receiving the failure information, to adjacent nodes that do not support the fast path. 9. The method of claim 1, wherein the originating node is configured to squelch the failure information in the overhead. 10. The method of claim 1, wherein the fast path operates in real-time via injection of the failure information in the data path overhead upon detection of the failure and is negligibly impacted in its time delay by a number of intermediate nodes between the node and the originating node, and wherein the slow path operates in software based on processing and forwarding the control plane signaling sequentially through the intermediate nodes to the originating node and is delayed based on the number of the intermediate nodes. 11. The method of claim 1, wherein the affected connection utilizes Optical Transport Network (OTN). 12. 
The method of claim 1, wherein the affected connection utilizes Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. 13. The method of claim 12, wherein the failure information is inserted in either forward or backward operator-specific fields in the FTFL reporting communication channel bytes of the overhead, based on a direction of the failure. 14. A node, in a network using a control plane, configured for providing fast restoration in the network, the node comprising: one or more line modules configured to inject information in overhead on connections; and a controller communicatively coupled to the one or more line modules, wherein the controller is configured to operate a distributed control plane through a communications channel in the overhead; wherein, responsive to a failure on a link, the one or more line modules are configured to inject failure information identifying the failure in the overhead of each one of affected connections, over a fast path, and wherein, responsive to the failure on the link, the controller is also configured to generate and forward control plane signaling towards originating nodes of the affected connections over a slow path relative to the fast path. 15. The node of claim 14, wherein a restoration procedure is initiated in the control plane, responsive to the fast path prior to the originating node receiving the control plane signaling via the slow path. 16. The node of claim 15, wherein the restoration procedure excludes a node and a link associated with the failure information, wherein the node and the link are excluded since routing updates in the slow path are not available at the originating node upon receiving the information in the fast path. 17. The node of claim 14, wherein the affected connection utilizes Optical Transport Network (OTN). 18. 
The node of claim 14, wherein the affected connection utilizes Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. 19. A network, comprising: a plurality of nodes interconnected by a plurality of links; and a control plane operating between the plurality of nodes; wherein, responsive to detecting a failure on one link of the plurality of links, nodes associated with the link are configured to generate and forward control plane signaling based on the failure over a slow path and inject failure information based on the failure in overhead of affected connections over a fast path relative to the slow path, and wherein an originating node of the affected connection is configured to receive the failure information via the fast path prior to receiving the information via the slow path. 20. The network of claim 19, wherein the affected connection utilizes Optical Transport Network (OTN) and the information based on the failure is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead.
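The core timing claim above is that in-band overhead injection (the fast path) notifies the originating node in a delay that is nearly independent of hop count, while control-plane signaling (the slow path) is processed in software at every intermediate node. A toy Python model makes the comparison concrete; all delay figures here are made-up parameters, not values from the patent or the OTN standard:

```python
def notification_delays(num_intermediate_nodes, per_hop_software_delay,
                        overhead_injection_delay):
    """Toy comparison of the two failure-notification paths.

    Fast path: failure info is injected into the in-band data-path
    overhead, so its delay is modeled as a single constant.
    Slow path: control-plane messages are processed and forwarded in
    software at each hop, so delay grows with the number of
    intermediate nodes between the failure and the originating node.
    Returns (fast_delay, slow_delay) in the same (arbitrary) units.
    """
    fast = overhead_injection_delay
    slow = (num_intermediate_nodes + 1) * per_hop_software_delay
    return fast, slow
```

With, say, ten intermediate nodes, the slow path's delay scales with hop count while the fast path's does not, which is why the claims have the originating node start restoration on fast-path receipt without waiting for slow-path routing updates.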
2,600
9,886
9,886
14,202,722
2,613
Aspects relate to tracing rays in 3-D scenes that comprise objects that are defined by or with implicit geometry. In an example, a trapping element defines a portion of 3-D space in which implicit geometry exists. When a ray is found to intersect a trapping element, a trapping element procedure is executed. The trapping element procedure may comprise marching a ray through a 3-D volume and evaluating a function that defines the implicit geometry for each current 3-D position of the ray. An intersection detected with the implicit geometry may be found concurrently with intersections for the same ray with explicitly-defined geometry, and data describing these intersections may be stored with the ray and resolved.
1. A method of testing a ray for intersection with an implicit surface, comprising: entering a surface of a shell bounding a 3-D volume with a ray, the shell defining a maximum extent for implicitly-defined geometry within the shell; iteratively stepping a current 3-D position of the ray along its path through the 3-D volume defined by the shell; for each current 3-D position, projecting the current 3-D position of the ray to a current 2-D position on an explicitly-defined surface bounded in the shell, producing data for the implicitly-defined geometry using the current 2-D position on the explicitly-defined surface, and characterizing the ray as either hitting or missing the implicitly-defined geometry at the current 3-D position, using the produced data. 2. (canceled) 3. The method of testing a ray for intersection with an implicit surface of claim 1, wherein the producing of the data for the implicitly-defined geometry comprises executing a procedure to determine a height of implicitly-defined geometry for the current 2-D position. 4. The method of testing a ray for intersection with an implicit surface of claim 3, wherein the executing of the procedure comprises evaluating a function that accepts, as input, data relating to the current 2-D position. 5. The method of testing a ray for intersection with an implicit surface of claim 1, further comprising determining that the ray intersects a 3-D bounding volume that encloses the shell, and responsively initiating an implicit geometry intersection testing process. 6. The method of testing a ray for intersection with an implicit surface of claim 1, further comprising setting a step size used in the iterative stepping according to a level of detail indication. 7. 
The method of testing a ray for intersection with an implicit surface of claim 1, further comprising stepping over a pre-defined volumetric portion within the shell responsive to data indicating absence of implicit geometry in that pre-defined volumetric portion. 8. (canceled) 9. The method of testing a ray for intersection with an implicit surface of claim 1, further comprising determining that the ray intersects a 3-D bounding volume that encloses the shell, and projecting an entry point of the ray into the 3-D bounding volume to an entry point of the shell. 10. (canceled) 11. The method of testing a ray for intersection with an implicit surface of claim 10, wherein the interconnected primitives have respective corresponding primitives in a set of source geometry defining the explicitly-defined surface, and the shell comprises sub-volumes defined by surfaces connecting the corresponding primitive in the set of source geometry with its primitive in the mesh of interconnected primitives. 12. The method of testing a ray for intersection with an implicit surface of claim 11, wherein the surfaces connecting primitives in the set of source geometry with the primitives of the mesh are bilinear patches. 13. The method of testing a ray for intersection with an implicit surface of claim 10, wherein the interconnected primitives have respective corresponding primitives in a set of source geometry, and each primitive in the set of source geometry is associated with a respective implicit geometry definition function. 14. The method of testing a ray for intersection with an implicit surface of claim 1, wherein the iterative stepping is performed by skipping over portions of 3-D space within the shell that are enclosed within volume exclusion elements defined within the shell. 15. 
The method of testing a ray for intersection with an implicit surface of claim 1, further comprising passing a level of detail parameter within a data structure for the ray to a procedure that produces the data for the implicitly-defined geometry. 16. The method of testing a ray for intersection with an implicit surface of claim 1, wherein the producing data for the implicitly-defined geometry comprises using the current 2-D position as a parameter in evaluating an expression that determines an extent of implicit geometry at the current 3-D position. 17. The method of testing a ray for intersection with an implicit surface of claim 1, wherein the producing data for the implicitly-defined geometry comprises producing a height for implicit geometry above the current 2-D position, and comparing that height with a height of the ray at the current 3-D position. 18. The method of testing a ray for intersection with an implicit surface of claim 17, wherein the comparing comprises generating a sign bit for a difference between the height of the implicit geometry and the height of the ray, and the characterizing of the ray as either hitting or missing the implicitly-defined geometry at the current 3-D position comprises detecting when the sign bit changes. 19-22. (canceled) 23. The method of testing a ray for intersection with an implicit surface of claim 1, further comprising identifying a first 3-D point at which the ray is found to miss the implicitly defined geometry and a second 3-D point at which the ray is found to hit the implicitly defined geometry, and selecting one of the first 3-D point and the second 3-D point as an origin for a child ray, according to a ray type for the child ray. 24. 
A system for traversing a ray through a 3-D scene having implicitly-defined geometry, comprising: a non-transitory memory storing an acceleration structure comprising definitions of elements that each bound a respective selection of primitives located in the 3-D scene, wherein the definitions of one or more of the elements comprises a trapping element flag; and a ray traversal unit operable to traverse a ray through the acceleration structure by executing a process comprising receiving definitions of the acceleration structure elements, and for each acceleration structure element having a definition indicating that it is a trapping element, initiating execution of a trapping element procedure associated with the trapping element, the trapping element procedure determining whether the ray intersects implicit geometry within a volume defined by the trapping element, and otherwise testing the ray for intersection with the acceleration structure element, and testing explicitly defined geometry within the acceleration structure element, if any. 25. The system for traversing a ray through a 3-D scene having displacement geometry of claim 24, wherein the trapping element procedure is executed on a programmable computation unit. 26. The system for traversing a ray through a 3-D scene having displacement geometry of claim 24, wherein the testing of the ray for intersection with the acceleration structure element is performed in a fixed-function test cell in the ray traversal unit. 27. (canceled) 28. The system for traversing a ray through a 3-D scene having displacement geometry of claim 24, the data comprising intersection location information and primitive identifying information, and selecting an intersection, from a plurality of intersections identified for the ray based first on intersection location and secondarily on the identifying information. 29. 
A system for testing a ray for intersection with an implicit surface, comprising: a cluster of programmable execution units, capable of executing shader code that emits rays for which a closest intersection in a 3-D scene is to be identified, if any; a plurality of test cells configured for respectively testing a ray for intersection with a shape; a plurality of local memories associated with respective test cells, the local memories storing data for rays that are being traversed in the 3-D scene; a non-transitory memory storing an acceleration structure comprising bounding elements, wherein some of the bounding elements are associated with a flag; and a controller operable to control traversal of a ray through the acceleration structure, the controller operable to identify bounding elements from the acceleration structure associated with the flag and allocate traversal of those bounding elements to the cluster of programmable execution units, and otherwise to allocate traversal of bounding elements to the plurality of test cells. 30. The system for testing a ray for intersection with an implicit surface of claim 29, wherein the controller is further operable to maintain collections of rays that await traversal within different acceleration structure elements, and to schedule traversal of each of the collections of rays. 31. The system for testing a ray for intersection with an implicit surface of claim 29, wherein the cluster of programmable execution units and the plurality of test cells each are operable to produce an update to a ray data structure that tracks a closest intersection for the ray. 32. The system for testing a ray for intersection with an implicit surface of claim 31, wherein the ray data structure tracks a plurality of candidate closest intersections, and after completing traversal of the ray, an intersection to be shaded is selected from among the candidate closest intersections. 33. 
The system for testing a ray for intersection with an implicit surface of claim 31, wherein the programmable execution units are operable to traverse the ray based on a referenced coordinate system and the update is expressed in the referenced coordinate system in the ray data structure. 34. A method of representing implicit surfaces for ray tracing based rendering, comprising: associating a description of an implicit surface with a primitive, wherein the description of the implicit surface can be evaluated for different coordinates on the surface of the primitive; marching a ray along a direction within a pre-defined 3-D volume defined based on an extent of the primitive; for each position of the ray within the 3-D volume, determining a surface coordinate for that position on the primitive; evaluating the description of the implicit surface for that surface coordinate; and indicating whether the ray hits or misses the implicit surface at that position within the 3-D volume based on the evaluating. 35-63. (canceled)
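The iterative stepping of claim 1 and the sign-bit hit test of claim 18 can be sketched together. In this Python toy, the height function, step size, and bounds are all illustrative stand-ins (the patent leaves the implicit-geometry function abstract); the ray marches through the volume and reports the first parameter value at which the sign of the height difference flips:

```python
def march_ray(origin, direction, height_fn, t_max, step):
    """March a ray through a bounded volume, testing an implicit
    height-field surface at each 3-D position along the ray.

    height_fn(x, y) gives the implicit geometry's height at the
    projected 2-D position; a hit is registered where the sign of
    (ray height - surface height) changes between steps, as in the
    sign-bit test of claim 18. Returns the ray parameter t at the
    first detected crossing, or None if the ray exits without one.
    """
    t = 0.0
    prev_sign = None
    while t <= t_max:
        # Current 3-D position of the ray.
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        # Project to a 2-D position and evaluate the implicit surface.
        diff = z - height_fn(x, y)
        sign = diff >= 0.0
        if prev_sign is not None and sign != prev_sign:
            return t  # sign change: the ray crossed the surface
        prev_sign = sign
        t += step
    return None  # no intersection within the volume
```

For example, a ray starting at z = 1 and descending over a flat surface at height 0.45 crosses it a little past t = 0.5 with a 0.1 step, while a surface placed below the ray's entire path yields no crossing.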
2,600
9,887
9,887
15,723,068
2,613
A method and system for transforming simple user input into customizable animated images for use in text-messaging applications.
1. A computer comprising a processor and a memory, the memory comprising instructions, which, when executed by the processor, perform a method of animating a sequence of characters or images in a text message, which method comprises: receiving an input sequence, which input sequence comprises a coding language sequence comprising at least one command in a set of commands; converting the input sequence into a message in a format that can be rendered by a text messaging software according to the at least one command; and transmitting the message to a recipient. 2. The method according to 0, wherein converting the input sequence into a message in a format that can be rendered by a text messaging software comprises generating a set of output image frames. 3. The method according to 0, further comprising combining at least a subset of the set of output image frames into an animated GIF and wherein the message comprises the GIF. 4. The method according to 0, wherein the input sequence further comprises at least one of a character string and an emoji. 5. The method according to 0, wherein the coding language sequence comprises an emoji. 6. The method according to 0, wherein the coding language sequence comprises a syntax, which syntax comprises the set of commands. 7-20. (canceled)
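The claim-1 pipeline (receive a coding-language sequence, apply its commands, generate output frames) can be sketched minimally. In this Python toy, the `text|move:N` syntax is invented purely for illustration, since the patent does not define a concrete command set, and the "frames" are plain strings standing in for image frames that a later stage would combine into an animated GIF:

```python
def render_frames(input_sequence):
    """Parse a toy coding-language sequence into per-frame 'images'.

    The input is 'text|command'. The only command here is the
    hypothetical 'move:N', which produces N frames shifting the text
    rightward one position per frame. Without a command, a single
    static frame is returned.
    """
    text, _, command = input_sequence.partition('|')
    frames = []
    if command.startswith('move:'):
        steps = int(command.split(':', 1)[1])
        for i in range(steps):
            frames.append(' ' * i + text)  # shift the text each frame
    else:
        frames.append(text)  # no command: one static frame
    return frames
```

Claim 3's GIF step would then encode such a frame list; with real images, a library call that assembles frames into an animated GIF would take the place of returning strings.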
2,600
9,888
9,888
15,000,197
2,612
Graphics processing systems may render multiple views of a scene (e.g. a sequence of frames) in a tile-based manner. Groups of views may be rendered together such that tiles from a group of views are rendered in an interspersed order, such that at least one tile from each of the views in the group is rendered before any of the views of the scene in the group are fully rendered. In this way, similar tiles from different views within a group may be rendered sequentially. If a particular rendered tile is similar to the next tile to be rendered, then data stored in a cache for rendering the particular tile is likely to be useful for rendering the next tile. Therefore, when rendering the next tile, less data needs to be fetched from the system memory, which can significantly improve the efficiency of the graphics processing system.
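The interspersed ordering described above (render the tile at each position from every view in the group before moving to the next position, as in claim 10's scheme) can be sketched as a generator. The function name and the shape of the tile positions are illustrative, not taken from any real driver interface:

```python
def interspersed_tile_order(num_views, tile_positions):
    """Yield (view_index, tile_position) pairs in an interspersed
    order: for each tile position, the tile at that position is
    rendered from every view in the group before any view advances to
    the next position. Similar tiles from different views are thus
    rendered back-to-back, keeping cached data for one tile hot for
    the matching tile in the next view."""
    for pos in tile_positions:
        for view in range(num_views):
            yield (view, pos)
```

Note the property the claims hinge on: after the first `num_views` entries of the order, at least one tile from each view in the group has been rendered, yet no view is fully rendered.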
1. A method of rendering views of a scene in a graphics processing unit which is configured to use a rendering space which is subdivided into a plurality of tiles, the method comprising: rendering tiles of the views of the scene in an interspersed order such that, for each group of a plurality of groups of views of the scene, at least one tile from each of the views of the scene in the group is rendered before any of the views of the scene in the group are fully rendered. 2. A graphics processing unit configured to render views of a scene, wherein the graphics processing unit is configured to use a rendering space which is subdivided into a plurality of tiles, the graphics processing unit comprising: a rendering unit configured to render tiles of the views of the scene in an interspersed order such that, for each group of a plurality of groups of views of the scene, at least one tile is rendered from each of the views of the scene in the group before any of the views of the scene in the group are fully rendered. 3. The graphics processing unit of claim 2 further comprising: a tiling unit configured to, for each of the views of the scene, process primitives of the view of the scene to determine, for each of the tiles of the view of the scene, which of the primitives are relevant for rendering the tile; wherein the rendering unit is configured to use the determinations of which of the primitives are relevant for rendering tiles of views of the scene for rendering tiles of the views of the scene. 4. The graphics processing unit of claim 3 wherein the tiling unit is configured to, for each of the views of the scene, perform tiling on the primitives of the view of the scene to determine display lists for tiles of the view of the scene, wherein the display list for a tile indicates which of the primitives are relevant for rendering the tile; wherein the rendering unit is configured to use the determined display lists for said rendering tiles of the views of the scene. 5. 
The graphics processing unit of claim 2 wherein at least some of the views of the scene in the group are frames representing images of the scene at a sequence of time instances. 6. The graphics processing unit of claim 2 wherein at least two of the views of the scene in the group are images of the scene from respective different viewpoints. 7. The graphics processing unit of claim 2 wherein at least one of the views of the scene in the group represents a sub-rendering for use in rendering another view of the scene. 8. The graphics processing unit of claim 7, wherein said sub-rendering and said another view of the scene relate to the same time instance. 9. The graphics processing unit of claim 7 wherein the sub-rendering is a shadow map, an environment map or a texture for use in rendering the other view of the scene. 10. The graphics processing unit of claim 2 wherein said interspersed order is such that the rendering unit is configured to render a tile at a first tile position from each of the views of the scene in a group, and to subsequently render a tile at a second tile position from each of the views of the scene in the group. 11. The graphics processing unit of claim 2 wherein the rendering unit is configured to use at least one transformation indicating tiles which are likely to be similar in different ones of the views of the scene in a group, wherein the interspersed order is based on the at least one transformation such that the rendering unit is configured to render similar tiles from the views of the scene in the group sequentially. 12. The graphics processing unit of claim 2 wherein the rendering unit comprises control logic configured to determine which tile is to be rendered after a particular tile has been rendered. 13. 
The graphics processing unit of claim 12 wherein the control logic is configured to determine which tile is to be rendered after a particular tile has been rendered by: obtaining at least one motion vector indicating motion between the particular tile of a particular view and regions of a different view; and selecting a tile to be rendered after the particular tile based on the obtained at least one motion vector. 14. The graphics processing unit of claim 13 wherein the control logic is configured to determine which tile is to be rendered after a particular tile has been rendered by: analysing at least one previous view of the scene to determine respective measures of similarity between a tile at the tile position of the particular tile and tiles at other tile positions; and selecting a tile to be rendered after the particular tile based on the similarity measures. 15. The graphics processing unit of claim 2 further comprising at least one cache, wherein the graphics processing unit is configured to: fetch data from a memory for use by the rendering unit in rendering a tile; and store fetched data in the at least one cache. 16. The graphics processing unit of claim 2 wherein the rendering unit comprises: a hidden surface removal module configured to perform hidden surface removal on at least some fragments of primitives that are relevant for rendering a tile; and a processing module configured to perform at least one of texturing and shading on at least some of the fragments of primitives that are relevant for rendering the tile. 17. 
The graphics processing unit of claim 4 wherein the tiling unit is configured to transform primitives for a view of the scene into the rendering space for the view of the scene to determine which primitives are relevant for rendering the tiles of the view of the scene, and cause data relating to the transformed primitives to be stored in a primitive store, wherein the display list for a tile of the view of the scene indicates which of the transformed primitives are relevant for rendering the tile, and wherein the rendering unit is configured to retrieve data relating to the transformed primitives which are relevant for rendering a tile from the primitive store for use in rendering the tile. 18. The graphics processing unit of claim 4 wherein the tiling unit is configured to transform primitives for a view of the scene into the rendering space for the view of the scene to determine which primitives are relevant for rendering the tiles of the view of the scene, wherein the display list for a tile of the view of the scene indicates which of the primitives are relevant for rendering the tile, and wherein the rendering unit is configured to transform primitives for a view of the scene into the rendering space for the view of the scene for the purpose of rendering the tiles of the view of the scene. 19. A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to render views of a scene in a graphics processing unit which is configured to use a rendering space which is subdivided into a plurality of tiles, the rendering of the views of a scene comprising: rendering tiles of the views of the scene in an interspersed order such that, for each group of a plurality of groups of views of the scene, at least one tile from each of the views of the scene in the group is rendered before any of the views of the scene in the group are fully rendered. 20. 
A non-transitory computer readable storage medium having stored thereon an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, configures the system to manufacture a graphics processing unit configured to render views of a scene, wherein the graphics processing unit is configured to use a rendering space which is subdivided into a plurality of tiles, the graphics processing unit comprising: a rendering unit configured to render tiles of the views of the scene in an interspersed order such that, for each group of a plurality of groups of views of the scene, at least one tile is rendered from each of the views of the scene in the group before any of the views of the scene in the group are fully rendered.
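The interspersed order recited in claims 1 and 10 can be illustrated with a short sketch (a hypothetical Python illustration, not the claimed hardware; the function name and loop structure are assumptions): the tile at each tile position is rendered from every view in the group before advancing to the next tile position, so no view is fully rendered before at least one tile of every other view has been rendered.

```python
# Hypothetical sketch of the interspersed tile order: the outer loop walks
# tile positions, the inner loop walks the views in the group, so similar
# tiles from different views are rendered back-to-back (improving cache reuse).

def interspersed_tile_order(num_views, num_tiles):
    """Yield (view, tile_position) pairs in an interspersed order."""
    order = []
    for tile_pos in range(num_tiles):      # advance through tile positions
        for view in range(num_views):      # round-robin across views in the group
            order.append((view, tile_pos))
    return order

# With 2 views and 3 tile positions, tile position 0 of both views is
# rendered before either view is fully rendered.
order = interspersed_tile_order(2, 3)
```

The round-robin per tile position matches claim 10 (a tile at a first tile position from each view, then a tile at a second tile position from each view); claims 11–14 replace this fixed order with one driven by transformations or motion vectors.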
Graphics processing systems may render multiple views of a scene (e.g. a sequence of frames) in a tile-based manner. Groups of views may be rendered together such that tiles from a group of views are rendered in an interspersed order such that at least one tile from each of the views in the group is rendered before any of the views of the scene in the group are fully rendered. In this way similar tiles from different views within a group may be rendered sequentially. If a particular rendered tile is similar to the next tile to be rendered then data stored in a cache for rendering the particular tile is likely to be useful for rendering the next tile. Therefore, when rendering the next tile less data needs to be fetched from the system memory which can significantly improve the efficiency of the graphics processing system.
2,600
9,889
9,889
15,290,300
2,632
A communication system receives an input signal along a signal processing path and generates a converted output signal via a digital-to-analog converter (DAC). The signal processing path branches into two different branches, a magnitude branch and a sign branch for different components of the baseband signal. A local oscillator (LO) provides a carrier signal to the signal processing path at the DAC and further generates an LO leakage signal comprising a signed LO leakage and an unsigned LO leakage during the up-conversion of signals of the sign branch with a carrier. An unsigned LO suppression component is configured to reduce or eliminate the unsigned LO leakage and a signed LO suppression component is configured to reduce or eliminate the signed LO leakage from a baseband signal of the signal processing path.
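The magnitude/sign branching described in the abstract can be sketched as follows (an illustrative Python fragment; the function name and the sign convention for zero are assumptions, not from the source): each baseband sample splits into an absolute-value component for the magnitude branch and a +1/-1 component for the sign branch before reaching the RFDAC.

```python
# Hypothetical sketch of the split node: the baseband signal branches into a
# magnitude branch (absolute value) and a sign branch (+1 or -1). Zero is
# treated as positive here, an assumption not stated in the source.

def split_magnitude_sign(samples):
    """Split baseband samples into magnitude and sign components."""
    magnitudes = [abs(s) for s in samples]
    signs = [1 if s >= 0 else -1 for s in samples]
    return magnitudes, signs

mags, signs = split_magnitude_sign([3, -2, 0, -5])
# mags  == [3, 2, 0, 5]
# signs == [1, -1, 1, -1]
```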
1. An apparatus of a mobile communication device comprising: a radio frequency (RF) frontend; a radio frequency digital-to-analog converter (RFDAC) configured to up-convert a digital baseband signal to a radio frequency, and convert the digital baseband signal from a signal processing chain to an analog signal, wherein the RFDAC is coupled to a local oscillator configured to provide an oscillator signal to the digital baseband signal of the signal processing chain, and generate a local oscillator (LO) leakage comprising a signed LO leakage and an unsigned LO leakage at an output of the RFDAC; an unsigned LO leakage suppression component configured to suppress the unsigned LO leakage from the digital baseband signal; and a signed LO leakage suppression component configured to suppress the signed LO leakage. 2. The apparatus of claim 1, wherein the unsigned LO leakage suppression component is configured to provide a first offset at a first location along the signal processing chain, and the signed LO leakage suppression component is further configured to provide a second offset at a second location along the signal processing chain to the RFDAC to suppress the signed LO leakage separately or independently from the unsigned LO leakage. 3. The apparatus of claim 1, further comprising: a magnitude component, coupled between a magnitude branch of the signal processing chain and the RFDAC, configured to receive the digital baseband signal at a split node of the signal processing chain and generate a magnitude signal of the digital baseband signal along the magnitude branch to the RFDAC; and a sign component, coupled between a sign branch of the signal processing chain and the RFDAC, configured to receive the digital baseband signal at the split node and generate a sign signal of the digital baseband signal along the sign branch to the RFDAC. 4. 
The apparatus of claim 3, wherein the signed LO leakage suppression component is coupled between the magnitude component and the RFDAC after a split of the digital baseband signal into the magnitude signal of the magnitude branch and the sign signal of the sign branch, and the unsigned LO leakage suppression component is coupled between the signal processing chain and the split node before the splitting of the digital baseband signal into the magnitude signal of the magnitude branch and the sign signal of the sign branch. 5. The apparatus of claim 1, wherein the unsigned LO leakage suppression component is further configured to receive the digital baseband signal and generate a first offset in the signal processing chain to suppress the unsigned LO leakage, and the signed LO leakage suppression component is further configured to receive a magnitude signal component of the digital baseband signal and generate a second offset in the signal processing chain to suppress the signed LO leakage from the magnitude signal component. 6. The apparatus of claim 5, wherein the first offset comprises a common bias and the second offset comprises a modification of the common bias by a modification component configured to modify the second offset by a mathematical operation based on whether the magnitude signal component is a negative value or a positive value. 7. The apparatus of claim 6, wherein the unsigned LO leakage suppression component is further configured to generate the first offset independently from the second offset generated from the signed LO leakage suppression component, and the signed LO leakage suppression component with the modification component is configured to generate different second offsets based on whether the magnitude signal component comprises the negative value or the positive value, wherein the first offset and the different second offsets are configured independently of one another. 8. 
The apparatus of claim 1, wherein the unsigned LO leakage suppression component provides an output signal to a magnitude branch and a sign branch that are arranged in parallel to one another and coupled to inputs of the RFDAC, wherein the magnitude branch is configured to generate an absolute value signal of the digital baseband signal and the sign branch is configured to generate a sign value signal to a mixer component. 9. The apparatus of claim 8, wherein the mixer component, coupled to the local oscillator and the RFDAC, is configured to mix the sign value signal with a local oscillator signal of the local oscillator and provide an output to the RFDAC concurrent to the magnitude branch providing the absolute value signal to the RFDAC. 10. The apparatus of claim 1, wherein the signal processing chain comprises a quadrature path and an in-phase path coupled to a plurality of magnitude component inputs to the RFDAC and a plurality of sign component inputs to the RFDAC. 11. A system of a mobile communication device comprising: a signal processing chain; a radio frequency digital-to-analog converter (RFDAC) coupled to the signal processing chain and configured to receive a baseband signal from the signal processing chain, up-convert the baseband signal to a radio frequency, and convert the baseband signal to an analog signal along a transmission path; a local oscillator configured to provide a carrier signal to the baseband signal to generate a local oscillator (LO) leakage comprising a signed LO leakage and an unsigned LO leakage; an unsigned LO leakage suppression component configured to suppress the unsigned LO leakage from the baseband signal; and a signed LO leakage suppression component configured to suppress the signed LO leakage. 12. 
The system of claim 11, further comprising a magnitude signal branch and a sign signal branch in parallel to the magnitude signal branch, configured to provide a magnitude signal component of the baseband signal via a magnitude component and a sign signal component of the baseband signal via a sign component, respectively, to the RFDAC. 13. The system of claim 12, wherein the signed LO leakage suppression component is located within the magnitude signal branch of the signal processing chain and the unsigned LO leakage suppression component is located along the signal processing chain before the magnitude signal branch and the sign signal branch. 14. The system of claim 12, wherein the magnitude signal branch comprises an in-phase magnitude path configured to provide an in-phase magnitude signal to the RFDAC and a quadrature magnitude path configured to provide a quadrature magnitude signal to the RFDAC, and wherein the sign signal branch comprises an in-phase sign path configured to provide an in-phase sign signal to a first mixer coupled to the RFDAC and the local oscillator, and a quadrature sign path configured to provide a quadrature sign signal to a second mixer coupled to the RFDAC and the local oscillator. 15. The system of claim 11, wherein the unsigned LO leakage suppression component is further configured to generate a first offset to the signal processing chain, and provide an output along a magnitude signal branch and a sign signal branch that splits from the signal processing chain at a split node, and wherein the signed LO leakage suppression component is configured to receive an input signal along the magnitude signal branch and provide a second offset to a second output along the magnitude signal branch that provides a magnitude signal to the RFDAC. 16. 
The system of claim 15, wherein the sign signal branch is configured to provide a sign signal component to a mixer that combines the sign signal component with a local oscillator signal and provides a combined signal to the RFDAC. 17. The system of claim 15, wherein the first offset comprises a first DC bias, and the second offset, corresponding to the signed LO leakage, comprises a second DC bias or a third DC bias based on the magnitude signal. 18. The system of claim 17, wherein the unsigned LO leakage suppression component is further configured to provide the first DC bias from a common bias, and the signed LO leakage suppression component is further configured to provide the second DC bias or the third DC bias based on a modification of the common bias by a 2's complement or an added value to suppress the signed LO leakage independently of the unsigned LO leakage, wherein the modification comprises a function of whether the baseband signal comprises a positive signal value or a negative signal value and wherein the first DC bias, the second DC bias and the third DC bias are configured independently of one another for LO leakage suppression. 19. 
An apparatus of a communication device comprising: a signal processing chain comprising an in-phase path configured to process an in-phase signal of a baseband signal, and a quadrature path configured to process a quadrature signal of the baseband signal; a magnitude branch, coupled to the signal processing chain at a split node and to a radio frequency digital-to-analog converter (RFDAC), comprising an in-phase magnitude path configured to provide an in-phase magnitude signal to an in-phase magnitude input of the RFDAC, and a quadrature magnitude signal path configured to provide a quadrature magnitude signal to a quadrature magnitude input of the RFDAC; a sign branch, coupled to the signal processing chain at the split node and the RFDAC, comprising an in-phase sign path configured to provide an in-phase sign signal to an in-phase sign input of the RFDAC, and a quadrature sign path configured to provide a quadrature sign signal to a quadrature sign input of the RFDAC; an unsigned LO leakage suppression component configured to suppress an unsigned LO leakage from an output of the RFDAC; and a signed LO leakage suppression component, coupled to the RFDAC, configured to suppress a signed LO leakage along the signal processing chain. 20. The apparatus of claim 19, further comprising: a local oscillator configured to provide a carrier signal to the sign branch to generate an analog signal at the RFDAC, and generate the signed LO leakage and the unsigned LO leakage at the output of the RFDAC in response to an up-conversion of the in-phase sign signal and the quadrature sign signal of the sign branch. 21. The apparatus of claim 20, further comprising: a first mixer and a second mixer configured to mix the in-phase sign signal and the quadrature sign signal with a first local oscillator signal and a second local oscillator signal, respectively, and provide mixed sign signal outputs to different sign inputs of the sign branch to the RFDAC. 22. 
The apparatus of claim 19, wherein the unsigned LO leakage suppression component is configured to generate a first DC bias and the signed LO leakage suppression component is configured to generate a second DC bias that is different and independent from the first DC bias. 23. The apparatus of claim 22, further comprising a modification component configured to provide a mathematical operation to the signed LO leakage suppression component to modify the second DC bias based on a sign value of the baseband signal component. 24. The apparatus of claim 19, wherein the unsigned LO leakage suppression component is further configured to provide a first offset to the signal processing chain before the split node that splits the signal processing chain into the magnitude branch and the sign branch. 25. The apparatus of claim 19, wherein the signed LO leakage suppression component is configured to provide an offset along the magnitude branch after the split node that splits the signal processing chain into the magnitude branch and the sign branch to modify the in-phase magnitude signal and the quadrature magnitude signal to the RFDAC.
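The two independent offsets described in claims 15 through 18 can be sketched as follows (a minimal Python illustration; the function name and all bias constants are purely illustrative assumptions): the unsigned LO suppression adds a common DC bias before the split node, while the signed LO suppression applies a second bias to the magnitude branch whose value depends on the sample's sign.

```python
# Hedged sketch of the two offsets: a common bias applied before the
# magnitude/sign split (unsigned LO leakage suppression) and a sign-dependent
# second bias applied along the magnitude branch (signed LO leakage
# suppression). The constants below are illustrative, not from the source.

COMMON_BIAS = 0.10   # first offset, applied before the split node
POS_BIAS = 0.02      # second offset when the biased sample is positive
NEG_BIAS = -0.03     # second offset (e.g. via a 2's complement) when negative

def apply_offsets(sample):
    """Return the magnitude-branch value after both offsets."""
    biased = sample + COMMON_BIAS                  # unsigned LO suppression
    second = POS_BIAS if biased >= 0 else NEG_BIAS # sign-dependent modification
    return abs(biased) + second                    # signed LO suppression
```

Keeping the two offsets at different locations in the chain, as claim 2 recites, is what allows the signed and unsigned leakage components to be suppressed independently of one another.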
A communication system receives an input signal along a signal processing path and generates a converted output signal via a digital-to-analog converter (DAC). The signal processing path branches into two different branches, a magnitude branch and a sign branch for different components of the baseband signal. A local oscillator (LO) provides a carrier signal to the signal processing path at the DAC and further generates an LO leakage signal comprising a signed LO leakage and an unsigned LO leakage during the up-conversion of signals of the sign branch with a carrier. An unsigned LO suppression component is configured to reduce or eliminate the unsigned LO leakage and a signed LO suppression component is configured to reduce or eliminate the signed LO leakage form a baseband signal of the signal processing path.1. An apparatus of a mobile communication device comprising: an radio frequency (RF) frontend; a radio frequency digital-to-analog converter (RFDAC) configured to up-convert a digital baseband signal to a radio frequency, and convert the digital baseband signal from a signal processing chain to an analog signal, wherein the RFDAC is coupled to a local oscillator configured to provide an oscillator signal to the digital baseband signal of the signal processing chain, and generate a local oscillator (LO) leakage comprising a signed LO leakage and an unsigned LO leakage at an output of the RFDAC; an unsigned local oscillator LO leakage suppression component configured to suppress the unsigned LO leakage from the digital baseband signal; and a signed LO leakage suppression component configured to suppress the signed LO leakage. 2. 
The apparatus of claim 1, wherein the unsigned LO leakage suppression component is configured to provide a first offset at a first location along the signal processing chain, and the signed LO leakage suppression component is further configured to provide a second offset at a second location along the signal processing chain to the RFDAC to suppress the signed LO leakage separately or independently from the unsigned LO leakage. 3. The apparatus of claim 1, further comprising: a magnitude component, coupled between magnitude branch of the signal processing chain and the RFDAC, configured to receive the digital baseband signal at a split node of the signal processing chain and generate a magnitude signal of the digital baseband signal along the magnitude branch to the RFDAC; and a sign component, coupled between a sign branch of the signal processing chain and the RFDAC, configured to receive the digital baseband signal at the split node and generate a sign signal of the digital baseband signal along the sign branch to the RFDAC. 4. The apparatus of claim 3, wherein the signed LO leakage suppression component is coupled between the magnitude component and the RFDAC after a split of the digital baseband signal into the magnitude signal of the magnitude branch and the sign signal of the sign branch, and the unsigned LO leakage suppression is coupled between the signal processing chain and the split node before the splitting of the digital baseband signal into the magnitude signal of the magnitude branch and the sign signal of the sign branch. 5. 
The apparatus of claim 1, wherein the unsigned LO leakage suppression component is further configured to receive the digital baseband signal and generate a first offset in the signal processing chain to suppress the unsigned LO leakage, and the signed LO leakage suppression component is further configured to receive a magnitude signal component of the digital baseband signal and generate a second offset in the signal processing chain to suppress the signed LO leakage from the magnitude signal component. 6. The apparatus of claim 5, wherein the first offset comprises a common bias and the second offset comprises a modification of the common bias by a modification component configured to modify the second offset by a mathematical operation based on whether the magnitude signal component is a negative value or a positive value. 7. The apparatus of claim 6, wherein the unsigned LO leakage suppression component is further configured to generate the first offset independently from the second offset generated from the signed LO leakage suppression component, and the signed LO leakage suppression component with the modification component is configured to generate different second offsets based on whether the magnitude signal component comprises the negative value or the positive value, wherein the first offset and the different second offsets. 8. The apparatus of claim 1, wherein the unsigned LO leakage suppression component provides an output signal to a magnitude branch and a sign branch that are arranged in parallel to one another and coupled to inputs of the RFDAC, wherein the magnitude branch is configured to generate an absolute value signal of the digital baseband signal and the sign branch is configured to generate a sign value signal to a mixer component. 9. 
The apparatus of claim 8, wherein the mixer component, coupled to the local oscillator and the RFDAC, is configured to mix the sign value signal with a local oscillator signal of the local oscillator and provide an output to the RFDAC concurrent to the magnitude branch providing the absolute value signal to the RFDAC. 10. The apparatus of claim 1, wherein the signal processing chain comprises a quadrature path and an in-phase path coupled to a plurality of magnitude component inputs to the RFDAC and a plurality of sign component inputs to the RFDAC. 11. A system of a mobile communication device comprising: a signal processing chain; a radio frequency digital-to-analog converter (RFDAC) coupled to the signal processing chain and configured to receive a baseband signal from the signal processing chain, up-convert the baseband signal to a radio frequency, and convert the baseband signal to an analog signal along a transmission path; a local oscillator configured to provide a carrier signal to the baseband signal to generate a local oscillator (LO) leakage comprising a signed LO leakage and an unsigned LO leakage; an unsigned LO leakage suppression component configured to suppress the unsigned LO leakage from the baseband signal; and a signed LO leakage suppression component configured to suppress the signed LO leakage. 12. The system of claim 11, further comprising a magnitude signal branch and a sign signal branch in parallel to the magnitude signal branch, configured to provide a magnitude signal component of the baseband signal via a magnitude component and a sign signal component of the baseband signal via a sign component, respectively, to the RFDAC. 13. The system of claim 12, wherein the signed LO leakage suppression component is located within the magnitude signal branch of the signal processing chain and the unsigned LO leakage suppression component is located along the signal processing chain before the magnitude signal branch and the sign signal branch. 14. 
The system of claim 12, wherein the magnitude signal branch comprises an in-phase magnitude path configured to provide an in-phase magnitude signal to the RFDAC and a quadrature magnitude path configured to provide a quadrature magnitude signal to the RFDAC, and wherein the sign signal branch comprises an in-phase sign path configured to provide an in-phase sign signal to a first mixer coupled to the RFDAC and the local oscillator, and a quadrature sign path configured to provide a quadrature sign signal to a second mixer coupled to the RFDAC and the local oscillator. 15. The system of claim 11, wherein the unsigned LO leakage suppression component is further configured to generate a first offset to the signal processing chain, and provide an output along a magnitude signal branch and a sign signal branch that splits from the signal processing chain at a split node, and wherein the signed LO leakage suppression component is configured to receive an input signal along the magnitude signal branch and provide a second offset to a second output along the magnitude signal branch that provides a magnitude signal to the RFDAC. 16. The system of claim 15, wherein the sign signal branch is configured to provide a sign signal component to a mixer that combines the sign signal component with a local oscillator signal and provides a combined signal to the RFDAC. 17. The system of claim 15, wherein the first offset comprises a first DC bias, and the second offset, corresponding to the signed LO leakage, comprises a second DC bias or a third DC bias based on the magnitude signal. 18. 
The system of claim 17, wherein the unsigned LO leakage suppression component is further configured to provide the first DC bias from a common bias, and the signed LO leakage suppression component is further configured to provide the second DC bias or the third DC bias based on a modification of the common bias by a 2's complement or an added value to suppress the signed LO leakage independently of the unsigned LO leakage, wherein the modification comprises a function of whether the baseband signal comprises a positive signal value or a negative signal value and wherein the first DC bias, the second DC bias and the third DC bias are configured independently of one another. 19. An apparatus of a communication device comprising: a signal processing chain comprising an in-phase path configured to process an in-phase signal of a baseband signal, and a quadrature path configured to process a quadrature signal of the baseband signal; a magnitude branch, coupled to the signal processing chain at a split node and to a radio frequency digital-to-analog converter (RFDAC), comprising an in-phase magnitude path configured to provide an in-phase magnitude signal to an in-phase magnitude input of the RFDAC, and a quadrature magnitude signal path configured to provide a quadrature magnitude signal to a quadrature magnitude input of the RFDAC; a sign branch, coupled to the signal processing chain at the split node and the RFDAC, comprising an in-phase sign path configured to provide an in-phase sign signal to an in-phase sign input of the RFDAC, and a quadrature sign path configured to provide a quadrature sign signal to a quadrature sign input of the RFDAC; an unsigned LO leakage suppression component configured to suppress an unsigned LO leakage from an output of the RFDAC; and a signed LO leakage suppression component, coupled to the RFDAC, configured to suppress a signed LO leakage along the signal processing chain. 20. 
The apparatus of claim 19, further comprising: a local oscillator configured to provide a carrier signal to the sign branch to generate an analog signal at the RFDAC, and generate the signed LO leakage and the unsigned LO leakage at the output of the RFDAC in response to an up-conversion of the in-phase sign signal and the quadrature sign signal of the sign branch. 21. The apparatus of claim 20, further comprising: a first mixer and a second mixer configured to mix the in-phase sign signal and the quadrature sign signal with a first local oscillator signal and a second local oscillator signal, respectively, and provide mixed sign signal outputs to different sign inputs of the sign branch to the RFDAC. 22. The apparatus of claim 19, wherein the unsigned LO leakage suppression component is configured to generate a first DC bias and the signed LO leakage suppression component is configured to generate a second DC bias that is different and independent from the first DC bias. 23. The apparatus of claim 22, further comprising a modification component configured to provide a mathematical operation to the signed LO leakage suppression component to modify the second DC bias based on a sign value of the baseband signal component. 24. The apparatus of claim 19, wherein the unsigned LO leakage suppression component is further configured to provide a first offset to the signal processing chain before the split node that splits the signal processing chain into the magnitude branch and the sign branch. 25. The apparatus of claim 19, wherein the signed LO leakage suppression component is configured to provide an offset along the magnitude branch after the split node that splits the signal processing chain into the magnitude branch and the sign branch to modify the in-phase magnitude signal and the quadrature magnitude signal to the RFDAC.
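The claims above describe two independent corrections: an unsigned LO leakage offset applied to the signal processing chain before the magnitude/sign split, and a signed LO leakage offset formed by modifying a common bias depending on the sign of the sample (claims 5-7 and 18). The sketch below is only a numeric illustration of that offset arithmetic, not the claimed circuit; the function names and offset values are assumptions, and a simple negation stands in for the 2's-complement modification mentioned in claim 18.

```python
def split_mag_sign(x):
    """Split a baseband sample into magnitude and sign components."""
    return abs(x), 1 if x >= 0 else -1

def suppress_lo_leakage(sample, unsigned_offset, common_bias):
    # Unsigned suppression: a first offset applied to the chain before the
    # split node that separates the magnitude and sign branches.
    biased = sample + unsigned_offset
    mag, sign = split_mag_sign(biased)
    # Signed suppression: the common bias is modified according to whether
    # the sample is positive or negative (negation used here as a stand-in
    # for the 2's-complement modification).
    signed_offset = common_bias if sign > 0 else -common_bias
    return mag + signed_offset, sign

# Illustrative values only; real biases come from calibration.
mag, sign = suppress_lo_leakage(-0.5, unsigned_offset=0.01, common_bias=0.002)
```

Because the two offsets are applied at different points in the chain, each can be tuned without disturbing the other, which is the independence the claims emphasize.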
2,600
9,890
9,890
14,560,487
2,684
A seat assembly is provided with a seat cushion, a seat back, and a head restraint. A plurality of sensors is operably connected to at least one of the seat cushion and the seat back to detect a seating position of an occupant. A media device is provided. A controller is in electrical communication with the plurality of sensors and the media device, and is configured to receive data from the plurality of sensors, compare the data to determine if the occupant is seated evenly, and operate the media device to inform the occupant of an uneven posture seating position. A computer-program product is programmed for automatically displaying a pressure distribution upon a seat assembly. The displayed pressure distribution of the seat assembly is from measured pressure values from a plurality of sensors in a plurality of zones of the seat assembly.
1. A seat assembly comprising: a seat cushion; a seat back adapted to be pivotally mounted adjacent the seat cushion; a plurality of sensors operably connected to at least one of the seat cushion and the seat back to detect a seating position of an occupant; a media device; and a controller in electrical communication with the plurality of sensors and the media device, configured to receive data from the plurality of sensors, compare the data to determine if the occupant is seated evenly, and operate the media device to inform the occupant of an uneven posture seating position. 2. The seat assembly of claim 1 wherein the plurality of sensors comprise: at least one left side sensor; and at least one right side sensor. 3. The seat assembly of claim 2 wherein the controller determines if the occupant is seated evenly left to right, and operates the media device to inform the occupant of an uneven left to right seating position. 4. The seat assembly of claim 1 wherein the media device comprises a display. 5. The seat assembly of claim 4 wherein the controller is configured to incrementally update the display to provide a visual feedback of adjustment of a seating position. 6. The seat assembly of claim 4 wherein the display comprises indicia indicative of the seat assembly with a plurality of zones corresponding to a plurality of zones of the seat assembly, each zone provided with one of the plurality of sensors. 7. The seat assembly of claim 6 wherein the zones are displayed ranging in color to represent a range of pressures measured by the plurality of sensors. 8. The seat assembly of claim 1 wherein the media device comprises an interactive user interface configured to receive input from the occupant. 9. The seat assembly of claim 8 wherein the media device is located remotely from the seat assembly. 10. The seat assembly of claim 9 wherein the media device is configured to communicate with the controller via wireless communication. 11. 
The seat assembly of claim 10 wherein the media device comprises a portable electronic device. 12. The seat assembly of claim 10 wherein the media device comprises at least one of a personal data assistant, a smartphone and a tablet. 13. A computer-program product embodied in a non-transitory computer readable medium that is programmed for automatically displaying a pressure distribution upon a seat assembly, the computer-program product comprising instructions for: receiving input indicative of measured pressure values from a plurality of sensors in a plurality of zones of a seat assembly; assigning a range of colors corresponding to a range of measured pressure values; and providing signals to a display indicative of a color distribution to the plurality of zones of the seat assembly as a visual representation of pressure distribution upon the seat assembly. 14. The computer-program product of claim 13 further comprising instructions for repeating the instructions of claim 13 incrementally. 15. The computer-program product of claim 13 wherein the plurality of zones comprise a plurality of left and right zones of the seat assembly. 16. The computer-program product of claim 15 further comprising instructions for providing signals to the display indicative of an uneven pressure distribution across the plurality of left and right zones of the seat assembly. 17. A method for displaying pressure distribution of a seat assembly comprising steps of: measuring pressure values from a plurality of sensors in a plurality of zones of a seat assembly; determining a range of the measured pressure values; assigning a range of colors to the range of the measured pressure values; and displaying the seat assembly with the plurality of zones colored by the range of colors as a visual representation of pressure distribution upon the seat assembly. 18. A method comprising repeating the steps of claim 17 incrementally. 19. 
The method of claim 17 further comprising a step of measuring pressure values from a plurality of left and right sensors in a plurality of left and right zones of the seat assembly. 20. The method of claim 19 further comprising a step of displaying the seat assembly with the plurality of left and right zones colored by the range of colors.
A seat assembly is provided with a seat cushion, a seat back, and a head restraint. A plurality of sensors is operably connected to at least one of the seat cushion and the seat back to detect a seating position of an occupant. A media device is provided. A controller is in electrical communication with the plurality of sensors and the media device, and is configured to receive data from the plurality of sensors, compare the data to determine if the occupant is seated evenly, and operate the media device to inform the occupant of an uneven posture seating position. A computer-program product is programmed for automatically displaying a pressure distribution upon a seat assembly. The displayed pressure distribution of the seat assembly is from measured pressure values from a plurality of sensors in a plurality of zones of the seat assembly. 1. A seat assembly comprising: a seat cushion; a seat back adapted to be pivotally mounted adjacent the seat cushion; a plurality of sensors operably connected to at least one of the seat cushion and the seat back to detect a seating position of an occupant; a media device; and a controller in electrical communication with the plurality of sensors and the media device, configured to receive data from the plurality of sensors, compare the data to determine if the occupant is seated evenly, and operate the media device to inform the occupant of an uneven posture seating position. 2. The seat assembly of claim 1 wherein the plurality of sensors comprise: at least one left side sensor; and at least one right side sensor. 3. The seat assembly of claim 2 wherein the controller determines if the occupant is seated evenly left to right, and operates the media device to inform the occupant of an uneven left to right seating position. 4. The seat assembly of claim 1 wherein the media device comprises a display. 5. 
The seat assembly of claim 4 wherein the controller is configured to incrementally update the display to provide a visual feedback of adjustment of a seating position. 6. The seat assembly of claim 4 wherein the display comprises indicia indicative of the seat assembly with a plurality of zones corresponding to a plurality of zones of the seat assembly, each zone provided with one of the plurality of sensors. 7. The seat assembly of claim 6 wherein the zones are displayed ranging in color to represent a range of pressures measured by the plurality of sensors. 8. The seat assembly of claim 1 wherein the media device comprises an interactive user interface configured to receive input from the occupant. 9. The seat assembly of claim 8 wherein the media device is located remotely from the seat assembly. 10. The seat assembly of claim 9 wherein the media device is configured to communicate with the controller via wireless communication. 11. The seat assembly of claim 10 wherein the media device comprises a portable electronic device. 12. The seat assembly of claim 10 wherein the media device comprises at least one of a personal data assistant, a smartphone and a tablet. 13. A computer-program product embodied in a non-transitory computer readable medium that is programmed for automatically displaying a pressure distribution upon a seat assembly, the computer-program product comprising instructions for: receiving input indicative of measured pressure values from a plurality of sensors in a plurality of zones of a seat assembly; assigning a range of colors corresponding to a range of measured pressure values; and providing signals to a display indicative of a color distribution to the plurality of zones of the seat assembly as a visual representation of pressure distribution upon the seat assembly. 14. The computer-program product of claim 13 further comprising instructions for repeating the instructions of claim 13 incrementally. 15. 
The computer-program product of claim 13 wherein the plurality of zones comprise a plurality of left and right zones of the seat assembly. 16. The computer-program product of claim 15 further comprising instructions for providing signals to the display indicative of an uneven pressure distribution across the plurality of left and right zones of the seat assembly. 17. A method for displaying pressure distribution of a seat assembly comprising steps of: measuring pressure values from a plurality of sensors in a plurality of zones of a seat assembly; determining a range of the measured pressure values; assigning a range of colors to the range of the measured pressure values; and displaying the seat assembly with the plurality of zones colored by the range of colors as a visual representation of pressure distribution upon the seat assembly. 18. A method comprising repeating the steps of claim 17 incrementally. 19. The method of claim 17 further comprising a step of measuring pressure values from a plurality of left and right sensors in a plurality of left and right zones of the seat assembly. 20. The method of claim 19 further comprising a step of displaying the seat assembly with the plurality of left and right zones colored by the range of colors.
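Method claims 13 and 17 above walk through the same color-mapping steps: measure zone pressures, determine the range of the measured values, assign a range of colors to that range, and display each zone in its assigned color. The sketch below illustrates those steps under stated assumptions; the zone names and the three-color palette are hypothetical, not taken from the patent.

```python
def assign_zone_colors(pressures, palette=("green", "yellow", "red")):
    """Map each zone's measured pressure to a color bucket across the range."""
    lo, hi = min(pressures.values()), max(pressures.values())
    span = (hi - lo) or 1  # avoid division by zero when all zones read equal
    colors = {}
    for zone, p in pressures.items():
        # Normalize the pressure into [0, 1] and pick a palette bucket.
        idx = min(int((p - lo) / span * len(palette)), len(palette) - 1)
        colors[zone] = palette[idx]
    return colors

# Hypothetical left/right zone readings (arbitrary units).
zones = {"left_cushion": 10.0, "right_cushion": 30.0,
         "left_back": 12.0, "right_back": 20.0}
colors = assign_zone_colors(zones)
```

Comparing the colors of paired left and right zones (here, a green left cushion against a red right cushion) is one simple way such a display could convey the uneven left-to-right distribution of claim 16.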
2,600
9,891
9,891
15,282,475
2,613
Methods, apparatus, and systems are disclosed for altering displayed content on a display device responsive to a user's proximity. In one example, a computing system includes a memory, a sensor to collect data representative of a viewing distance between a display and a user of the display, and a scaler to adjust a size of at least one object displayed by the display based on the viewing distance from the display.
1. A computing system, comprising: a sensor to collect data representative of a viewing distance between a display and a user of the display; and a scaler to adjust a size of at least one object displayed by the display based on the viewing distance from the display. 2. The computing system of claim 1, further including a display. 3. The computing system of claim 2, wherein the at least one object includes a graphical element, a navigation element, text, an image, a font, or a combination thereof. 4. The computing system of claim 3, wherein the sensor includes at least one of a camera, an infrared camera, an infrared laser projector, a range camera, an infrared distance sensor, a laser rangefinder, an infrared sensor, or a combination thereof. 5. The computing system of claim 4, wherein the range camera includes a stereo camera or a time-of-flight camera. 6. The computing system of claim 3, wherein the sensor includes an ultrasonic sensor. 7. The computing system of claim 3, wherein the scaler is to increase the size of the at least one object responsive to an increase in the viewing distance and to decrease the size of the at least one object responsive to a decrease in the viewing distance. 8. The computing system of claim 3, wherein the navigation element includes at least one of a menu, a window, a selectable element, a soft key, an icon, a widget, a graphical control element, a tab, a button, a pointer, or a cursor. 9. The computing system of claim 3, wherein the scaler is to alter a display mode to increase a display resolution, relative to a native display resolution, responsive to a decrease in the viewing distance and to decrease the display resolution responsive to an increase in the viewing distance. 10. The computing system of claim 3, wherein the scaler is to adjust the size of the at least one object while maintaining a native display resolution. 11. 
The computing system of claim 3, wherein the sensor is to collect data representative of a facial landmark or a facial feature of a user. 12.-23. (cancelled) 24. A method of altering content displayed on a display, comprising: collecting, via a sensor, data representative of a viewing distance between a display and a user of the display; and adjusting a size of at least one object displayed by the display, via a scaler, based on the viewing distance, wherein the at least one object includes an icon, a navigation element, text, an image, a font, or a combination thereof. 25. The method of altering the display of claim 24, further including: determining, via a scaler, a pixels-per-degree value corresponding to the viewing distance; and conditioning the adjusting of the size of the at least one object on the pixels-per-degree value determined via the scaler. 26. The method of altering the display of claim 25, wherein the data representative of a viewing distance includes a plurality of distance measurements performed within a predetermined period of time. 27. The method of altering the display of claim 26, wherein the determining of the pixels-per-degree value includes determining an average pixels-per-degree value using valid ones of the plurality of distance measurements. 28. The method of altering the display of claim 24, wherein the adjusting of the size of at least one object displayed by the display includes using the scaler to alter a display mode to adjust a display resolution relative to a native display resolution. 29. The method of altering the display of claim 24, wherein the viewing distance is representative of a distance between the display and a user's eyes. 30. The method of altering the display of claim 24, wherein the sensor includes at least one of a camera, an infrared camera, an infrared laser projector, a range camera, or an ultrasonic sensor. 31. (canceled) 32. 
The method of altering the display of claim 24, including: increasing, via the scaler, the size of the at least one object responsive to an increase in the viewing distance, and decreasing, via the scaler, the size of the at least one object responsive to a decrease in the viewing distance. 33. The method of altering the display of claim 24, wherein the navigation element includes at least one of a menu, a window, a selectable element, a soft key, an icon, a widget, a graphical control element, a tab, a button, a pointer, or a cursor. 34.-58. (canceled)
Methods, apparatus, and systems are disclosed for altering displayed content on a display device responsive to a user's proximity. In one example, a computing system includes a memory, a sensor to collect data representative of a viewing distance between a display and a user of the display, and a scaler to adjust a size of at least one object displayed by the display based on the viewing distance from the display. 1. A computing system, comprising: a sensor to collect data representative of a viewing distance between a display and a user of the display; and a scaler to adjust a size of at least one object displayed by the display based on the viewing distance from the display. 2. The computing system of claim 1, further including a display. 3. The computing system of claim 2, wherein the at least one object includes a graphical element, a navigation element, text, an image, a font, or a combination thereof. 4. The computing system of claim 3, wherein the sensor includes at least one of a camera, an infrared camera, an infrared laser projector, a range camera, an infrared distance sensor, a laser rangefinder, an infrared sensor, or a combination thereof. 5. The computing system of claim 4, wherein the range camera includes a stereo camera or a time-of-flight camera. 6. The computing system of claim 3, wherein the sensor includes an ultrasonic sensor. 7. The computing system of claim 3, wherein the scaler is to increase the size of the at least one object responsive to an increase in the viewing distance and to decrease the size of the at least one object responsive to a decrease in the viewing distance. 8. The computing system of claim 3, wherein the navigation element includes at least one of a menu, a window, a selectable element, a soft key, an icon, a widget, a graphical control element, a tab, a button, a pointer, or a cursor. 9. 
The computing system of claim 3, wherein the scaler is to alter a display mode to increase a display resolution, relative to a native display resolution, responsive to a decrease in the viewing distance and to decrease the display resolution responsive to an increase in the viewing distance. 10. The computing system of claim 3, wherein the scaler is to adjust the size of the at least one object while maintaining a native display resolution. 11. The computing system of claim 3, wherein the sensor is to collect data representative of a facial landmark or a facial feature of a user. 12.-23. (cancelled) 24. A method of altering content displayed on a display, comprising: collecting, via a sensor, data representative of a viewing distance between a display and a user of the display; and adjusting a size of at least one object displayed by the display, via a scaler, based on the viewing distance, wherein the at least one object includes an icon, a navigation element, text, an image, a font, or a combination thereof. 25. The method of altering the display of claim 24, further including: determining, via a scaler, a pixels-per-degree value corresponding to the viewing distance; and conditioning the adjusting of the size of the at least one object on the pixels-per-degree value determined via the scaler. 26. The method of altering the display of claim 25, wherein the data representative of a viewing distance includes a plurality of distance measurements performed within a predetermined period of time. 27. The method of altering the display of claim 26, wherein the determining of the pixels-per-degree value includes determining an average pixels-per-degree value using valid ones of the plurality of distance measurements. 28. The method of altering the display of claim 24, wherein the adjusting of the size of at least one object displayed by the display includes using the scaler to alter a display mode to adjust a display resolution relative to a native display resolution. 
29. The method of altering the display of claim 24, wherein the viewing distance is representative of a distance between the display and a user's eyes. 30. The method of altering the display of claim 24, wherein the sensor includes at least one of a camera, an infrared camera, an infrared laser projector, a range camera, or an ultrasonic sensor. 31. (canceled) 32. The method of altering the display of claim 24, including: increasing, via the scaler, the size of the at least one object responsive to an increase in the viewing distance, and decreasing, via the scaler, the size of the at least one object responsive to a decrease in the viewing distance. 33. The method of altering the display of claim 24, wherein the navigation element includes at least one of a menu, a window, a selectable element, a soft key, an icon, a widget, a graphical control element, a tab, a button, a pointer, or a cursor. 34.-58. (canceled)
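Claims 24-25 above describe adjusting object size from the measured viewing distance via a pixels-per-degree value: the farther the viewer, the more pixels one degree of visual angle spans, so objects are scaled up to preserve their apparent (angular) size. A hedged sketch of that geometry follows; the reference distance and pixel density are illustrative assumptions, not values from the disclosure.

```python
import math

def pixels_per_degree(distance_mm, pixels_per_mm):
    """Pixels subtended by one degree of visual angle at the given distance."""
    return 2 * distance_mm * math.tan(math.radians(0.5)) * pixels_per_mm

def scaled_size(base_px, distance_mm, reference_mm=500.0, pixels_per_mm=4.0):
    """Scale an object so its angular size matches that at a reference distance."""
    scale = (pixels_per_degree(distance_mm, pixels_per_mm)
             / pixels_per_degree(reference_mm, pixels_per_mm))
    return base_px * scale
```

In this small-angle regime the scale factor reduces to the ratio of distances, so doubling the viewing distance doubles the rendered size, matching the increase/decrease behavior of claim 32.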
2,600
9,892
9,892
15,085,329
2,684
An electronic device is operated by detecting a tag having a sensor associated therewith and being configured to transmit information over a defined distance using a short range wireless protocol via a communication link, and receiving sensor information transmitted by the tag over the communication link.
1. A method of operating an electronic device, comprising: detecting, using a tag reader circuit, a tag having a sensor associated with the tag, the tag being configured to communicate with the sensor to receive sensor information from the sensor and being further configured to transmit information over a defined distance using a short range wireless protocol via a communication link; receiving, via the tag reader circuit, the sensor information transmitted by the tag over the communication link; and sending the sensor information to an application server periodically using a defined schedule. 2. The method of claim 1, further comprising: receiving a message from the application server providing needed information for interacting with an application residing on the electronic device. 3. The method of claim 1, further comprising: receiving a message from the application server providing needed information for interacting with another device. 4. The method of claim 1, wherein the sensor is configured to generate the sensor information for an event and wherein sending the sensor information to the application server comprises: sending the sensor information to the application server responsive to receipt of the sensor information for a single event at the electronic device without delaying to accumulate sensor information for additional events. 5. The method of claim 1, wherein the sensor is configured to generate the sensor information for an event and wherein sending the sensor information to the application server comprises: buffering the sensor information for a plurality of events at the electronic device; and sending the buffered sensor information to the application server. 6. 
The method of claim 1, wherein the sensor is configured to generate the sensor information for an event and the sensor is further configured to buffer the sensor information for a plurality of events, and wherein sending the sensor information to the application server comprises: receiving the buffered sensor information transmitted by the tag over the communication link; and sending the buffered sensor information to the application server. 7. The method of claim 1, wherein sending the sensor information to the application server comprises: sending the sensor information to the application server based on a transmission bandwidth allocation assigned to the electronic device for communication with the application server. 8. The method of claim 1, wherein sending the sensor information to the application server comprises: receiving metadata transmitted by the tag over the communication link; and sending the sensor information to the application server based on the metadata. 9. The method of claim 1, wherein receiving the sensor information comprises: receiving the sensor information based on movement of the electronic device, time of day, sensor activity, and/or power availability. 10. The method of claim 1, further comprising: sending a message to the tag to change operational behavior of the tag responsive to the tag being placed in a bi-directional communication mode. 11. The method of claim 10, wherein the operational behavior comprises operation of the sensor associated with the tag and/or transmission behavior of the tag. 12. The method of claim 10, wherein sending the message to the tag comprises: updating firmware associated with the tag. 13. The method of claim 10, wherein the message is received from an application server. 14. The method of claim 1, wherein the electronic device is a mobile terminal. 15. The method of claim 1, wherein the sensor information comprises authentication information that identifies a person. 16. 
The method of claim 15, wherein the authentication information comprises biometric information. 17. The method of claim 15, further comprising: sending a message to the tag, the message used to operate a device responsive to authenticating an identity of the person. 18. The method of claim 1, wherein sensing operation of the sensor and/or transmission operation of the tag are changed based on an event sensed by the sensor.
An electronic device is operated by detecting a tag having a sensor associated therewith and being configured to transmit information over a defined distance using a short range wireless protocol via a communication link, and receiving sensor information transmitted by the tag over the communication link. 1. A method of operating an electronic device, comprising: detecting, using a tag reader circuit, a tag having a sensor associated with the tag, the tag being configured to communicate with the sensor to receive sensor information from the sensor and being further configured to transmit information over a defined distance using a short range wireless protocol via a communication link; receiving, via the tag reader circuit, the sensor information transmitted by the tag over the communication link; and sending the sensor information to an application server periodically using a defined schedule. 2. The method of claim 1, further comprising: receiving a message from the application server providing needed information for interacting with an application residing on the electronic device. 3. The method of claim 1, further comprising: receiving a message from the application server providing needed information for interacting with another device. 4. The method of claim 1, wherein the sensor is configured to generate the sensor information for an event and wherein sending the sensor information to the application server comprises: sending the sensor information to the application server responsive to receipt of the sensor information for a single event at the electronic device without delaying to accumulate sensor information for additional events. 5. The method of claim 1, wherein the sensor is configured to generate the sensor information for an event and wherein sending the sensor information to the application server comprises: buffering the sensor information for a plurality of events at the electronic device; and sending the buffered sensor information to the application server. 6. 
The method of claim 1, wherein the sensor is configured to generate the sensor information for an event and the sensor is further configured to buffer the sensor information for a plurality of events, and wherein sending the sensor information to the application server comprises: receiving the buffered sensor information transmitted by the tag over the communication link; and sending the buffered sensor information to the application server. 7. The method of claim 1, wherein sending the sensor information to the application server comprises: sending the sensor information to the application server based on a transmission bandwidth allocation assigned to the electronic device for communication with the application server. 8. The method of claim 1, wherein sending the sensor information to the application server comprises: receiving metadata transmitted by the tag over the communication link; and sending the sensor information to the application server based on the metadata. 9. The method of claim 1, wherein receiving the sensor information comprises: receiving the sensor information based on movement of the electronic device, time of day, sensor activity, and/or power availability. 10. The method of claim 1, further comprising: sending a message to the tag to change operational behavior of the tag responsive to the tag being placed in a bi-directional communication mode. 11. The method of claim 10, wherein the operational behavior comprises operation of the sensor associated with the tag and/or transmission behavior of the tag. 12. The method of claim 10, wherein sending the message to the tag comprises: updating firmware associated with the tag. 13. The method of claim 10, wherein the message is received from an application server. 14. The method of claim 1, wherein the electronic device is a mobile terminal. 15. The method of claim 1, wherein the sensor information comprises authentication information that identifies a person. 16. 
The method of claim 15, wherein the authentication information comprises biometric information. 17. The method of claim 15, further comprising: sending a message to the tag, the message used to operate a device responsive to authenticating an identity of the person. 18. The method of claim 1, wherein sensing operation of the sensor and/or transmission operation of the tag are changed based on an event sensed by the sensor.
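The record above claims a tag reader that forwards sensor information to an application server either per event (claim 4) or batched (claim 5). The sketch below illustrates only that forwarding logic, not the patented system; the class, its parameters, and the `send_to_server` callable are all hypothetical names introduced for illustration, and the radio/GATT layer is abstracted away entirely.

```python
from collections import deque


class TagReader:
    """Illustrative sketch of the claimed forwarding behavior: sensor
    events received from a short-range tag are sent to an application
    server either one-by-one (claim 4) or buffered into a batch
    (claim 5). All names here are hypothetical."""

    def __init__(self, send_to_server, buffer_events=False, batch_size=8):
        self.send_to_server = send_to_server  # callable: uploads a list of events
        self.buffer_events = buffer_events
        self.batch_size = batch_size
        self._buffer = deque()

    def on_sensor_event(self, event):
        if not self.buffer_events:
            # Claim 4: forward the single event without accumulating more.
            self.send_to_server([event])
        else:
            # Claim 5: accumulate events, then flush them as one batch.
            self._buffer.append(event)
            if len(self._buffer) >= self.batch_size:
                self.flush()

    def flush(self):
        """Send any buffered events to the server as a single batch."""
        if self._buffer:
            self.send_to_server(list(self._buffer))
            self._buffer.clear()
```

In a real device the per-event mode trades battery and bandwidth for latency, which is why the claims also condition transmission on schedule, bandwidth allocation, and metadata (claims 1, 7, 8).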
2,600
9,893
9,893
14,830,693
2,626
The present disclosure relates to systems and processes for limiting notifications on an electronic device. In one example process, data representing a user input can be received by an electronic device. The data representing the user input can include touch data from the touch-sensitive device, ambient light data from an ambient light sensor, intensity data from a contact intensity sensor, and/or motion data from one or more motion sensors. Based on the data, it can be determined whether the user input is a cover gesture over a touch-sensitive display of the electronic device. In response to determining that the user input is a cover gesture over the touch-sensitive display, the electronic device can be put into a DND mode for a predetermined amount of time. While in the DND mode, the electronic device can cease to output some or all notifications.
1. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device comprising a touch-sensitive display, cause the device to: receive data representing a user input; determine, based on the data representing the user input, whether the user input is a cover gesture over the touch-sensitive display; and in accordance with a determination that the user input is the cover gesture over the touch-sensitive display, cause the device to enter a do not disturb (DND) mode. 2. The non-transitory computer readable storage medium of claim 1, wherein the data representing the user input comprises touch data output by the touch-sensitive display, and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the touch data, whether a touch has been detected at a threshold amount of the touch-sensitive display. 3. The non-transitory computer readable storage medium of claim 1, wherein: the electronic device further comprises an ambient light sensor; the data representing the user input comprises ambient light data representing an amount of light received by the ambient light sensor; and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the ambient light data, whether the amount of light received by the ambient light sensor is below a threshold light value. 4. 
The non-transitory computer readable storage medium of claim 1, wherein: the electronic device further comprises a motion sensor; the data representing the user input comprises motion data representing motion of the electronic device detected by the motion sensor; and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the motion data, whether the motion of the electronic device is above a threshold amount of motion and whether the motion occurred within a predetermined length of time after a notification event. 5. The non-transitory computer readable storage medium of claim 1, wherein: the electronic device further comprises a contact intensity sensor; the data representing the user input comprises intensity data representing a characteristic intensity of a contact detected by the contact intensity sensor; and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the intensity data, whether the characteristic intensity of the contact detected by the contact intensity sensor is above a threshold intensity. 6. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to determining that the user input is the cover gesture over the touch-sensitive display, determine a termination time that the electronic device will exit the DND mode. 7. The non-transitory computer readable storage medium of claim 6, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to a current time being the termination time, cause the electronic device to exit the DND mode. 8. 
The non-transitory computer readable storage medium of claim 6, wherein the termination time is determined based on contextual data. 9. The non-transitory computer readable storage medium of claim 8, wherein to determine the termination time that the electronic device will exit the DND mode comprises: determine whether there is sufficient contextual data to generate a custom termination time; in response to determining that there is sufficient contextual data to generate the custom termination time: determine the custom termination time based on the contextual data; and set the termination time to be equal to the custom termination time; and in response to determining that there is not sufficient contextual data to generate the custom termination time, set the termination time to be equal to a default value. 10. The non-transitory computer readable storage medium of claim 9, wherein to determine the custom termination time based on the contextual data comprises: determine an event based on the contextual data, the event having an end time of the event; and set the custom termination time to be equal to the end time of the event. 11. The non-transitory computer readable storage medium of claim 6, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: generate a notification of a predetermined length of time prior to the termination time. 12. The non-transitory computer readable storage medium of claim 11, wherein the notification comprises an indication of a length of time until the termination time. 13. 
The non-transitory computer readable storage medium of claim 6, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: while in the DND mode: receive new data representing a new user input; determine, based on the new data representing the new user input, whether the new user input is a second cover gesture over the touch-sensitive display; and in accordance with a determination that the new user input is the second cover gesture over the touch-sensitive display, determine a new termination time that the electronic device will exit the DND mode. 14. The non-transitory computer readable storage medium of claim 13, wherein the new termination time is one hour after a time that it was determined that the new user input is the second cover gesture over the touch-sensitive display. 15. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to determining that the user input is the cover gesture over the touch-sensitive display, determine a length of time that the electronic device will be in the DND mode. 16. The non-transitory computer readable storage medium of claim 15, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to the length of time expiring, cause the electronic device to exit the DND mode. 17. The non-transitory computer readable storage medium of claim 15, wherein the length of time is determined based on contextual data. 18. 
The non-transitory computer readable storage medium of claim 17, wherein to determine the length of time that the electronic device will be in the DND mode comprises: determine whether there is sufficient contextual data to generate a custom length of time; in response to determining that there is sufficient contextual data to generate the custom length of time: determine the custom length of time based on the contextual data; and set the length of time to be equal to the custom length of time; and in response to determining that there is not sufficient contextual data to generate the custom length of time, set the length of time to be equal to a default value. 19. The non-transitory computer readable storage medium of claim 18, wherein to determine the custom length of time based on the contextual data comprises: determine an event based on the contextual data, the event having an end time of the event; determine a difference between a current time and the end time of the event; and set the custom length of time to be equal to the difference between the current time and the end time of the event. 20. The non-transitory computer readable storage medium of claim 15, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: generate a notification of a predetermined length of time prior to expiration of the length of time that the electronic device will be in the DND mode. 21. The non-transitory computer readable storage medium of claim 20, wherein the notification comprises an indication of a length of time until the expiration of the length of time that the electronic device will be in the DND mode. 22. 
The non-transitory computer readable storage medium of claim 15, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: while in the DND mode: receive new data representing a new user input; determine, based on the new data representing the new user input, whether the new user input is a second cover gesture over the touch-sensitive display; and in accordance with a determination that the new user input is the second cover gesture over the touch-sensitive display, determine a new length of time that the electronic device will be in the DND mode. 23. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: prevent all notifications from being presented while the electronic device is in the DND mode. 24. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: prevent a subset of all notifications from being presented while the electronic device is in the DND mode. 25. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: cease to display incoming electronic messages on the touch-sensitive display while the electronic device is in the DND mode. 26. 
The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: cease to produce haptic outputs in response to incoming electronic messages while the electronic device is in the DND mode. 27. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: cease to produce audible outputs in response to incoming electronic messages while the electronic device is in the DND mode. 28. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: display, on the touch-sensitive display, a DND indicator while the electronic device is in the DND mode. 29. A computer-implemented method comprising: at an electronic device comprising a touch-sensitive display: receiving data representing a user input; determining, based on the data representing the user input, whether the user input is a cover gesture over the touch-sensitive display; and in accordance with a determination that the user input is the cover gesture over the touch-sensitive display, causing the device to enter a do not disturb (DND) mode. 30. 
An electronic device, comprising: a touch-sensitive display; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving data representing a user input; determining, based on the data representing the user input, whether the user input is a cover gesture over the touch-sensitive display; and in accordance with a determination that the user input is the cover gesture over the touch-sensitive display, causing the device to enter a do not disturb (DND) mode.
The present disclosure relates to systems and processes for limiting notifications on an electronic device. In one example process, data representing a user input can be received by an electronic device. The data representing the user input can include touch data from the touch-sensitive device, ambient light data from an ambient light sensor, intensity data from a contact intensity sensor, and/or motion data from one or more motion sensors. Based on the data, it can be determined whether the user input is a cover gesture over a touch-sensitive display of the electronic device. In response to determining that the user input is a cover gesture over the touch-sensitive display, the electronic device can be put into a DND mode for a predetermined amount of time. While in the DND mode, the electronic device can cease to output some or all notifications.1. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device comprising a touch-sensitive display, cause the device to: receive data representing a user input; determine, based on the data representing the user input, whether the user input is a cover gesture over the touch-sensitive display; and in accordance with a determination that the user input is the cover gesture over the touch-sensitive display, cause the device to enter a do not disturb (DND) mode. 2. The non-transitory computer readable storage medium of claim 1, wherein the data representing the user input comprises touch data output by the touch-sensitive display, and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the touch data, whether a touch has been detected at a threshold amount of the touch-sensitive display. 3. 
The non-transitory computer readable storage medium of claim 1, wherein: the electronic device further comprises an ambient light sensor; the data representing the user input comprises ambient light data representing an amount of light received by the ambient light sensor; and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the ambient light data, whether the amount of light received by the ambient light sensor is below a threshold light value. 4. The non-transitory computer readable storage medium of claim 1, wherein: the electronic device further comprises a motion sensor; the data representing the user input comprises motion data representing motion of the electronic device detected by the motion sensor; and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the motion data, whether the motion of the electronic device is above a threshold amount of motion and whether the motion occurred within a predetermined length of time after a notification event. 5. The non-transitory computer readable storage medium of claim 1, wherein: the electronic device further comprises a contact intensity sensor; the data representing the user input comprises intensity data representing a characteristic intensity of a contact detected by the contact intensity sensor; and wherein to determine whether the user input is the cover gesture over the touch-sensitive display comprises: determine, based on the intensity data, whether the characteristic intensity of the contact detected by the contact intensity sensor is above a threshold intensity. 6. 
The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to determining that the user input is the cover gesture over the touch-sensitive display, determine a termination time that the electronic device will exit the DND mode. 7. The non-transitory computer readable storage medium of claim 6, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to a current time being the termination time, cause the electronic device to exit the DND mode. 8. The non-transitory computer readable storage medium of claim 6, wherein the termination time is determined based on contextual data. 9. The non-transitory computer readable storage medium of claim 8, wherein to determine the termination time that the electronic device will exit the DND mode comprises: determine whether there is sufficient contextual data to generate a custom termination time; in response to determining that there is sufficient contextual data to generate the custom termination time: determine the custom termination time based on the contextual data; and set the termination time to be equal to the custom termination time; and in response to determining that there is not sufficient contextual data to generate the custom termination time, set the termination time to be equal to a default value. 10. The non-transitory computer readable storage medium of claim 9, wherein to determine the custom termination time based on the contextual data comprises: determine an event based on the contextual data, the event having an end time of the event; and set the custom termination time to be equal to the end time of the event. 11. 
The non-transitory computer readable storage medium of claim 6, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: generate a notification of a predetermined length of time prior to the termination time. 12. The non-transitory computer readable storage medium of claim 11, wherein the notification comprises an indication of a length of time until the termination time. 13. The non-transitory computer readable storage medium of claim 6, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: while in the DND mode: receive new data representing a new user input; determine, based on the new data representing the new user input, whether the new user input is a second cover gesture over the touch-sensitive display; and in accordance with a determination that the new user input is the second cover gesture over the touch-sensitive display, determine a new termination time that the electronic device will exit the DND mode. 14. The non-transitory computer readable storage medium of claim 13, wherein the new termination time is one hour after a time that it was determined that the new user input is the second cover gesture over the touch-sensitive display. 15. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to determining that the user input is the cover gesture over the touch-sensitive display, determine a length of time that the electronic device will be in the DND mode. 16. 
The non-transitory computer readable storage medium of claim 15, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: in response to the length of time expiring, cause the electronic device to exit the DND mode. 17. The non-transitory computer readable storage medium of claim 15, wherein the length of time is determined based on contextual data. 18. The non-transitory computer readable storage medium of claim 17, wherein to determine the length of time that the electronic device will be in the DND mode comprises: determine whether there is sufficient contextual data to generate a custom length of time; in response to determining that there is sufficient contextual data to generate the custom length of time: determine the custom length of time based on the contextual data; and set the length of time to be equal to the custom length of time; and in response to determining that there is not sufficient contextual data to generate the custom length of time, set the length of time to be equal to a default value. 19. The non-transitory computer readable storage medium of claim 18, wherein to determine the custom length of time based on the contextual data comprises: determine an event based on the contextual data, the event having an end time of the event; determine a difference between a current time and the end time of the event; and set the custom length of time to be equal to the difference between the current time and the end time of the event. 20. 
The non-transitory computer readable storage medium of claim 15, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: generate a notification of a predetermined length of time prior to expiration of the length of time that the electronic device will be in the DND mode. 21. The non-transitory computer readable storage medium of claim 20, wherein the notification comprises an indication of a length of time until the expiration of the length of time that the electronic device will be in the DND mode. 22. The non-transitory computer readable storage medium of claim 15, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: while in the DND mode: receive new data representing a new user input; determine, based on the new data representing the new user input, whether the new user input is a second cover gesture over the touch-sensitive display; and in accordance with a determination that the new user input is the second cover gesture over the touch-sensitive display, determine a new length of time that the electronic device will be in the DND mode. 23. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: prevent all notifications from being presented while the electronic device is in the DND mode. 24. 
The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: prevent a subset of all notifications from being presented while the electronic device is in the DND mode. 25. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: cease to display incoming electronic messages on the touch-sensitive display while the electronic device is in the DND mode. 26. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: cease to produce haptic outputs in response to incoming electronic messages while the electronic device is in the DND mode. 27. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: cease to produce audible outputs in response to incoming electronic messages while the electronic device is in the DND mode. 28. The non-transitory computer readable storage medium of claim 1, wherein the non-transitory computer readable storage medium further comprises instructions, which when executed by the one or more processors of the electronic device, cause the device to: display, on the touch-sensitive display, a DND indicator while the electronic device is in the DND mode. 29. 
A computer-implemented method comprising: at an electronic device comprising a touch-sensitive display: receiving data representing a user input; determining, based on the data representing the user input, whether the user input is a cover gesture over the touch-sensitive display; and in accordance with a determination that the user input is the cover gesture over the touch-sensitive display, causing the device to enter a do not disturb (DND) mode. 30. An electronic device, comprising: a touch-sensitive display; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving data representing a user input; determining, based on the data representing the user input, whether the user input is a cover gesture over the touch-sensitive display; and in accordance with a determination that the user input is the cover gesture over the touch-sensitive display, causing the device to enter a do not disturb (DND) mode.
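The DND record above detects a "cover gesture" by combining sensor signals (claims 2-3: touch over a threshold amount of the display, ambient light below a threshold) and then picks a termination time from contextual data or a default (claims 9-10). The sketch below illustrates that decision logic under stated assumptions: the threshold constants, field names, and the one-hour default are hypothetical values chosen for the example, since the claims only require unspecified thresholds.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the claims only require "a threshold amount"
# of the display and "a threshold light value", without fixing numbers.
TOUCH_COVERAGE_THRESHOLD = 0.75  # fraction of display reporting touch
AMBIENT_LIGHT_THRESHOLD = 5.0    # lux


@dataclass
class InputSample:
    touch_coverage: float  # fraction of the display covered by a touch
    ambient_light: float   # lux reported by the ambient light sensor


def is_cover_gesture(sample: InputSample) -> bool:
    """Claims 2-3: infer a cover gesture when a touch spans at least a
    threshold amount of the display AND ambient light drops below a
    threshold (a covering palm blocks the light sensor)."""
    return (sample.touch_coverage >= TOUCH_COVERAGE_THRESHOLD
            and sample.ambient_light < AMBIENT_LIGHT_THRESHOLD)


def dnd_termination_time(now: float, event_end: float = None,
                         default_minutes: int = 60) -> float:
    """Claims 9-10: when contextual data supplies a known event end time,
    use it as the custom termination time; otherwise fall back to a
    default (times here are epoch seconds, an assumption)."""
    if event_end is not None and event_end > now:
        return event_end
    return now + default_minutes * 60
```

A covering palm satisfies both tests at once, which is why combining modalities (claims 2-5 list touch, light, motion, and contact intensity) reduces false positives from, say, a pocketed device that darkens the light sensor without a matching touch pattern.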
2,600
9,894
9,894
15,137,021
2,631
A communication device is described comprising a first antenna, a second antenna and a third antenna; a first transceiver configured to communicate using at least the first antenna; a second transceiver configured to communicate using at least the second antenna; and a controller configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a selection criterion and configured to control the first transceiver to communicate using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to communicate using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver.
1-20. (canceled) 21. A communication device comprising: a first antenna, a second antenna and a third antenna; a first transceiver configured to communicate according to a first radio access technology using at least the first antenna; wherein the first radio access technology is selected from the group consisting of long term evolution (LTE), wireless local area network (WLAN), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), Bluetooth, global positioning system (GPS), and combinations thereof; a second transceiver configured to communicate according to a second radio access technology different from the first radio access technology using at least the second antenna; wherein the second radio access technology is selected from the group consisting of long term evolution (LTE), wireless local area network (WLAN), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), Bluetooth, global positioning system (GPS), and combinations thereof; and wherein the first radio access technology is different from the second radio access technology; and a controller configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a fake MIMO ranking and configured to: control the first transceiver to communicate with a base station according to the first radio access technology using the first antenna; control the second transceiver to communicate with a second communication device according to the second radio access technology using the second antenna; and control the first transceiver to receive from the base station using the third antenna and the second transceiver to transmit to the second communication device using the third antenna to operate the communication device as a tethering access point between the base station and the second communication device. 22. 
The communication device of claim 21, wherein the first transceiver includes a first baseband circuit and the second transceiver includes a second baseband circuit. 23. The communication device of claim 21, wherein the controller is configured to control the first transceiver to communicate using the first antenna and the third antenna and the second transceiver to communicate simultaneously using the second antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to communicate using the second antenna and the third antenna and the first transceiver to communicate simultaneously using the first antenna if the controller has determined that the third antenna is to be used by the second transceiver. 24. The communication device of claim 21, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a quality requirement of the communication of the first transceiver and a quality requirement of the communication of the second transceiver. 25. The communication device of claim 24, wherein the controller is configured to determine that the third antenna is to be used by the first transceiver if the quality requirement of the communication of the first transceiver is higher than the quality requirement of the communication of the second transceiver and to determine that the third antenna is to be used by the second transceiver if the quality requirement of the communication of the second transceiver is higher than the quality requirement of the communication of the first transceiver. 26. The communication device of claim 24, wherein the quality requirement is a throughput requirement or a robustness requirement or a combination of both. 27. 
The communication device of claim 21, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a priority of the communication of the first transceiver and a priority of the communication of the second transceiver. 28. The communication device of claim 21, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on radio conditions of the communication of the first transceiver and based on radio conditions of the communication of the second transceiver. 29. The communication device of claim 28, wherein the controller is configured to determine that the third antenna is to be used by the first transceiver if the radio conditions of the communication of the first transceiver are worse than the radio conditions of the communication of the second transceiver and to determine that the third antenna is to be used by the second transceiver if the radio conditions of the communication of the second transceiver are worse than the radio conditions of the communication of the first transceiver. 30. The communication device of claim 21, wherein the controller is configured to control the first transceiver to perform a Multiple Input Multiple Output (MIMO) communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to perform a MIMO communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver. 31. The communication device of claim 21, wherein the communication device is a communication terminal. 32. The communication device of claim 21, wherein the communication device is a subscriber terminal of a mobile cellular radio communication system. 33. 
The communication device of claim 21, wherein the second transceiver is configured to communicate with an access point of a wireless local area network. 34. The communication device of claim 21, wherein the controller is configured to control the first transceiver to perform downlink communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to perform downlink communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver. 35. The communication device of claim 21, wherein the controller is configured to control the first transceiver to perform downlink communication using the first antenna and the third antenna and the second transceiver to simultaneously perform downlink communication using the second antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to perform downlink communication using the second antenna and the third antenna and the first transceiver to simultaneously perform downlink communication using the first antenna if the controller has determined that the third antenna is to be used by the second transceiver. 36. 
The communication device of claim 21, wherein the controller is configured: to determine whether the third antenna is to be used by the first transceiver or the second transceiver for downlink communication; to control the first transceiver to perform downlink communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver for downlink communication; to control the second transceiver to perform downlink communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver for downlink communication; to determine whether the third antenna is to be used by the first transceiver or the second transceiver for uplink communication; to control the first transceiver to perform uplink communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver for uplink communication; and to control the second transceiver to perform uplink communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver for uplink communication. 37. The communication device of claim 21, wherein the first transceiver is configured to communicate with a first device and the second transceiver is configured to communicate with a second device different from the first device. 38. 
A method for performing radio communication comprising: determining whether a third antenna of a communication device comprising a first antenna, a second antenna and the third antenna is to be used by a first transceiver or a second transceiver of the communication device based on a fake MIMO ranking; controlling the first transceiver to communicate with a base station according to a first radio access technology using the first antenna; wherein the first radio access technology is selected from the group consisting of long term evolution (LTE), wireless local area network (WLAN), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), Bluetooth, global positioning system (GPS), and combinations thereof; controlling the second transceiver to communicate with a second communication device according to a second radio access technology different from the first radio access technology using the second antenna; wherein the second radio access technology is selected from the group consisting of long term evolution (LTE), wireless local area network (WLAN), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), Bluetooth, global positioning system (GPS), and combinations thereof; and wherein the first radio access technology is different from the second radio access technology; and controlling the first transceiver to receive from the base station using the third antenna and the second transceiver to transmit to the second communication device using the third antenna to operate the communication device as a tethering point between the base station and the second communication device. 39.
A non-transitory computer readable medium having recorded instructions thereon which, when executed by a processor, make the processor perform a method for performing radio communication, the method comprising: determining whether a third antenna of a communication device comprising a first antenna, a second antenna and the third antenna is to be used by a first transceiver or a second transceiver of the communication device based on a fake MIMO ranking; controlling the first transceiver to communicate with a base station according to a first radio access technology using the first antenna; wherein the first radio access technology is selected from the group consisting of long term evolution (LTE), wireless local area network (WLAN), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), Bluetooth, global positioning system (GPS), and combinations thereof; controlling the second transceiver to communicate with a second communication device according to a second radio access technology different from the first radio access technology using the second antenna; wherein the second radio access technology is selected from the group consisting of long term evolution (LTE), wireless local area network (WLAN), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), Bluetooth, global positioning system (GPS), and combinations thereof; and wherein the first radio access technology is different from the second radio access technology; and controlling the first transceiver to receive from the base station using the third antenna and the second transceiver to transmit to the second communication device using the third antenna to operate the communication device as a tethering point between the base station and the second communication device.
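Claims 21, 38 and 39 above describe an asymmetric use of the shared antenna during tethering: the first transceiver receives from the base station on the third antenna while the second transceiver transmits the forwarded payloads to the tethered device on that same antenna. A minimal data-path sketch under that reading; arbitration, buffering and addressing are omitted and all identifiers are hypothetical:

```python
def tether_shared_antenna(downlink_payloads):
    """Model only the data path of the claimed tethering split:
    transceiver 1 receives each payload from the base station on the
    shared antenna, and transceiver 2 transmits it onward to the
    tethered device on the same antenna."""
    events = []
    for payload in downlink_payloads:
        events.append(("t1", "rx", "base_station", payload))
        events.append(("t2", "tx", "tethered_device", payload))
    return events

for ev in tether_shared_antenna(["pkt-a", "pkt-b"]):
    print(ev)
```

The point of the split is that neither radio needs a second dedicated antenna of its own: the receive leg and the transmit leg of the tethering relay share the third antenna between the two transceivers.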
2,600
9,895
9,895
14,062,982
2,631
A communication device is described comprising a first antenna, a second antenna and a third antenna; a first transceiver configured to communicate using at least the first antenna; a second transceiver configured to communicate using at least the second antenna; and a controller configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a selection criterion and configured to control the first transceiver to communicate using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to communicate using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver.
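The claim set that follows ties the shared antenna to MIMO operation (claim 11 below): winning the third antenna raises a transceiver's antenna count from one to two. A standard result, not claim language, is that the number of spatial multiplexing streams is bounded by the smaller antenna count at either end of the link:

```python
def spatial_streams(device_antennas: int, peer_antennas: int) -> int:
    """Upper bound on MIMO spatial streams: the minimum of the antenna
    counts at the two link ends (textbook MIMO result, not from the
    patent)."""
    return min(device_antennas, peer_antennas)

# With only its dedicated antenna a transceiver is limited to 1 stream;
# gaining the shared antenna enables 2x2 MIMO against a 2-antenna peer.
print(spatial_streams(1, 2))  # 1
print(spatial_streams(2, 2))  # 2
```

This is why the controller's grant of the third antenna matters: it is the difference between single-stream and 2x2 MIMO operation for whichever transceiver wins it.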
1. A communication device comprising: a first antenna, a second antenna and a third antenna; a first transceiver configured to communicate using at least the first antenna; a second transceiver configured to communicate using at least the second antenna; and a controller configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a selection criterion and configured to control the first transceiver to communicate using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to communicate using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver. 2. The communication device of claim 1, wherein the first transceiver is configured to communicate according to a first radio access technology and the second transceiver is configured to communicate according to a second radio access technology different from the first radio access technology. 3. The communication device of claim 1, wherein the first transceiver includes a first baseband circuit and the second transceiver includes a second baseband circuit. 4. The communication device of claim 1, wherein the controller is configured to control the first transceiver to communicate using the first antenna and the third antenna and the second transceiver to communicate simultaneously using the second antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to communicate using the second antenna and the third antenna and the first transceiver to communicate simultaneously using the first antenna if the controller has determined that the third antenna is to be used by the second transceiver. 5. 
The communication device of claim 1, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a quality requirement of the communication of the first transceiver and a quality requirement of the communication of the second transceiver. 6. The communication device of claim 5, wherein the controller is configured to determine that the third antenna is to be used by the first transceiver if the quality requirement of the communication of the first transceiver is higher than the quality requirement of the communication of the second transceiver and to determine that the third antenna is to be used by the second transceiver if the quality requirement of the communication of the second transceiver is higher than the quality requirement of the communication of the first transceiver. 7. The communication device of claim 5, wherein the quality requirement is a throughput requirement or a robustness requirement or a combination of both. 8. The communication device of claim 1, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on a priority of the communication of the first transceiver and a priority of the communication of the second transceiver. 9. The communication device of claim 1, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver based on radio conditions of the communication of the first transceiver and based on radio conditions of the communication of the second transceiver. 10. 
The communication device of claim 9, wherein the controller is configured to determine that the third antenna is to be used by the first transceiver if the radio conditions of the communication of the first transceiver are worse than the radio conditions of the communication of the second transceiver and to determine that the third antenna is to be used by the second transceiver if the radio conditions of the communication of the second transceiver are worse than the radio conditions of the communication of the first transceiver. 11. The communication device of claim 1, wherein the controller is configured to control the first transceiver to perform a MIMO communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to perform a MIMO communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver. 12. The communication device of claim 1, wherein the communication device is a communication terminal. 13. The communication device of claim 1, wherein the communication device is a subscriber terminal of a mobile cellular radio communication system and the first transceiver is configured to communicate with a base station of the mobile cellular radio communication system. 14. The communication device of claim 1, wherein the second transceiver is configured to communicate with an access point of a wireless local area network. 15. 
The communication device of claim 1, wherein the controller is configured to control the first transceiver to perform downlink communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to perform downlink communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver. 16. The communication device of claim 1, wherein the controller is configured to control the first transceiver to perform downlink communication using the first antenna and the third antenna and the second transceiver to simultaneously perform downlink communication using the second antenna if the controller has determined that the third antenna is to be used by the first transceiver and to control the second transceiver to perform downlink communication using the second antenna and the third antenna and the first transceiver to simultaneously perform downlink communication using the first antenna if the controller has determined that the third antenna is to be used by the second transceiver. 17. 
The communication device of claim 1, wherein the controller is configured to determine whether the third antenna is to be used by the first transceiver or the second transceiver for downlink communication; to control the first transceiver to perform downlink communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver for downlink communication; to control the second transceiver to perform downlink communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver for downlink communication; to determine whether the third antenna is to be used by the first transceiver or the second transceiver for uplink communication; to control the first transceiver to perform uplink communication using the first antenna and the third antenna if the controller has determined that the third antenna is to be used by the first transceiver for uplink communication; and to control the second transceiver to perform uplink communication using the second antenna and the third antenna if the controller has determined that the third antenna is to be used by the second transceiver for uplink communication. 18. The communication device of claim 1, wherein the first transceiver is configured to communicate with a first device and the second transceiver is configured to communicate with a second device different from the first device. 19. 
A method for performing radio communication comprising determining whether a third antenna of a communication device comprising a first antenna, a second antenna and the third antenna is to be used by a first transceiver or a second transceiver of the communication device based on a selection criterion; controlling the first transceiver to communicate using the first antenna and the third antenna if the third antenna is to be used by the first transceiver; and controlling the second transceiver to communicate using the second antenna and the third antenna if the third antenna is to be used by the second transceiver. 20. A computer readable medium having recorded instructions thereon which, when executed by a processor, make the processor perform a method for performing radio communication according to claim 19.
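Claim 17 in the set above decides the shared antenna's owner separately for downlink and uplink, so the third antenna can serve one transceiver in each direction at different times. A sketch of that per-direction decision using a hypothetical numeric need score per transceiver:

```python
def assign_per_direction(dl_score, ul_score):
    """Claim-17-style split: choose the shared antenna's owner
    independently for downlink and uplink. Inputs map a transceiver
    name to an illustrative need score (higher = more need)."""
    return {
        "downlink": max(dl_score, key=dl_score.get),
        "uplink": max(ul_score, key=ul_score.get),
    }

owners = assign_per_direction(
    dl_score={"t1": 0.9, "t2": 0.3},  # t1 downloading heavily
    ul_score={"t1": 0.1, "t2": 0.8},  # t2 uploading heavily
)
print(owners)  # {'downlink': 't1', 'uplink': 't2'}
```

Splitting the decision by direction lets a download-heavy link and an upload-heavy link each get the extra antenna where it helps them most.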
2,600
9,896
9,896
15,380,116
2,683
Systems and methods for responding to an Electronic Article Surveillance (“EAS”) alarm's issuance. The methods involve: receiving, by a mobile device, a short range communication signal from a fixed device located in proximity to EAS equipment issuing the EAS alarm; automatically transitioning an operational mode of the mobile device from a first operational mode in which alarm response functions are disabled to a second operational mode in which alarm response functions are enabled, in response to the short range communication signal's reception; receiving, by the mobile device, a user input for inputting a reason code specifying a reason for the EAS alarm's issuance; and communicating the reason code from the mobile device to an external device for causing a deactivation of the EAS alarm's issuance.
1. A method for responding to an Electronic Article Surveillance (“EAS”) alarm's issuance, comprising: receiving, by a mobile device, a short range communication signal from a fixed device located in proximity to EAS equipment issuing the EAS alarm; in response to the short range communication signal's reception, automatically transitioning an operational mode of the mobile device from a first operational mode in which alarm response functions are disabled to a second operational mode in which alarm response functions are enabled; receiving, by the mobile device, a user input for inputting a reason code specifying a reason for the EAS alarm's issuance; and communicating the reason code from the mobile device to an external device for causing a deactivation of the EAS alarm's issuance. 2. The method according to claim 1, wherein the EAS alarm issuance occurs as a result of a detection of an active EAS security tag's presence in a surveillance zone. 3. The method according to claim 1, wherein the reason code comprises a recovery code, a failed-to-deactivate code, an incoming item code, a system test code, an unattended code, an unexplained code, a tag-in area code, a runaway code, or a stock movement code. 4. The method according to claim 1, further comprising automatically prompting a user of the mobile device to indicate at least one detail associated with human activities associated with the EAS alarm's issuance or automatically capturing an image or video of a surrounding environment, in response to the reason code. 5. The method according to claim 1, further comprising automatically initiating voice or sound detection and recognition operations of the mobile device, in response to the reason code. 6. 
The method according to claim 5, wherein the voice or sound detection and recognition operations of the mobile device are performed to determine if a certain word was spoken or a certain sound was made in proximity to the mobile device that should trigger a remedial action. 7. The method according to claim 6, wherein the remedial action comprises notifying security personnel or notifying emergency personnel. 8. The method according to claim 1, further comprising communicating at least one of the following items from the mobile device to the external device along with the reason code: a unique identifier of the fixed device; a unique identifier of the mobile device; a unique identifier of a user of the mobile device; at least one scanned product barcode; a scanned receipt barcode; and a timestamp. 9. The method according to claim 1, wherein the fixed device comprises a beacon. 10. The method according to claim 1, further comprising automatically transitioning the operational mode of the mobile device from the second operational mode to the first operational mode when the mobile device moves out of range of the fixed device. 11. 
A mobile device, comprising: a processor; and a computer-readable storage medium comprising programming instructions that are configured to cause the processor to implement a method for responding to an Electronic Article Surveillance (“EAS”) alarm's issuance, wherein the programming instructions comprise instructions to: receive a short range communication signal from a fixed device located in proximity to EAS equipment issuing the EAS alarm; automatically transition an operational mode of the mobile device from a first operational mode in which alarm response functions are disabled to a second operational mode in which alarm response functions are enabled, in response to the short range communication signal's reception; receive a user input for inputting a reason code specifying a reason for the EAS alarm's issuance; and communicate the reason code to an external device for causing a deactivation of the EAS alarm's issuance. 12. The mobile device according to claim 11, wherein the EAS alarm issuance occurs as a result of a detection of an active EAS security tag's presence in a surveillance zone. 13. The mobile device according to claim 11, wherein the reason code comprises a recovery code, a failed-to-deactivate code, an incoming item code, a system test code, an unattended code, an unexplained code, a tag-in area code, a runaway code, or a stock movement code. 14. The mobile device according to claim 11, wherein the programming instructions further comprise instructions to automatically prompt a user of the mobile device to indicate at least one detail associated with human activities associated with the EAS alarm's issuance or automatically capture an image or video of a surrounding environment, in response to the reason code. 15. 
The mobile device according to claim 11, wherein the programming instructions further comprise instructions to automatically initiate voice or sound detection and recognition operations of the mobile device, in response to the reason code. 16. The mobile device according to claim 15, wherein the voice or sound detection and recognition operations of the mobile device are performed to determine if a certain word was spoken or a certain sound was made in proximity to the mobile device that should trigger a remedial action. 17. The mobile device according to claim 16, wherein the remedial action comprises notifying security personnel or notifying emergency personnel. 18. The mobile device according to claim 11, wherein at least one of the following items is communicated to the external device along with the reason code: a unique identifier of the fixed device; a unique identifier of the mobile device; a unique identifier of a user of the mobile device; at least one scanned product barcode; a scanned receipt barcode; and a timestamp. 19. The mobile device according to claim 11, wherein the fixed device comprises a beacon. 20. The mobile device according to claim 11, wherein the programming instructions further comprise instructions to automatically transition the operational mode of the mobile device from the second operational mode to the first operational mode when the mobile device moves out of range of the fixed device.
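The mode transitions in claims 1 and 10 (enable alarm-response functions when the beacon signal is received, disable them when the device moves out of range) and the reason-code submission of claim 1 can be sketched as a small state machine. A minimal Python sketch; the class, mode labels, code strings, and payload shape are all illustrative assumptions, not the patented implementation:

```python
DISABLED, ENABLED = "alarm_response_disabled", "alarm_response_enabled"

# Claim 3 / claim 13 reason codes (labels are illustrative spellings).
VALID_REASON_CODES = {
    "recovery", "failed_to_deactivate", "incoming_item", "system_test",
    "unattended", "unexplained", "tag_in_area", "runaway", "stock_movement",
}


class MobileDevice:
    """Minimal model of the claimed operational-mode transitions."""

    def __init__(self):
        self.mode = DISABLED  # first operational mode (claim 1)

    def on_beacon_signal(self, in_range):
        # Claims 1 and 10: enable alarm-response functions while in range
        # of the fixed device; disable them again when out of range.
        self.mode = ENABLED if in_range else DISABLED

    def submit_reason_code(self, code):
        """Return the payload sent to the external device, or None."""
        if self.mode != ENABLED or code not in VALID_REASON_CODES:
            return None
        # Claim 8: the reason code may be accompanied by identifiers,
        # scanned barcodes, and a timestamp (omitted here for brevity).
        return {"reason_code": code, "action": "deactivate_alarm"}
```

In this sketch, a reason code is only accepted while the device is in range of the fixed beacon, which mirrors why the claims gate the alarm-response functions on the short-range signal.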
2,600
9,897
9,897
15,228,307
2,696
A system and method for use in conjunction with a device having image capture hardware allow a user to selectively enlarge a specific area in a view prior to or during image capture, or to selectively enlarge a portion of a captured image. The function enlarges a selected area of a view while leaving the remainder of the view unchanged. Similarly, the user is able to then digitally capture the previewed image, including the enlargement. In an embodiment, launching the device camera in a preview mode enables an enlargeable area, if the user chooses to use it. The user can then move the phone or the area and point it to the object or region that he or she wants to enlarge. The amplifying factor may be adjustable and the enlarged area may be highlighted or framed in the preview or in the captured image data.
1. An image capture device comprising: camera hardware to generate image data, the camera hardware including at least one image capture element, at least one lens and at least one aperture; a device display linked to the camera hardware to display the generated image data, creating a displayed image; a camera memory medium configured to record data representing an image corresponding to the generated image data; and a non-transitory device memory medium having stored therein instructions for instantiating a camera application, the camera application being configured to receive a user request to enlarge a selected portion of the displayed image, and in response to modify the recorded image data such that the image corresponding to the modified recorded image contains an enlarged view of the selected portion, and write the modified recorded image data to the device display. 2. The image capture device in accordance with claim 1, wherein the camera application is configured to write the modified recorded image data to the device memory in response to receiving an instruction to capture the image data. 3. The image capture device in accordance with claim 1, wherein the camera application is further configured to receive the user request to enlarge a selected portion of the image via user manipulation of an indicator shown on the device display. 4. The image capture device in accordance with claim 1, wherein the image capture element is one of a CMOS (complementary metal-oxide semiconductor) device and a CCD (charge-coupled device). 5. The image capture device in accordance with claim 1, wherein the camera memory medium is separate from the device memory medium. 6. The image capture device in accordance with claim 1, wherein the camera memory medium is a volatile memory medium. 7. The image capture device in accordance with claim 1, wherein the camera memory medium and the device memory medium are portions of a shared memory medium. 8. 
A portable cellular communication device comprising: image capture hardware; a display; a first memory medium linked to the image capture hardware for storing a preview image gathered by the image capture hardware, and from which the display is written; a second memory medium for storing a captured image based on the preview image; and a camera application configured to receive user input from the display relative to the displayed preview image and, in response to said user input, to enlarge a user-selected portion of the preview image, producing a modified image. 9. The portable cellular communication device in accordance with claim 8, wherein the camera application is configured to write data corresponding to the modified image to the second memory medium in response to receiving an instruction to capture image data. 10. The portable cellular communication device in accordance with claim 8, wherein the user input is received via user manipulation of an indicator shown on the display. 11. The portable cellular communication device in accordance with claim 8, wherein the first memory medium is separate from the second memory medium. 12. 
A method of capturing image data comprising: receiving preview image data; displaying a preview image corresponding to the preview image data; displaying a user-interactive indicator visually overlaid on the displayed preview image; receiving a user request via movement of the user-interactive indicator to enlarge a portion of the preview image; in response to receiving the user request, modifying the preview image data such that the corresponding modified preview image contains an enlarged portion in accordance with the user request; and displaying the modified preview image. 15. The method in accordance with claim 14, wherein receiving preview image data comprises receiving image data from image capture hardware. 16. The method in accordance with claim 14, wherein the user-interactive indicator includes one or more of a circle and a rectangle. 17. The method in accordance with claim 14, further comprising storing the preview image data in a volatile memory medium, and storing the modified preview image data in the volatile memory medium. 18. The method in accordance with claim 14, further comprising receiving a user request to capture the modified preview image and, in response, writing the modified preview image data to a nonvolatile memory medium. 19. The method in accordance with claim 18, wherein writing the modified preview image data to a nonvolatile memory medium includes altering the modified preview image data to include a border surrounding the enlarged portion. 20. The method in accordance with claim 18, wherein the border surrounding the enlarged portion comprises a blurred line.
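The core operation of these claims — magnify a user-selected region of the preview while leaving the remainder of the view unchanged — can be sketched with nearest-neighbour sampling over a plain 2-D pixel grid. A simplified, self-contained Python sketch (the function name, anchoring choice, and clipping behaviour are assumptions; a real preview would also blend edges and draw the claimed border):

```python
def enlarge_region(image, top, left, height, width, factor):
    """Return a copy of `image` (a list of row lists) in which the selected
    region is magnified in place by an integer `factor`; the rest of the
    view is unchanged, as in the claimed preview behaviour."""
    out = [row[:] for row in image]  # copy: leave the remainder untouched
    new_h, new_w = height * factor, width * factor
    # Anchor the enlarged region at the same top-left corner and clip
    # anything that would fall outside the frame.
    for dy in range(new_h):
        for dx in range(new_w):
            y, x = top + dy, left + dx
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                # Nearest-neighbour: sample the source pixel this
                # output pixel magnifies.
                out[y][x] = image[top + dy // factor][left + dx // factor]
    return out
```

Capturing the modified preview (claim 18) would then amount to writing `out` to nonvolatile storage instead of only redrawing the display.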
2,600
9,898
9,898
15,150,781
2,662
An anti-dazzle imaging camera is provided that includes a photorefractive crystal that is wavelength-agnostic. The photorefractive crystal is configured to receive an optical beam. When the optical beam includes no laser, the photorefractive crystal is configured to pass the optical beam unchanged to an imaging detector. When the optical beam includes a laser, the photorefractive crystal is configured to attenuate the laser to generate a modified optical beam and to pass the modified optical beam to the imaging detector.
1. An anti-dazzle imaging camera, comprising: a photorefractive crystal having a first surface and a second surface, the photorefractive crystal configured (i) to receive an optical beam, (ii) when the optical beam includes no laser radiation, to pass the optical beam unchanged to an imaging detector, and (iii) when the optical beam includes laser radiation, to attenuate the laser radiation to generate a modified optical beam and to pass the modified optical beam to the imaging detector, wherein, to attenuate the laser radiation, the photorefractive crystal is configured to (i) reflect a portion of the laser radiation off the first surface of the photorefractive crystal back into the photorefractive crystal and (ii) write a grating in the photorefractive crystal due to interference between the portion of the laser radiation reflected off the first surface and further laser radiation entering the second surface of the photorefractive crystal, wherein the photorefractive crystal is wavelength-agnostic. 2. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal comprises a potassium niobate crystal. 3. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal comprises a ternary crystal. 4. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is configured, when the optical beam includes laser radiation and subsequently includes no laser radiation, to pass the optical beam unchanged to the imaging detector when the optical beam subsequently includes no laser radiation. 5. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is configured to attenuate laser radiation across a visible to near-infrared (NIR) spectral band. 6. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is configured to function at f-numbers less than f/10. 7. 
The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is located in a converging optical beam of the anti-dazzle imaging camera in front of the imaging detector. 8. An anti-dazzle imaging camera, comprising: an imaging detector configured to generate an image; and a photorefractive crystal having a first surface and a second surface, the photorefractive crystal configured (i) to receive an optical beam, (ii) when the optical beam includes no laser radiation, to pass the optical beam to the imaging detector, and (iii) when the optical beam includes laser radiation, to attenuate the laser radiation to generate a modified optical beam and to pass the modified optical beam to the imaging detector, wherein, to attenuate the laser radiation, the photorefractive crystal is configured to (i) reflect a portion of the laser radiation off the first surface of the photorefractive crystal back into the photorefractive crystal and (ii) write a grating in the photorefractive crystal due to interference between the portion of the laser radiation reflected off the first surface and further laser radiation entering the second surface of the photorefractive crystal, wherein the photorefractive crystal is wavelength-agnostic, and wherein the imaging detector is configured to generate the image based on the optical beam or the modified optical beam received from the photorefractive crystal. 9. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal comprises a potassium niobate crystal. 10. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is configured, when the optical beam includes laser radiation and subsequently includes no laser radiation, to pass the optical beam unchanged to the imaging detector when the optical beam subsequently includes no laser radiation. 11. 
The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is configured to attenuate laser radiation across a visible to near-infrared (NIR) spectral band. 12. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is configured to function at f-numbers less than f/10. 13. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is located in a converging optical beam of the anti-dazzle imaging camera in front of the imaging detector. 14. A method for attenuating a laser using an anti-dazzle imaging camera, comprising: receiving an optical beam at a wavelength-agnostic photorefractive crystal having a first surface and a second surface; when the optical beam includes no laser radiation, passing the optical beam to an imaging detector; and when the optical beam includes laser radiation, passively attenuating the laser radiation with the photorefractive crystal to generate a modified optical beam and passing the modified optical beam to the imaging detector, wherein passively attenuating the laser radiation comprises (i) reflecting a portion of the laser radiation off the first surface of the photorefractive crystal back into the photorefractive crystal and (ii) writing a grating in the photorefractive crystal due to interference between the portion of the laser radiation reflected off the first surface and further laser radiation entering the second surface of the photorefractive crystal. 15. The method of claim 14, wherein the photorefractive crystal comprises a potassium niobate crystal. 16. The method of claim 14, wherein the grating in the photorefractive crystal is written according to a wavelength of the laser. 17. The method of claim 16, further comprising, when the optical beam includes laser radiation and subsequently includes no laser radiation, dissipating the grating from the photorefractive crystal. 18. 
The method of claim 16, wherein writing the grating in the photorefractive crystal comprises writing the grating in about 50 to about 100 μs. 19. The method of claim 16, wherein the grating comprises a holographic grating. 20. The method of claim 16, wherein the grating comprises a refractive index grating.
An anti-dazzle imaging camera is provided that includes a photorefractive crystal that is wavelength-agnostic. The photorefractive crystal is configured to receive an optical beam. When the optical beam includes no laser, the photorefractive crystal is configured to pass the optical beam unchanged to an imaging detector. When the optical beam includes a laser, the photorefractive crystal is configured to attenuate the laser to generate a modified optical beam and to pass the modified optical beam to the imaging detector.1. An anti-dazzle imaging camera, comprising: a photorefractive crystal having a first surface and a second surface, the photorefractive crystal configured (i) to receive an optical beam, (ii) when the optical beam includes no laser radiation, to pass the optical beam unchanged to an imaging detector, and (iii) when the optical beam includes laser radiation, to attenuate the laser radiation to generate a modified optical beam and to pass the modified optical beam to the imaging detector, wherein, to attenuate the laser radiation, the photorefractive crystal is configured to (i) reflect a portion of the laser radiation off the first surface of the photorefractive crystal back into the photorefractive crystal and (ii) write a grating in the photorefractive crystal due to interference between the portion of the laser radiation reflected off the first surface and further laser radiation entering the second surface of the photorefractive crystal, wherein the photorefractive crystal is wavelength-agnostic. 2. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal comprises a potassium niobate crystal. 3. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal comprises a ternary crystal. 4. 
The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is configured, when the optical beam includes laser radiation and subsequently includes no laser radiation, to pass the optical beam unchanged to the imaging detector when the optical beam subsequently includes no laser radiation. 5. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is configured to attenuate laser radiation across a visible to near-infrared (NIR) spectral band. 6. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is configured to function at f-numbers less than f/10. 7. The anti-dazzle imaging camera of claim 1, wherein the photorefractive crystal is located in a converging optical beam of the anti-dazzle imaging camera in front of the imaging detector. 8. An anti-dazzle imaging camera, comprising: an imaging detector configured to generate an image; and a photorefractive crystal having a first surface and a second surface, the photorefractive crystal configured (i) to receive an optical beam, (ii) when the optical beam includes no laser radiation, to pass the optical beam to the imaging detector, and (iii) when the optical beam includes laser radiation, to attenuate the laser radiation to generate a modified optical beam and to pass the modified optical beam to the imaging detector, wherein, to attenuate the laser radiation, the photorefractive crystal is configured to (i) reflect a portion of the laser radiation off the first surface of the photorefractive crystal back into the photorefractive crystal and (ii) write a grating in the photorefractive crystal due to interference between the portion of the laser radiation reflected off the first surface and further laser radiation entering the second surface of the photorefractive crystal, wherein the photorefractive crystal is wavelength-agnostic, and wherein the imaging detector is configured to generate the image based on the optical beam or the modified optical 
beam received from the photorefractive crystal. 9. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal comprises a potassium niobate crystal. 10. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is configured, when the optical beam includes laser radiation and subsequently includes no laser radiation, to pass the optical beam unchanged to the imaging detector when the optical beam subsequently includes no laser radiation. 11. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is configured to attenuate laser radiation across a visible to near-infrared (NIR) spectral band. 12. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is configured to function at f-numbers less than f/10. 13. The anti-dazzle imaging camera of claim 8, wherein the photorefractive crystal is located in a converging optical beam of the anti-dazzle imaging camera in front of the imaging detector. 14. A method for attenuating a laser using an anti-dazzle imaging camera, comprising: receiving an optical beam at a wavelength-agnostic photorefractive crystal having a first surface and a second surface; when the optical beam includes no laser radiation, passing the optical beam to an imaging detector; and when the optical beam includes laser radiation, passively attenuating the laser radiation with the photorefractive crystal to generate a modified optical beam and passing the modified optical beam to the imaging detector, wherein passively attenuating the laser radiation comprises (i) reflecting a portion of the laser radiation off the first surface of the photorefractive crystal back into the photorefractive crystal and (ii) writing a grating in the photorefractive crystal due to interference between the portion of the laser radiation reflected off the first surface and further laser radiation entering the second surface of the photorefractive crystal. 15. 
The method of claim 14, wherein the photorefractive crystal comprises a potassium niobate crystal. 16. The method of claim 14, wherein the grating in the photorefractive crystal is written according to a wavelength of the laser. 17. The method of claim 16, further comprising, when the optical beam includes laser radiation and subsequently includes no laser radiation, dissipating the grating from the photorefractive crystal. 18. The method of claim 16, wherein writing the grating in the photorefractive crystal comprises writing the grating in about 50 to about 100 μs. 19. The method of claim 16, wherein the grating comprises a holographic grating. 20. The method of claim 16, wherein the grating comprises a refractive index grating.
2,600
9,899
9,899
14,590,988
2,683
An intelligent sensing device including a temperature sensor to detect a temperature proximate the intelligent sensing device, an optical sensor to detect a presence of one or more objects, a controller to receive the temperature and the presence of the one or more objects, to determine if the received temperature exceeds a temperature threshold, and to track a number of the one or more objects, and an indicator to generate an alarm responsive to one of 1) the received temperature exceeding the temperature threshold and 2) the number not matching a number threshold.
1. An intelligent sensing device comprising: a temperature sensor configured to detect a temperature proximate the intelligent sensing device; a motion sensor configured to detect a presence of one or more objects; a controller configured to receive the temperature and the presence of the one or more objects, to determine if the received temperature exceeds a temperature threshold, and to track a number of the one or more objects; and an indicator configured to generate an alarm responsive to one of 1) the received temperature exceeding the temperature threshold and 2) the number not matching a number threshold. 2. An intelligent sensing device of claim 1, further comprising an optical sensor configured to measure light intensity near the intelligent sensing device, and wherein said controller is further configured to activate a network device in response to the measured light intensity exceeding a threshold. 3. An intelligent sensing device of claim 1, further comprising an accelerometer configured to detect a vibration of the intelligent sensing device, and to detect a tampering of the intelligent sensing device. 4. An intelligent sensing device of claim 1, further comprising a wireless transceiver configured to include the intelligent sensing device in a wireless network, and to communicate with a second wireless device in the wireless network after the intelligent sensing device has been included in the wireless network. 5. An intelligent sensing device of claim 4, wherein the second wireless device comprises a network manager configured to provide the wireless network, and to manage the intelligent sensing device wirelessly within a range. 6. An intelligent sensing device of claim 4, wherein the second wireless device comprises a second intelligent sensing device included in the wireless network. 7. 
An intelligent sensing device of claim 1, wherein the indicator comprises a light-emitting diode (LED) configured to emit light of one or more colors responsive to different temperature ranges detected by the temperature sensor. 8. An intelligent sensing device of claim 1, wherein the motion sensor is further configured to facially recognize the one or more objects, and to continually track the recognized one or more objects. 9. A control system for use with a sensing device in a wireless network, the control system comprising: a control device configured to perform a predefined action; an intelligent sensing device comprising: a temperature sensor configured to detect a temperature proximate the intelligent sensing device; a motion sensor configured to detect a presence of one or more objects; a controller configured to receive the temperature and the presence of the one or more objects, to determine if the received temperature exceeds a temperature threshold, and to track a number of the one or more objects; and an indicator configured to generate an indication responsive to one of 1) the received temperature exceeding the temperature threshold and 2) the number not matching a number threshold; a transceiver configured to communicate the indication; and a network manager, being remote from the intelligent sensing device and the control device, wirelessly coupled to the intelligent sensing device and the control device through the wireless network, and configured to receive the indication from the intelligent sensing device, and, in response to having received the indication, to perform the predefined action. 10. A control system of claim 9, and wherein the network manager is further configured to determine if a predefined condition is met, and, in response to the predefined condition having been met, and the indication having been received, to carry out the predefined action. 11. 
A control system of claim 10, and wherein the predefined condition comprises at least one of a time of day, a day of a week, and a lighting condition within a vicinity of the intelligent sensing device. 12. A control system of claim 9, and wherein the control device comprises at least one of a roller blind, a temperature control, a burglary alarm, and a second intelligent sensing device different and remote from the network manager. 13. A control system of claim 9, and wherein the intelligent sensing device further comprises an optical sensor configured to measure light intensity near the intelligent sensing device. 14. A control system of claim 9, and wherein the intelligent sensing device further comprises an accelerometer configured to detect a vibration of the intelligent sensing device, and to detect a tampering of the intelligent sensing device. 15. A control system of claim 9, and wherein the intelligent sensing device further comprises a wireless transceiver configured to include the intelligent sensing device in the wireless network, and to communicate with the control device in the wireless network after the intelligent sensing device has been included in the wireless network. 16. A control system of claim 9, and wherein the indicator comprises a light-emitting diode (LED) configured to emit light of one or more colors responsive to different temperature ranges detected by the temperature sensor. 17. A control system of claim 9, and wherein the motion sensor is further configured to facially recognize the one or more objects, and to continually track the recognized one or more objects. 18. 
A method of controlling a network device operable to perform a function in a network via a) an intelligent sensing device, and being remote from the network device, and b) a network manager operable to communicate with the network device and the intelligent sensing device, the method comprising: detecting at the intelligent sensing device a temperature proximate the intelligent sensing device; detecting at the intelligent sensing device a presence of one or more objects; determining if the temperature exceeds a temperature threshold; tracking a number of the one or more objects; and generating an indication responsive to one of 1) the temperature exceeding the temperature threshold and 2) the number not matching a number threshold. 19. A method of claim 18, and further comprising determining if a predefined condition is met; and, in response to the predefined condition having been met, and the indication having been received, performing the predefined function. 20. A method of claim 18, and wherein the predefined condition comprises at least one of a time of day, a day of a week, and a lighting condition within a vicinity of the intelligent sensing device.
An intelligent sensing device including a temperature sensor to detect a temperature proximate the intelligent sensing device, an optical sensor to detect a presence of one or more objects, a controller to receive the temperature and the presence of the one or more objects, to determine if the received temperature exceeds a temperature threshold, and to track a number of the one or more objects, and an indicator to generate an alarm responsive to one of 1) the received temperature exceeding the temperature threshold and 2) the number not matching a number threshold.1. An intelligent sensing device comprising: a temperature sensor configured to detect a temperature proximate the intelligent sensing device; a motion sensor configured to detect a presence of one or more objects; a controller configured to receive the temperature and the presence of the one or more objects, to determine if the received temperature exceeds a temperature threshold, and to track a number of the one or more objects; and an indicator configured to generate an alarm responsive to one of 1) the received temperature exceeding the temperature threshold and 2) the number not matching a number threshold. 2. An intelligent sensing device of claim 1, further comprising an optical sensor configured to measure light intensity near the intelligent sensing device, and wherein said controller is further configured to activate a network device in response to the measured light intensity exceeding a threshold. 3. An intelligent sensing device of claim 1, further comprising an accelerometer configured to detect a vibration of the intelligent sensing device, and to detect a tampering of the intelligent sensing device. 4. 
An intelligent sensing device of claim 1, further comprising a wireless transceiver configured to include the intelligent sensing device in a wireless network, and to communicate with a second wireless device in the wireless network after the intelligent sensing device has been included in the wireless network. 5. An intelligent sensing device of claim 4, wherein the second wireless device comprises a network manager configured to provide the wireless network, and to manage the intelligent sensing device wirelessly within a range. 6. An intelligent sensing device of claim 4, wherein the second wireless device comprises a second intelligent sensing device included in the wireless network. 7. An intelligent sensing device of claim 1, wherein the indicator comprises a light-emitting diode (LED) configured to emit light of one or more colors responsive to different temperature ranges detected by the temperature sensor. 8. An intelligent sensing device of claim 1, wherein the motion sensor is further configured to facially recognize the one or more objects, and to continually track the recognized one or more objects. 9. 
A control system for use with a sensing device in a wireless network, the control system comprising: a control device configured to perform a predefined action; an intelligent sensing device comprising: a temperature sensor configured to detect a temperature proximate the intelligent sensing device; a motion sensor configured to detect a presence of one or more objects; a controller configured to receive the temperature and the presence of the one or more objects, to determine if the received temperature exceeds a temperature threshold, and to track a number of the one or more objects; and an indicator configured to generate an indication responsive to one of 1) the received temperature exceeding the temperature threshold and 2) the number not matching a number threshold; a transceiver configured to communicate the indication; and a network manager, being remote from the intelligent sensing device and the control device, wirelessly coupled to the intelligent sensing device and the control device through the wireless network, and configured to receive the indication from the intelligent sensing device, and, in response to having received the indication, to perform the predefined action. 10. A control system of claim 9, and wherein the network manager is further configured to determine if a predefined condition is met, and, in response to the predefined condition having been met, and the indication having been received, to carry out the predefined action. 11. A control system of claim 10, and wherein the predefined condition comprises at least one of a time of day, a day of a week, and a lighting condition within a vicinity of the intelligent sensing device. 12. A control system of claim 9, and wherein the control device comprises at least one of a roller blind, a temperature control, a burglary alarm, and a second intelligent sensing device different and remote from the network manager. 13. 
A control system of claim 9, and wherein the intelligent sensing device further comprises an optical sensor configured to measure light intensity near the intelligent sensing device. 14. A control system of claim 9, and wherein the intelligent sensing device further comprises an accelerometer configured to detect a vibration of the intelligent sensing device, and to detect a tampering of the intelligent sensing device. 15. A control system of claim 9, and wherein the intelligent sensing device further comprises a wireless transceiver configured to include the intelligent sensing device in the wireless network, and to communicate with the control device in the wireless network after the intelligent sensing device has been included in the wireless network. 16. A control system of claim 9, and wherein the indicator comprises a light-emitting diode (LED) configured to emit light of one or more colors responsive to different temperature ranges detected by the temperature sensor. 17. A control system of claim 9, and wherein the motion sensor is further configured to facially recognize the one or more objects, and to continually track the recognized one or more objects. 18. A method of controlling a network device operable to perform a function in a network via a) an intelligent sensing device, and being remote from the network device, and b) a network manager operable to communicate with the network device and the intelligent sensing device, the method comprising: detecting at the intelligent sensing device a temperature proximate the intelligent sensing device; detecting at the intelligent sensing device a presence of one or more objects; determining if the temperature exceeds a temperature threshold; tracking a number of the one or more objects; and generating an indication responsive to one of 1) the temperature exceeding the temperature threshold and 2) the number not matching a number threshold. 19. 
A method of claim 18, and further comprising determining if a predefined condition is met; and, in response to the predefined condition having been met, and the indication having been received, performing the predefined function. 20. A method of claim 18, and wherein the predefined condition comprises at least one of a time of day, a day of a week, and a lighting condition within a vicinity of the intelligent sensing device.
2,600