Dataset schema (per-column dtype and range; string ranges are character lengths):

  Column             Dtype    Min     Max
  Unnamed: 0         int64    0       350k
  level_0            int64    0       351k
  ApplicationNumber  int64    9.75M   96.1M
  ArtUnit            int64    1.6k    3.99k
  Abstract           string   1       8.37k
  Claims             string   3       292k
  abstract-claims    string   68      293k
  TechCenter         int64    1.6k    3.9k
Row 9,900 (level_0: 9,900)
ApplicationNumber: 15,425,748
ArtUnit: 2,667

Abstract:
In accordance with the teachings described herein, systems and methods are provided for generating a seed plan for use in radiation therapy. The system includes an image database, the image database comprising image slices and a seed template database comprising seed templates. A contour engine is configured to generate target contour data to identify one or more objects within each image slice. A reslicer engine is configured to rotate the contoured image about an angle of rotation to produce a resliced contoured image, such that the resliced contoured image is resampled at an angle perpendicular to the angle of rotation and intersecting an isocenter. The system also includes a seed grid engine configured to generate a seed grid perpendicular to the angle of rotation.
Claims:
1. A processor-implemented method for generating a seed grid for use in radiation therapy, comprising: receiving one or more image slices, the image slices comprising one or more cross sectional medical images; contouring each image slice to generate target contour data to identify one or more objects within each image slice; defining an isocenter and an angle of rotation of the image slices; rotating the contoured image slices about the angle of rotation to produce a resliced contoured image, the resliced contoured image being the image slice rotated at an angle perpendicular to the angle of rotation; and generating a seed grid perpendicular to the angle of rotation, the seed grid comprising indicators for a seed plan, the indicators for the seed plan comprising locations for insertion points for radiation containing seeds for use in radiation therapy. 2. The method of claim 1, further comprising: adjusting the seed grid location; and defining a new isocenter and new angle of rotation based on the adjusted seed grid and repeating the rotating and generating steps to produce a resliced contoured image with the seed grid perpendicular to the new angle of rotation and centered at the new isocenter. 3. The method of claim 1, further comprising: inserting seeds at one or more indicators on the seed grid to generate a seed plan; determining whether the seed plan is optimized by, receiving seed plan migration data; comparing the seed plan migration data to the seed plan with predetermined indicators to generate an optimized seed plan. 4. The method of claim 1, wherein the isocenter defines a center point of a target volume. 5. The method of claim 1, wherein the angle of rotation is a needle path. 6. The method of claim 5, wherein the needle path is configured for a fiducial needle. 7. The method of claim 1, wherein the locations of the insertion points represent insertion points for seed-carrying needles. 8. 
The method of claim 7, wherein the seed-carrying needles comprise one or more seeds distributed along the length of the needle shaft. 9. The method of claim 8, wherein the one or more seeds are distributed along the length of the needle shaft so as to provide a deposit of a seed at varying depths of a target volume. 10. The method of claim 1, wherein the angle of rotation is in plane with beams for use in external radiation therapy. 11. A system for generating a seed grid for use in radiation therapy, the system comprising: an image database, the image database comprising image slices, the image slices comprising one or more cross sectional medical images; a seed template database, the seed template database comprising one or more seed templates, the seed template comprising indicators for a seed plan, the indicators for the seed plan comprising locations for radiation containing seeds for the radiation therapy; a contour engine, the contour engine configured to generate target contour data to identify one or more objects within each image slice; a reslicer engine configured to rotate the contoured image about an angle of rotation to produce a resliced contoured image, the resliced contoured image being the image slice rotated at an angle perpendicular to the angle of rotation and intersecting an isocenter; and a seed grid engine, the seed grid engine configured to generate a seed grid perpendicular to the angle of rotation, the seed grid corresponding to a seed template from the seed template database, wherein the contour engine, reslicer engine, and seed grid engine comprise software instructions stored in one or more memory devices and executable by one or more processors. 12. 
The system of claim 11, wherein the seed grid engine is further configured to: adjust the seed grid location; and the reslicer engine is further configured to define a new isocenter and new angle of rotation based on the adjusted seed grid to produce a resliced contoured image with the adjusted seed grid perpendicular to the new angle of rotation and centered at the new isocenter. 13. The system of claim 11, further comprising a seed plan engine, the seed plan engine configured to insert seeds at one or more indicators on the seed grid to generate a seed plan, wherein the seed plan engine comprises software instructions stored in one or more memory devices and executable by one or more processors. 14. The system of claim 13, further comprising a plan optimizer engine, the plan optimizer engine configured to determine whether the seed plan is optimized by receiving seed plan migration data and comparing the seed plan migration data to the seed plan with predetermined indicators to generate an optimized seed plan, wherein the plan optimizer engine comprises software instructions stored in one or more memory devices and executable by one or more processors. 15. The system of claim 11, wherein the isocenter defines a center point of a target mass. 16. The system of claim 11, wherein the angle of rotation is a needle path. 17. The system of claim 16, wherein the needle path is configured for a fiducial needle. 18. The system of claim 11, wherein the locations for radiation containing seeds represent insertion points for seed-carrying needles. 19. The system of claim 18, wherein the seed-carrying needles comprise one or more seeds distributed along the length of the needle shaft. 20. The system of claim 19, wherein the one or more seeds are distributed along the length of the needle shaft so as to provide a deposit of a seed at varying depths of a target volume.
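The geometry behind claim 1 (rotating contoured slices about an isocenter, then laying a regular seed grid perpendicular to the axis of rotation) can be sketched in a few lines. This is an illustrative 2-D reduction, not the patented implementation; the function names, grid spacing, and grid size are invented for the sketch.

```python
import math

def rotate_about_isocenter(points, isocenter, angle_rad):
    """Rotate 2-D contour points about an isocenter, as in claim 1's
    'rotating the contoured image slices about the angle of rotation'."""
    cx, cy = isocenter
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    rotated = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        rotated.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return rotated

def make_seed_grid(isocenter, spacing, n):
    """Regular n-by-n grid of candidate insertion points centered on the
    isocenter, lying in the plane perpendicular to the needle axis."""
    cx, cy = isocenter
    half = (n - 1) / 2.0
    return [(cx + (i - half) * spacing, cy + (j - half) * spacing)
            for i in range(n) for j in range(n)]

# A 3x3 candidate grid, and one contour point rotated a quarter turn.
grid = make_seed_grid((0.0, 0.0), spacing=5.0, n=3)
rotated = rotate_about_isocenter([(1.0, 0.0)], (0.0, 0.0), math.pi / 2)
```

Claim 2's "adjust and repeat" step would then amount to recomputing the isocenter and angle and calling both functions again.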
abstract-claims: (verbatim concatenation of the Abstract and Claims above; omitted)
TechCenter: 2,600
Row 9,901 (level_0: 9,901)
ApplicationNumber: 14,669,387
ArtUnit: 2,693

Abstract:
One embodiment provides a method including: receiving, at a wearable device, non-image data from at least one sensor operatively coupled to the wearable device, wherein the non-image data is based upon a gesture performed by a user; identifying, using a processor, the gesture performed by a user using the non-image data; and performing an action based upon the gesture identified. Other aspects are described and claimed.
Claims:
1. A method, comprising: receiving, at a wearable device, non-image data from at least one sensor operatively coupled to the wearable device, wherein the non-image data is based upon a gesture performed by a user; identifying, using a processor, the gesture performed by a user using the non-image data; and performing an action based upon the gesture identified. 2. The method of claim 1, wherein the non-image data comprises at least one of: electromyography data, pressure data, and inertial data. 3. The method of claim 1, wherein the identifying comprises associating the non-image data with a gesture. 4. The method of claim 1, wherein the non-image data comprises an electromyography data stream, a pressure sensor data stream, and an inertial data stream, and wherein the identifying comprises using each of the data streams to extract at least one feature of the gesture. 5. The method of claim 4, further comprising aggregating the data streams into a single nonlinear model. 6. The method of claim 5, wherein the aggregating comprises using an unscented Kalman filter. 7. The method of claim 1, wherein the non-image data comprises an electromyography data stream, a pressure sensor data stream, and an inertial data stream and wherein the identifying comprises combining the data streams. 8. The method of claim 7, wherein the identifying comprises classifying the combined data streams using at least one support vector machine. 9. The method of claim 1, wherein the performing an action comprises controlling an alternate device using the gesture identified. 10. The method of claim 1, further comprising associating the gesture with an action. 11. 
A wearable device, comprising: a wearable housing; a display screen; at least one sensor; a processor operatively coupled to the display screen and the at least one sensor and housed by the wearable housing; and a memory that stores instructions executable by the processor to: receive non-image data from the at least one sensor, wherein the non-image data is based upon a gesture performed by a user; identify the gesture performed by a user using the non-image data; and perform an action based upon the gesture identified. 12. The wearable device of claim 11, wherein the non-image data comprises at least one of: electromyography data, pressure data, and inertial data. 13. The wearable device of claim 11, wherein to identify comprises associating the non-image data with a gesture. 14. The wearable device of claim 11, wherein the non-image data comprises an electromyography data stream, a pressure sensor data stream, and an inertial data stream, and wherein to identify comprises using each of the data streams to extract at least one feature of the gesture. 15. The wearable device of claim 14, wherein the instructions are further executable by the processor to aggregate the data streams into a single nonlinear model. 16. The wearable device of claim 15, wherein to aggregate comprises using an unscented Kalman filter. 17. The wearable device of claim 11, wherein the non-image data comprises an electromyography data stream, a pressure sensor data stream, and an inertial data stream and wherein to identify comprises combining the data streams. 18. The wearable device of claim 17, wherein to identify comprises classifying the combined data streams using at least one support vector machine. 19. The wearable device of claim 11, wherein to perform an action comprises controlling an alternate device using the gesture identified. 20. 
A product, comprising: a storage device that stores code executable by a processor, the code comprising: code that receives, at a wearable device, non-image data from at least one sensor operatively coupled to the wearable device, wherein the non-image data is based upon a gesture performed by a user; code that identifies the gesture performed by a user using the non-image data; and code that performs an action based upon the gesture identified.
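Claims 4, 7, and 8 describe extracting at least one feature from each of the EMG, pressure, and inertial streams, combining the streams, and classifying the result with a support vector machine. The pipeline shape can be sketched as below; a nearest-centroid rule stands in for the SVM, and the feature choices, gesture labels, and centroid values are all invented for illustration.

```python
import math

def stream_features(samples):
    """Mean and RMS of one sensor stream -- a stand-in for the per-stream
    feature extraction of claim 4."""
    n = len(samples)
    mean = sum(samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return [mean, rms]

def combine_streams(emg, pressure, inertial):
    """Concatenate features from the three non-image data streams (claim 7)."""
    return (stream_features(emg) + stream_features(pressure)
            + stream_features(inertial))

def classify(features, centroids):
    """Nearest-centroid classifier standing in for the SVM of claim 8."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Invented gesture centroids in the 6-D combined feature space.
centroids = {
    "fist":  [1.0, 1.0, 0.8, 0.8, 0.1, 0.2],
    "swipe": [0.2, 0.3, 0.1, 0.2, 0.9, 1.0],
}
fv = combine_streams([1.0, 1.0], [0.8, 0.8], [0.1, 0.1])
```

Claim 9's final step would then map the returned label to a control action on the wearable or an alternate device.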
abstract-claims: (verbatim concatenation of the Abstract and Claims above; omitted)
TechCenter: 2,600
Row 9,902 (level_0: 9,902)
ApplicationNumber: 12,454,561
ArtUnit: 2,699

Abstract:
A method including scanning a document by a scanner to form a scanned document; determining a form of the document; and redacting a cell of the scanned document based upon the determined form of the document to thereby form a scanned redacted document.
Claims:
1. A method comprising: scanning a document by a scanner to form a scanned document; determining a form of the document; and redacting a cell of the scanned document based upon the determined form of the document to thereby form a scanned redacted document. 2. A method as in claim 1 further comprising printing a copy of the scanned redacted document with the cell redacted on the copy. 3. A method as in claim 2 wherein the copy is printed by a same machine having the scanner. 4. A method as in claim 1 wherein the scanned redacted document is an electronic file which is not initially printed as a physical paper document. 5. A method as in claim 1 further comprising electronically storing the scanned redacted document in a memory. 6. A method as in claim 5 further comprising electronically storing the scanned document in a memory. 7. A method as in claim 1 further comprising electronically storing the scanned document in a memory. 8. A method comprising: opening an electronic scanned version of a document by a computer; and when the electronic scanned version of the document is opened, automatically redacting a cell of the opened electronic scanned version of the document. 9. A method as in claim 8 wherein redacting the cell of the opened electronic scanned version of the document occurs in a visual editor software program on a client computer. 10. A method as in claim 8 wherein the electronic scanned version of the document is obtained by a client computer from a server through a virtual redaction portal. 11. A method as in claim 8 further comprising: scanning a document by a scanner to form the electronic scanned version of the document; and determining a form of the document; wherein redacting the cell of the opened electronic scanned version of the document is based upon the determined form of the document. 12. A method as in claim 8 further comprising printing a copy of the opened electronic scanned version of the document with the cell redacted on the copy. 13. 
A method as in claim 12 wherein the copy is printed by a same machine having the scanner. 14. A method as in claim 11 further comprising electronically storing the electronic scanned version of the document in a memory. 15. A method as in claim 14 further comprising electronically storing the opened electronic scanned version of the document in a memory as an electronic un-redacted version of the document at about a same time as the electronic scanned version of the document is stored. 16. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: scanning a document by a scanner to form a scanned document; determining a form of the document; and redacting a cell of the scanned document based upon the determined form of the document to thereby form a scanned redacted document. 17. A method comprising: creating web automated collection parameters and prioritization parameters; collecting data from information locations; assembling collected data as results with prioritization; and making prioritized results available via fee for service online access as assembled virtual portal access on a server. 18. A method as in claim 17 wherein collecting data from information locations comprises redacting information from a scanned document. 19. A method as in claim 17 wherein collecting data from information locations comprises redacting information from an electronic form of a document. 20. A method as in claim 17 wherein making prioritized results available comprises redacting a portion of the results based upon a fee agreement.
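Claims 1 and 11 turn on redacting a cell of a scanned document based on its determined form. A toy sketch of that two-step flow follows; the form names, field keys, and title-matching heuristic are all invented (a real system would identify the form from the scanned image, e.g. via OCR or template matching).

```python
# Hypothetical rules: each known form type maps to the cells that must be
# redacted once the form is determined (claims 1 and 11).
FORM_RULES = {
    "W-2":    ["ssn", "employer_ein"],
    "intake": ["dob"],
}

def determine_form(scanned):
    """Crude form detection from a title field, standing in for the
    'determining a form of the document' step."""
    title = scanned.get("title", "")
    for form in FORM_RULES:
        if form.lower() in title.lower():
            return form
    return None

def redact(scanned):
    """Black out the cells dictated by the detected form; everything else
    in the scanned document is left untouched."""
    form = determine_form(scanned)
    out = dict(scanned)
    for cell in FORM_RULES.get(form, []):
        if cell in out:
            out[cell] = "[REDACTED]"
    return out

doc = {"title": "Employee W-2 2023", "ssn": "123-45-6789", "wages": "50000"}
```

Claim 8's variant would call `redact` automatically whenever the stored electronic scanned version is opened.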
abstract-claims: (verbatim concatenation of the Abstract and Claims above; omitted)
TechCenter: 2,600
Row 9,903 (level_0: 9,903)
ApplicationNumber: 15,521,994
ArtUnit: 2,648

Abstract:
A technique includes sensing a wireless charging station that includes a wireless charging transmitter and determining a status for the machine based at least in part on the sensing of the wireless charging station. The technique also includes assisting with a process to wirelessly transfer power to the machine, where assisting includes causing the machine to provide guidance to the user based at least in part on the determined status.
Claims:
1. A method comprising: sensing a wireless charging station comprising a wireless charging transmitter and determining a status for the machine based at least in part on the sensing of the wireless charging station; and assisting a process to wirelessly transfer power to the machine, wherein assisting comprises causing the machine to provide guidance to the user based at least in part on the determined status. 2. The method of claim 1, wherein: sensing the wireless charging station comprises detecting the wireless charging transmitter of the wireless charging station; and assisting the user of the machine with a process to wirelessly transfer power to the machine comprises at least one of the following: using the machine to provide an indication of a compatibility of a wireless charging standard used by the machine relative to a wireless charging standard used by the wireless charging station; using the machine to provide an indication of a compatibility of a wireless charging transmitter of the wireless charging station to a wireless charging receiver of the machine; and using the machine to provide an indication of a relative position of a wireless receiver of the machine relative to a wireless charging transmitter of the wireless charging station. 3. The method of claim 1, wherein: sensing the wireless charging station comprises detecting a physical orientation of a wireless charging transmitter of the wireless charging station relative to the machine. 4. 
The method of claim 1, wherein sensing the wireless charging station comprises at least one of the following: using a magnetometer of the machine to detect a magnetic field generated by a wireless charging transmitter of the wireless charging station; communicating with the wireless charging station by communicating data using a wireless charging receiver of the machine; using a coil of the wireless charging receiver of the machine to detect a magnetic field emanating from the wireless charging transmitter of the wireless charging station; and using a plurality of wireless charging coils of the machine to attempt to communicate with the wireless charging station using a plurality of protocols associated with a plurality of associated wireless charging standards. 5. The method of claim 1, wherein sensing the wireless charging station comprises: monitoring an output of a magnetometer for an indication that a wireless charging transmitter is in proximity to the portable electronic device; and in response to the output indicating that the wireless charging transmitter is in proximity to the portable electronic device, locking an indication of magnetic north from the magnetometer. 6. An article comprising a non-transitory computer readable storage medium storing instructions that when executed by a processor-based system cause the processor-based system to: determine a wireless power transfer status for the portable electronic device; display an indication of the determined status to a user of the portable electronic device; and selectively initiate wireless charging of a battery of the portable electronic device based at least in part on user feedback to the indication of the determined status. 7. 
The article of claim 6, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display at least one of the following: a compatibility status of a wireless charging receiver of the portable electronic device to a wireless charging transmitter detected to be in proximity to the wireless charging receiver; an amount of power to be received by the portable electronic device in response to the portable electronic device being charged by a wireless charging transmitter detected to be in proximity to the portable electronic device; and a status representing a degree to which the portable electronic device may be used while being wirelessly charged. 8. The article of claim 6, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display at least one of the following: a status representing that a wireless charging station cannot deliver sufficient power to fully power the portable electronic device while the battery is being charged; a status representing that a wireless charging station can deliver sufficient power to fully power the portable electronic device but not simultaneously permit charging of the battery without limiting power available for the charging of the battery; and a status representing that a wireless charging station can deliver sufficient power to fully power the portable electronic device and simultaneously charge the battery. 9. The article of claim 8, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display a status representing a time for charging a battery of the portable electronic device for a plurality of different use scenarios for the portable electronic device while the battery is being wirelessly charged. 10. 
The article of claim 8, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display a status representing at least one of the following: a time for the battery to be discharged given a current charge state of the battery; and a current run time for the portable electronic device given the current charge state. 11. The article of claim 6, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to: wirelessly communicate with a wireless charging controller to determine a charging status for an electronic device other than the portable electronic device. 12. The article of claim 11, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to: wirelessly communicate with a wireless charging controller to regulate charging of at least one of the portable device and the other electronic device based at least in part on a behavior profile. 13. An apparatus comprising: a central processing unit (CPU); a battery; a power supply system to provide power to the central processing unit; a housing containing the CPU, the battery and the power supply system; and a controller to: wirelessly communicate with a wireless charging station on which a portable electronic device is disposed, the portable electronic device being physically separated from the apparatus; and based on the wireless communication, determine and indicate a wireless power transfer status of the portable electronic device to a user of the apparatus. 14. The apparatus of claim 13, wherein the controller further wirelessly communicates with the wireless charging station to determine a wireless power transfer status of at least one other portable electronic device, and determines and provides an indication of the wireless power transfer status for each of the other portable electronic devices to the user. 15. 
The apparatus of claim 13, wherein the controller, based on the wireless communication with the wireless charging station, determines a level of power being wirelessly communicated to the portable electronic device and provides an indication of the determined level to the user.
A technique includes sensing a wireless charging station that includes a wireless charging transmitter and determining a status for the machine based at least in part on the sensing of the wireless charging station. The technique also includes assisting with a process to wirelessly transfer power to the machine, where assisting includes causing the machine to provide guidance to the user based at least in part on the determined status.1. A method comprising: sensing a wireless charging station comprising a wireless charging transmitter and determining a status for the machine based at least in part on the sensing of the wireless charging station; and assisting a process to wirelessly transfer power to the machine, wherein assisting comprises causing the machine to provide guidance to the user based at least in part on the determined status. 2. The method of claim 1, wherein: sensing the wireless charging station comprises detecting the wireless charging transmitter of the wireless charging station; and assisting the user of the machine with a process to wirelessly transfer power to the machine comprises at least one of the following: using the machine to provide an indication of a compatibility of a wireless charging standard used by the machine relative to a wireless charging standard used by the wireless charging station; using the machine to provide an indication of a compatibility of a wireless charging transmitter of the wireless charging station to a wireless charging receiver of the machine; and using the machine to provide an indication of a relative position of a wireless receiver of the machine relative to a wireless charging transmitter of the wireless charging station. 3. The method of claim 1, wherein: sensing the wireless charging station comprises detecting a physical orientation of a wireless charging transmitter of the wireless charging station relative to the machine. 4. 
The method of claim 1, wherein sensing the wireless charging station comprises at least one of the following: using a magnetometer of the machine to detect a magnetic field generated by a wireless charging transmitter of the wireless charging station; communicating with the wireless charging station by communicating data using a wireless charging receiver of the machine; using a coil of the wireless charging receiver of the machine to detect a magnetic field emanating from the wireless charging transmitter of the wireless charging station; and using a plurality of wireless charging coils of the machine to attempt to communicate with the wireless charging station using a plurality of protocols associated with a plurality of associated wireless charging standards. 5. The method of claim 1, wherein sensing the wireless charging station comprises: monitoring an output of a magnetometer for an indication that a wireless charging transmitter is in proximity to the portable electronic device; and in response to the output indicating that the wireless charging transmitter is in proximity to the portable electronic device, locking an indication of magnetic north from the magnetometer. 6. An article comprising a non-transitory computer readable storage medium storing instructions that when executed by a processor-based system cause the processor-based system to: determine a wireless power transfer status for the portable electronic device; display an indication of the determined status to a user of the portable electronic device; and selectively initiate wireless charging of a battery of the portable electronic device based at least in part on user feedback to the indication of the determined status. 7. 
The article of claim 6, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display at least one of the following: a compatibility status of a wireless charging receiver of the portable electronic device to a wireless charging transmitter detected to be in proximity to the wireless charging receiver; an amount of power to be received by the portable electronic device in response to the portable electronic device being charged by a wireless charging transmitter detected to be in proximity to the portable electronic device; and a status representing a degree to which the portable electronic device may be used while being wirelessly charged. 8. The article of claim 6, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display at least one of the following: a status representing that a wireless charging station cannot deliver sufficient power to fully power the portable electronic device while the battery is being charged; a status representing that a wireless charging station can deliver sufficient power to fully power the portable electronic device but not simultaneously permit charging of the battery without limiting power available for the charging of the battery; and a status representing that a wireless charging station can deliver sufficient power to fully power the portable electronic device and simultaneously charge the battery. 9. The article of claim 8, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display a status representing a time for charging a battery of the portable electronic device for a plurality of different use scenarios for the portable electronic device while the battery is being wirelessly charged. 10. 
The article of claim 8, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to determine and display a status representing at least one of the following: a time for the battery to be discharged given a current charge state of the battery; and a current run time for the portable electronic device given the current charge state. 11. The article of claim 6, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to: wirelessly communicate with a wireless charging controller to determine a charging status for an electronic device other than the portable electronic device. 12. The article of claim 11, the storage medium storing instructions that when executed by the processor-based system cause the processor-based system to: wirelessly communicate with a wireless charging controller to regulate charging of at least one of the portable device and the other electronic device based at least in part on a behavior profile. 13. An apparatus comprising: a central processing unit (CPU); a battery; a power supply system to provide power to the central processing unit; a housing containing the CPU, the battery and the power supply system; and a controller to: wirelessly communicate with a wireless charging station on which a portable electronic device is disposed, the portable electronic device being physically separated from the apparatus; and based on the wireless communication, determine and indicate a wireless power transfer status of the portable electronic device to a user of the apparatus. 14. The apparatus of claim 13, wherein the controller further wirelessly communicates with the wireless charging station to determine a wireless power transfer status of at least one other portable electronic device, and determines and provides an indication of the wireless power transfer status for each of the other portable electronic devices to the user. 15. 
The apparatus of claim 13, wherein the controller, based on the wireless communication with the wireless charging station, determines a level of power being wirelessly communicated to the portable electronic device and provides an indication of the determined level to the user.
2,600
9,904
9,904
11,544,498
2,633
A method for operating a wireless communication system in a cell includes allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers, allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers, signaling a power offset between the plurality of pilot resources in the plurality of high power sub-carriers and the plurality of pilot resources in the plurality of low power sub-carriers, and transmitting the first and the second plurality of pilot resources over the first and the second frequency sub-bands.
1. A method comprising: allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers; allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; signaling a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers; and transmitting said first and said second plurality of pilot resources over said first and said second frequency sub-bands. 2. The method of claim 1 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 3. The method of claim 1 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is the same as a second density of said second plurality of pilot resources in said second frequency sub-band. 4. The method of claim 2 wherein said first density is greater than said second density. 5. The method of claim 1 wherein a pilot density of said first and said second plurality of pilot resources across said first and said second frequency sub-bands is between approximately ½ and 1/10. 6. The method of claim 1 wherein a density of said first plurality of pilot resources and said second plurality of pilot resources is constant in at least one of the time and frequency domain. 7. The method of claim 1 wherein each of said first and second plurality of pilot resources comprises a common pilot sequence. 8. 
The method of claim 1 further comprising provisioning said first and said second plurality of pilot resources into said first and said second frequency sub-bands utilizing at least one of a Time Division Multiplex (TDM), a Code Division Multiplex (CDM), a Frequency Division Multiplex (FDM), and a staggered method. 9. The method of claim 1 wherein said first plurality of pilot resources in said first frequency sub-band comprises a first pilot-data power offset and said second plurality of pilot resources in said second frequency sub-band comprises a second pilot-data power offset. 10. The method of claim 9 wherein said first pilot-data offset is different from said second pilot-data offset. 11. The method of claim 1 wherein a pilot power of any of said first and second frequency bands differs between a first symbol and a second symbol forming a pilot resource. 12. The method of claim 1 wherein at least one pilot resource is allocated on at least every eighth sub-carrier. 13. The method of claim 1 wherein a sum of said high power sub-carriers and said low power sub-carriers is between approximately thirty-six and eighty. 14. The method of claim 13 wherein said sum is equal to seventy-two. 15. The method of claim 1 wherein a cell into which the first and second plurality of pilot resources are transmitted is one of a plurality of cells in a soft-reuse network. 16. 
A program of machine-readable instructions, tangibly embodied on an information bearing medium and executable by a digital data processor, to perform actions comprising: allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers; allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; signaling a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers; and transmitting said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 17. The program of claim 16 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 18. The program of claim 17 wherein said first density is greater than said second density. 19. The program of claim 16 wherein a density of said first plurality of pilot resources and said second plurality of pilot resources is constant in at least one of the time or frequency domain. 20. The program of claim 16 wherein said cell is one of a plurality of cells in a soft-reuse network. 21. 
A network element comprising: a processor configured to allocate a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers and to allocate a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; and a transmitter coupled to said processor and configured to signal a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers and to transmit said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 22. The network element of claim 21 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 23. The network element of claim 22 wherein said first density is greater than said second density. 24. The network element of claim 21 wherein a pilot power of any of said first and second frequency bands differs between a first symbol and a second symbol forming a pilot resource. 25. The network element of claim 21 wherein said cell is one of a plurality of cells in a soft-reuse network. 26. A user equipment comprising: a receiver configured to receive a power offset between a plurality of pilot resources in a plurality of high power sub-carriers and a plurality of pilot resources in a plurality of low power sub-carriers; and a processor coupled to the receiver configured to use said power offset to receive a first plurality of pilot resources allocated to a first frequency sub-band comprising said plurality of high power sub-carriers and to receive a second plurality of pilot resources allocated to a second frequency sub-band comprising said plurality of low power sub-carriers. 27. 
The user equipment of claim 26 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 28. The user equipment of claim 27 wherein said first density is greater than said second density. 29. The user equipment of claim 26 wherein a pilot density of said first and said second plurality of pilot resources across said first and said second frequency sub-bands is between approximately ½ and 1/10. 30. The user equipment of claim 26 wherein a density of said first plurality of pilot resources and said second plurality of pilot resources is constant in the time domain. 31. The user equipment of claim 26 further configured to detect a plurality of data symbols transmitted on at least a part of the high-power and the low-power subcarriers, and further configured to use a first pilot-data offset to detect the plurality of data symbols on the high-power subcarriers, and to use a second pilot-data offset to detect the plurality of data symbols on the low-power subcarriers. 32. The user equipment of claim 26 further configured to estimate said first and second pilot-data offsets. 33. The user equipment of claim 26 wherein the first and second pilot-data offsets are the same. 34. The user equipment of claim 26 wherein each of said first and second plurality of pilot resources comprise a common pilot sequence. 35. The user equipment of claim 26 wherein said user equipment operates in a soft-reuse network. 36. 
An integrated circuit comprising: first circuitry operable to allocate a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers and to allocate a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; second circuitry operable to transmit a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers; and third circuitry operable to transmit said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 37. An integrated circuit comprising: first circuitry operable to receive pilot resources from a first frequency sub-band comprising a plurality of high power sub-carriers and to receive a second plurality of pilot resources from a second frequency sub-band comprising a plurality of low power sub-carriers; and second circuitry operable to receive a power offset between said plurality of pilot resources in the high power sub-carriers and said plurality of pilot resources in the low power sub-carriers. 38. A network element comprising: means for allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers; means for allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; means for signaling a power offset between said plurality of pilot resources in the high power sub-carriers and said plurality of pilot resources in the low power sub-carriers; and means for transmitting said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 39. 
The network element of claim 38 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band and wherein said first density is greater than said second density. 40. The network element of claim 38, embodied at least partially in an integrated circuit.
A method for operating a wireless communication system in a cell includes allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers, allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers, signaling a power offset between the plurality of pilot resources in the plurality of high power sub-carriers and the plurality of pilot resources in the plurality of low power sub-carriers, and transmitting the first and the second plurality of pilot resources over the first and the second frequency sub-bands.1. A method comprising: allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers; allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; signaling a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers; and transmitting said first and said second plurality of pilot resources over said first and said second frequency sub-bands. 2. The method of claim 1 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 3. The method of claim 1 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is the same as a second density of said second plurality of pilot resources in said second frequency sub-band. 4. The method of claim 2 wherein said first density is greater than said second density. 5. 
The method of claim 1 wherein a pilot density of said first and said second plurality of pilot resources across said first and said second frequency sub-bands is between approximately ½ and 1/10. 6. The method of claim 1 wherein a density of said first plurality of pilot resources and said second plurality of pilot resources is constant in at least one of the time and frequency domain. 7. The method of claim 1 wherein each of said first and second plurality of pilot resources comprises a common pilot sequence. 8. The method of claim 1 further comprising provisioning said first and said second plurality of pilot resources into said first and said second frequency sub-bands utilizing at least one of a Time Division Multiplex (TDM), a Code Division Multiplex (CDM), a Frequency Division Multiplex (FDM), and a staggered method. 9. The method of claim 1 wherein said first plurality of pilot resources in said first frequency sub-band comprises a first pilot-data power offset and said second plurality of pilot resources in said second frequency sub-band comprises a second pilot-data power offset. 10. The method of claim 9 wherein said first pilot-data offset is different from said second pilot-data offset. 11. The method of claim 1 wherein a pilot power of any of said first and second frequency bands differs between a first symbol and a second symbol forming a pilot resource. 12. The method of claim 1 wherein at least one pilot resource is allocated on at least every eighth sub-carrier. 13. The method of claim 1 wherein a sum of said high power sub-carriers and said low power sub-carriers is between approximately thirty-six and eighty. 14. The method of claim 13 wherein said sum is equal to seventy-two. 15. The method of claim 1 wherein a cell into which the first and second plurality of pilot resources are transmitted is one of a plurality of cells in a soft-reuse network. 16. 
A program of machine-readable instructions, tangibly embodied on an information bearing medium and executable by a digital data processor, to perform actions comprising: allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers; allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; signaling a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers; and transmitting said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 17. The program of claim 16 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 18. The program of claim 17 wherein said first density is greater than said second density. 19. The program of claim 16 wherein a density of said first plurality of pilot resources and said second plurality of pilot resources is constant in at least one of the time or frequency domain. 20. The program of claim 16 wherein said cell is one of a plurality of cells in a soft-reuse network. 21. 
A network element comprising: a processor configured to allocate a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers and to allocate a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; and a transmitter coupled to said processor and configured to signal a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers and to transmit said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 22. The network element of claim 21 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 23. The network element of claim 22 wherein said first density is greater than said second density. 24. The network element of claim 21 wherein a pilot power of any of said first and second frequency bands differs between a first symbol and a second symbol forming a pilot resource. 25. The network element of claim 21 wherein said cell is one of a plurality of cells in a soft-reuse network. 26. A user equipment comprising: a receiver configured to receive a power offset between a plurality of pilot resources in a plurality of high power sub-carriers and a plurality of pilot resources in a plurality of low power sub-carriers; and a processor coupled to the receiver configured to use said power offset to receive a first plurality of pilot resources allocated to a first frequency sub-band comprising said plurality of high power sub-carriers and to receive a second plurality of pilot resources allocated to a second frequency sub-band comprising said plurality of low power sub-carriers. 27. 
The user equipment of claim 26 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band. 28. The user equipment of claim 27 wherein said first density is greater than said second density. 29. The user equipment of claim 26 wherein a pilot density of said first and said second plurality of pilot resources across said first and said second frequency sub-bands is between approximately ½ and 1/10. 30. The user equipment of claim 26 wherein a density of said first plurality of pilot resources and said second plurality of pilot resources is constant in the time domain. 31. The user equipment of claim 26 further configured to detect a plurality of data symbols transmitted on at least a part of the high-power and the low-power subcarriers, and further configured to use a first pilot-data offset to detect the plurality of data symbols on the high-power subcarriers, and to use a second pilot-data offset to detect the plurality of data symbols on the low-power subcarriers. 32. The user equipment of claim 26 further configured to estimate said first and second pilot-data offsets. 33. The user equipment of claim 26 wherein the first and second pilot-data offsets are the same. 34. The user equipment of claim 26 wherein each of said first and second plurality of pilot resources comprise a common pilot sequence. 35. The user equipment of claim 26 wherein said user equipment operates in a soft-reuse network. 36. 
An integrated circuit comprising: first circuitry operable to allocate a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers and to allocate a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; second circuitry operable to transmit a power offset between said first plurality of pilot resources in the plurality of high power sub-carriers and said second plurality of pilot resources in the plurality of low power sub-carriers; and third circuitry operable to transmit said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 37. An integrated circuit comprising: first circuitry operable to receive pilot resources from a first frequency sub-band comprising a plurality of high power sub-carriers and to receive a second plurality of pilot resources from a second frequency sub-band comprising a plurality of low power sub-carriers; and second circuitry operable to receive a power offset between said plurality of pilot resources in the high power sub-carriers and said plurality of pilot resources in the low power sub-carriers. 38. A network element comprising: means for allocating a first plurality of pilot resources to a first frequency sub-band comprising a plurality of high power sub-carriers; means for allocating a second plurality of pilot resources to a second frequency sub-band comprising a plurality of low power sub-carriers; means for signaling a power offset between said plurality of pilot resources in the high power sub-carriers and said plurality of pilot resources in the low power sub-carriers; and means for transmitting said first and said second plurality of pilot resources over said first and said second frequency sub-bands in a cell. 39. 
The network element of claim 38 wherein a first density of said first plurality of pilot resources in said first frequency sub-band is different from a second density of said second plurality of pilot resources in said second frequency sub-band and wherein said first density is greater than said second density. 40. The network element of claim 38, embodied at least partially in an integrated circuit.
2,600
9,905
9,905
13,871,808
2,621
An electronic device ( 1 ) is provided with a display unit ( 20 ), a contact unit ( 30 ) to be touched by a contact object, and a control unit ( 10 ) that detects a contact pattern on the contact unit ( 30 ) and, based on the contact pattern, alters a number of display pages changed at a time on the display unit ( 20 ).
1. An electronic device comprising: a display unit; a contact unit to be contacted by a contact object; and a control unit configured to detect a contact pattern on the contact unit and, based on the contact pattern, alter a number of display page(s) to be changed at a time on the display unit. 2. The electronic device of claim 1, further comprising a pressure detection unit configured to detect pressure on the contact unit, wherein the control unit detects the contact pattern as the pressure detected by the pressure detection unit. 3. A control method for an electronic device provided with a display unit and a contact unit to be touched by a contact object, the control method comprising the steps of: detecting a contact pattern on the contact unit; and altering a number of display page(s) to be changed at a time on the display unit based on the detected contact pattern.
2,600
9,906
9,906
14,991,397
2,621
A bistable electro-optic display has a plurality of pixels, each of which is capable of displaying at least three gray levels. The display is driven by a method comprising: storing a look-up table containing data representing the impulses necessary to convert an initial gray level to a final gray level; storing data representing at least an initial state of each pixel of the display; receiving an input signal representing a desired final state of at least one pixel of the display; and generating an output signal representing the impulse necessary to convert the initial state of said one pixel to the desired final state thereof, as determined from said look-up table. The invention also provides a method for reducing the remnant voltage of an electro-optic display.
1. A method of driving a bistable electrophoretic display having at least one pixel with two extreme optical states, the electrophoretic display containing electrophoretically-mobile particles suspended in a liquid suspension medium, the method comprising: (a) driving the pixel from an initial gray level to one extreme optical state different from the initial gray level; and (b) immediately driving the pixel from the one extreme optical state to the opposed extreme optical state and immediately thereafter driving the pixel to a final gray level different from the one extreme optical state. 2. A method according to claim 1 wherein the display is a microcell display with the electrophoretically-mobile particles and the suspension medium retained within a plurality of cavities formed in a carrier medium. 3. A method according to claim 1 wherein the display is an encapsulated electrophoretic display comprising a plurality of capsules, each of which itself comprises an internal phase containing the electrophoretically-mobile particles and the suspension medium, and a capsule wall surrounding the internal phase. 4. A bistable electrophoretic display having at least one pixel with two extreme optical states, the electrophoretic display containing electrophoretically-mobile particles suspended in a liquid suspension medium, and a display controller for applying electric field to the pixel and thereby changing the optical state thereof from an initial gray level to a final gray level, the display controller being arranged to: (a) drive the pixel from the initial gray level to one extreme optical state different from the initial gray level; and (b) immediately drive the pixel from the one extreme optical state to the opposed extreme optical state and immediately thereafter drive the pixel to a final gray level different from the one extreme optical state. 5. 
A display according to claim 4 wherein the display is a microcell display with the electrophoretically-mobile particles and the suspension medium retained within a plurality of cavities formed in a carrier medium. 6. A display according to claim 4 wherein the display is an encapsulated electrophoretic display comprising a plurality of capsules, each of which itself comprises an internal phase containing the electrophoretically-mobile particles and the suspension medium, and a capsule wall surrounding the internal phase. 7. A method of driving a bistable electrophoretic display having at least one pixel with two extreme optical states, the electrophoretic display containing electrophoretically-mobile particles suspended in a liquid suspension medium, the method comprising: (a) driving the pixel from an initial gray level to one extreme optical state different from the initial gray level; and (b) driving the pixel from the one extreme optical state to the opposed extreme optical state and thereafter driving the pixel to a final gray level different from the one extreme optical state, wherein step (b) is effected by driving the pixel immediately from the one extreme optical state to the opposed extreme optical state, immediately thereafter driving the pixel back to the one extreme optical state and immediately thereafter driving the pixel to the final gray level. 8. 
A bistable electrophoretic display having at least one pixel with two extreme optical states, the electrophoretic display containing electrophoretically-mobile particles suspended in a liquid suspension medium, and a display controller for applying electric field to the pixel and thereby changing the optical state thereof from an initial gray level to a final gray level, the display controller being arranged to: (a) drive the pixel from the initial gray level to one extreme optical state different from the initial gray level; and (b) drive the pixel from the one extreme optical state to the opposed extreme optical state and thereafter drive the pixel to a final gray level different from the one extreme optical state, wherein the display controller is arranged to effect (b) by driving the pixel immediately from the one extreme optical state to the opposed extreme optical state, immediately thereafter driving the pixel back to the one extreme optical state and immediately thereafter driving the pixel to the final gray level.
2,600
9,907
9,907
12,357,632
2,694
An electronic apparatus and method of implementing a user interface according to a pressure intensity of a touch on the electronic apparatus, the method including detecting a position at which the touch is input, identifying the type of object displayed on the position, and detecting the pressure intensity. Accordingly, the user can manipulate electronic apparatuses with greater convenience.
1. An electronic apparatus, comprising: a display unit comprising a touch screen to display an object and to receive a user input; a sensing unit to sense whether the displayed object is approached or touched; and a control unit to control the display unit to display data related to the displayed object according to whether the displayed object is approached or touched, as sensed by the sensing unit. 2. The apparatus as claimed in claim 1, wherein the sensing unit comprises: a position detector to detect a position of an approach or a touch on the touch screen; and a pressure detector to detect a pressure intensity of the approach or the touch on the touch screen. 3. The apparatus as claimed in claim 2, wherein the control unit recognizes that the position is approached if position data corresponding to the position is received from the position detector and a pressure intensity that is lower than a first predetermined value is received from the pressure detector. 4. The apparatus as claimed in claim 2, wherein the control unit recognizes that the position is touched if position data corresponding to the position is received from the position detector and a pressure intensity that is higher than a first predetermined value is received from the pressure detector. 5. The apparatus as claimed in claim 4, wherein the control unit recognizes that the position is pressed if the position data corresponding to the position is received from the position detector and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector. 6. 
The apparatus as claimed in claim 2, wherein if position data of the position of the approach or the touch on the touch screen received from the position detector corresponds to a position of the displayed object, the control unit determines a type of the displayed object and controls the display unit to display the data related to the displayed object according to the pressure intensity detected by the pressure detector and the determined type of the displayed object. 7. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a menu, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display a sub menu of the menu. 8. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a menu, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit selects the menu. 9. The apparatus as claimed in claim 8, wherein if the control unit determines that the type of the displayed object is the menu, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit displays a direct menu item of the menu. 10. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is content, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display a summary screen of the content. 11. 
The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is content, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit plays back the content. 12. The apparatus as claimed in claim 11, wherein if the control unit determines that the type of the displayed object is a title of the content, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit displays a control menu of the content. 13. The apparatus as claimed in claim 12, wherein if the control unit determines that the content is being played back at the position, and the pressure intensity that is higher than the second predetermined value is received from the pressure detector, the control unit displays a control menu regarding the playback of the content. 14. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is an icon, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display detailed information regarding the icon and/or controls the display unit to enlarge the icon. 15. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is an icon, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit executes an operation regarding the icon. 16. 
The apparatus as claimed in claim 15, wherein if the control unit determines that the type of the displayed object is the icon, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit controls the display unit to display a control menu regarding the icon. 17. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a character, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to enlarge the character. 18. The apparatus as claimed in claim 17, wherein if the control unit determines that the type of the displayed object is the character, and the pressure intensity that is lower than the first predetermined value is received from the pressure detector, without interruption, for longer than a predetermined period of time, the control unit controls the display unit to display one or more predetermined terms related to the character. 19. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a character, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display the character on an input window. 20. The apparatus as claimed in claim 19, wherein if the control unit determines that the type of the displayed object is the character, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit controls the display unit to display a first predetermined term related to the character on the input window. 21. 
The apparatus as claimed in claim 20, wherein if the control unit determines that the type of the displayed object is the character, and the pressure intensity that is higher than the second predetermined value is received from the pressure detector, without interruption, for longer than a predetermined period of time, the control unit controls the display unit to remove the first predetermined term from the input window and to display a second predetermined term related to the character, different from the first predetermined term, on the input window. 22. The apparatus as claimed in claim 1, wherein the control unit controls the display unit to display the data related to the displayed object according to whether the displayed object is approached or touched, as sensed by the sensing unit, and according to a type of the displayed object. 23. The apparatus as claimed in claim 1, wherein the control unit controls the display unit to display the data related to the displayed object according to whether the displayed object is approached or touched, as sensed by the sensing unit, and according to a length of time that the displayed object is approached or touched without interruption. 24. A method of implementing a user interface for an electronic apparatus comprising a sensing unit to sense whether an object displayed on a display unit is approached or touched, the method comprising: determining whether the displayed object is approached or touched according to a result of a sensing output by the sensing unit; and controlling the display unit to display data related to the displayed object according to whether the displayed object is determined to be approached or touched. 25. The method as claimed in claim 24, further comprising: detecting a position of an approach or a touch on the display unit; and detecting a pressure intensity of the approach or the touch on the display unit. 26. 
The method as claimed in claim 25, wherein the determining of whether the displayed object is approached or touched comprises: determining that the displayed object is approached if position data of the detected position of the approach or the touch on the display unit corresponds to a position of the displayed object, and the detected pressure intensity is lower than a first predetermined value. 27. The method as claimed in claim 25, wherein the determining of whether the displayed object is approached or touched comprises: determining that the displayed object is touched if position data of the detected position of the approach or the touch on the display unit corresponds to a position of the displayed object, and the detected pressure intensity is higher than a first predetermined value. 28. The method as claimed in claim 27, wherein the determining of whether the displayed object is approached or touched further comprises: determining that the displayed object is pressed if the position data of the detected position of the approach or the touch on the display unit corresponds to the position of the displayed object, and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value. 29. The method as claimed in claim 25, wherein: the determining of whether the displayed object is approached or touched comprises, if position data of the detected position of the approach or the touch on the display unit corresponds to a position of the displayed object, determining a type of the displayed object; and the controlling of the display unit comprises controlling the display unit to display the data related to the object according to the detected pressure intensity and the determined type of the displayed object. 30. 
The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a menu and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to display a sub menu of the menu. 31. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a menu and the detected pressure intensity is higher than a first predetermined value, selecting the menu. 32. The method as claimed in claim 31, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the menu and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a direct menu item of the menu. 33. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is content and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to display a summary screen of the content. 34. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is content and the detected pressure intensity is higher than a first predetermined value, playing back the content. 35. 
The method as claimed in claim 34, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is a title of the content and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a control menu of the content. 36. The method as claimed in claim 35, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the content, the content is being played back, and the detected pressure intensity is higher than the second predetermined value, controlling the display unit to display a control menu regarding the playback of the content. 37. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is an icon and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to display detailed information regarding the icon and/or controlling the display unit to enlarge the icon. 38. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is an icon and the detected pressure intensity is higher than a first predetermined value, executing an operation corresponding to the icon. 39. 
The method as claimed in claim 38, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the icon and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a control menu regarding the icon. 40. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a character and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to enlarge the character. 41. The method as claimed in claim 40, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity that is lower than the first predetermined value is output for longer than a predetermined period of time without interruption, controlling the display unit to display one or more predetermined terms related to the character. 42. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a character and the detected pressure intensity is higher than a first predetermined value, controlling the display unit to display the character on an input window. 43. 
The method as claimed in claim 42, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a first predetermined term related to the character on the input window. 44. The method as claimed in claim 42, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a predetermined term, from among a plurality of predetermined terms related to the character, mapped according to the pressure intensity. 45. The method as claimed in claim 43, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity that is higher than the second predetermined value is output for longer than a predetermined period of time without interruption, controlling the display unit to display a second predetermined term related to the character, different from the first predetermined term, on the input window. 46. The method as claimed in claim 24, wherein the controlling of the display unit comprises controlling the display unit to display the data related to the object according to whether the displayed object is determined to be approached or touched and according to a type of the displayed object. 47. 
The method as claimed in claim 24, wherein the controlling of the display unit comprises controlling the display unit to display the data related to the object according to whether the displayed object is determined to be approached or touched and according to a length of time that the displayed object is approached or touched without interruption. 48. A computer-readable recording medium encoded with the method of claim 24 and implemented by at least one computer. 49. A method of implementing a user interface for an electronic apparatus comprising a display unit to display an object, the method comprising: controlling the display unit to display data related to the displayed object according to a pressure intensity of a touch on the displayed object. 50. A computer-readable recording medium encoded with the method of claim 49 and implemented by at least one computer.
An electronic apparatus and method of implementing a user interface according to a pressure intensity of a touch on the electronic apparatus, the method including detecting a position at which the touch is input, identifying the type of object displayed on the position, and detecting the pressure intensity. Accordingly, the user can manipulate electronic apparatuses with greater convenience.1. An electronic apparatus, comprising: a display unit comprising a touch screen to display an object and to receive a user input; a sensing unit to sense whether the displayed object is approached or touched; and a control unit to control the display unit to display data related to the displayed object according to whether the displayed object is approached or touched, as sensed by the sensing unit. 2. The apparatus as claimed in claim 1, wherein the sensing unit comprises: a position detector to detect a position of an approach or a touch on the touch screen; and a pressure detector to detect a pressure intensity of the approach or the touch on the touch screen. 3. The apparatus as claimed in claim 2, wherein the control unit recognizes that the position is approached if position data corresponding to the position is received from the position detector and a pressure intensity that is lower than a first predetermined value is received from the pressure detector. 4. The apparatus as claimed in claim 2, wherein the control unit recognizes that the position is touched if position data corresponding to the position is received from the position detector and a pressure intensity that is higher than a first predetermined value is received from the pressure detector. 5. 
The apparatus as claimed in claim 4, wherein the control unit recognizes that the position is pressed if the position data corresponding to the position is received from the position detector and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector. 6. The apparatus as claimed in claim 2, wherein if position data of the position of the approach or the touch on the touch screen received from the position detector corresponds to a position of the displayed object, the control unit determines a type of the displayed object and controls the display unit to display the data related to the displayed object according to the pressure intensity detected by the pressure detector and the determined type of the displayed object. 7. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a menu, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display a sub menu of the menu. 8. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a menu, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit selects the menu. 9. The apparatus as claimed in claim 8, wherein if the control unit determines that the type of the displayed object is the menu, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit displays a direct menu item of the menu. 10. 
The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is content, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display a summary screen of the content. 11. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is content, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit plays back the content. 12. The apparatus as claimed in claim 11, wherein if the control unit determines that the type of the displayed object is a title of the content, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit displays a control menu of the content. 13. The apparatus as claimed in claim 12, wherein if the control unit determines that the content is being played back at the position, and the pressure intensity that is higher than the second predetermined value is received from the pressure detector, the control unit displays a control menu regarding the playback of the content. 14. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is an icon, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display detailed information regarding the icon and/or controls the display unit to enlarge the icon. 15. 
The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is an icon, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit executes an operation regarding the icon. 16. The apparatus as claimed in claim 15, wherein if the control unit determines that the type of the displayed object is the icon, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit controls the display unit to display a control menu regarding the icon. 17. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a character, and a pressure intensity that is lower than a first predetermined value is received from the pressure detector, the control unit controls the display unit to enlarge the character. 18. The apparatus as claimed in claim 17, wherein if the control unit determines that the type of the displayed object is the character, and the pressure intensity that is lower than the first predetermined value is received from the pressure detector, without interruption, for longer than a predetermined period of time, the control unit controls the display unit to display one or more predetermined terms related to the character. 19. The apparatus as claimed in claim 6, wherein if the control unit determines that the type of the displayed object is a character, and a pressure intensity that is higher than a first predetermined value is received from the pressure detector, the control unit controls the display unit to display the character on an input window. 20. 
The apparatus as claimed in claim 19, wherein if the control unit determines that the type of the displayed object is the character, and a pressure intensity that is higher than a second predetermined value, which is higher than the first predetermined value, is received from the pressure detector, the control unit controls the display unit to display a first predetermined term related to the character on the input window. 21. The apparatus as claimed in claim 20, wherein if the control unit determines that the type of the displayed object is the character, and the pressure intensity that is higher than the second predetermined value is received from the pressure detector, without interruption, for longer than a predetermined period of time, the control unit controls the display unit to remove the first predetermined term from the input window and to display a second predetermined term related to the character, different from the first predetermined term, on the input window. 22. The apparatus as claimed in claim 1, wherein the control unit controls the display unit to display the data related to the displayed object according to whether the displayed object is approached or touched, as sensed by the sensing unit, and according to a type of the displayed object. 23. The apparatus as claimed in claim 1, wherein the control unit controls the display unit to display the data related to the displayed object according to whether the displayed object is approached or touched, as sensed by the sensing unit, and according to a length of time that the displayed object is approached or touched without interruption. 24. 
A method of implementing a user interface for an electronic apparatus comprising a sensing unit to sense whether an object displayed on a display unit is approached or touched, the method comprising: determining whether the displayed object is approached or touched according to a result of a sensing output by the sensing unit; and controlling the display unit to display data related to the displayed object according to whether the displayed object is determined to be approached or touched. 25. The method as claimed in claim 24, further comprising: detecting a position of an approach or a touch on the display unit; and detecting a pressure intensity of the approach or the touch on the display unit. 26. The method as claimed in claim 25, wherein the determining of whether the displayed object is approached or touched comprises: determining that the displayed object is approached if position data of the detected position of the approach or the touch on the display unit corresponds to a position of the displayed object, and the detected pressure intensity is lower than a first predetermined value. 27. The method as claimed in claim 25, wherein the determining of whether the displayed object is approached or touched comprises: determining that the displayed object is touched if position data of the detected position of the approach or the touch on the display unit corresponds to a position of the displayed object, and the detected pressure intensity is higher than a first predetermined value. 28. The method as claimed in claim 27, wherein the determining of whether the displayed object is approached or touched further comprises: determining that the displayed object is pressed if the position data of the detected position of the approach or the touch on the display unit corresponds to the position of the displayed object, and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value. 29. 
The method as claimed in claim 25, wherein: the determining of whether the displayed object is approached or touched comprises, if position data of the detected position of the approach or the touch on the display unit corresponds to a position of the displayed object, determining a type of the displayed object; and the controlling of the display unit comprises controlling the display unit to display the data related to the object according to the detected pressure intensity and the determined type of the displayed object. 30. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a menu and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to display a sub menu of the menu. 31. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a menu and the detected pressure intensity is higher than a first predetermined value, selecting the menu. 32. The method as claimed in claim 31, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the menu and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a direct menu item of the menu. 33. 
The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is content and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to display a summary screen of the content. 34. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is content and the detected pressure intensity is higher than a first predetermined value, playing back the content. 35. The method as claimed in claim 34, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is a title of the content and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a control menu of the content. 36. The method as claimed in claim 35, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the content, the content is being played back, and the detected pressure intensity is higher than the second predetermined value, controlling the display unit to display a control menu regarding the playback of the content. 37. 
The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is an icon and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to display detailed information regarding the icon and/or controlling the display unit to enlarge the icon. 38. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is an icon and the detected pressure intensity is higher than a first predetermined value, executing an operation corresponding to the icon. 39. The method as claimed in claim 38, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the icon and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a control menu regarding the icon. 40. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a character and the detected pressure intensity is lower than a first predetermined value, controlling the display unit to enlarge the character. 41. 
The method as claimed in claim 40, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity that is lower than the first predetermined value is output for longer than a predetermined period of time without interruption, controlling the display unit to display one or more predetermined terms related to the character. 42. The method as claimed in claim 29, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object comprises, if the determined type of the displayed object is a character and the detected pressure intensity is higher than a first predetermined value, controlling the display unit to display the character on an input window. 43. The method as claimed in claim 42, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a first predetermined term related to the character on the input window. 44. The method as claimed in claim 42, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity is higher than a second predetermined value, which is higher than the first predetermined value, controlling the display unit to display a predetermined term, from among a plurality of predetermined terms related to the character, mapped according to the pressure intensity. 45. 
The method as claimed in claim 43, wherein the controlling of the display unit according to the detected pressure intensity and the determined type of the displayed object further comprises, if the determined type of the displayed object is the character and the detected pressure intensity that is higher than the second predetermined value is output for longer than a predetermined period of time without interruption, controlling the display unit to display a second predetermined term related to the character, different from the first predetermined term, on the input window. 46. The method as claimed in claim 24, wherein the controlling of the display unit comprises controlling the display unit to display the data related to the object according to whether the displayed object is determined to be approached or touched and according to a type of the displayed object. 47. The method as claimed in claim 24, wherein the controlling of the display unit comprises controlling the display unit to display the data related to the object according to whether the displayed object is determined to be approached or touched and according to a length of time that the displayed object is approached or touched without interruption. 48. A computer-readable recording medium encoded with the method of claim 24 and implemented by at least one computer. 49. A method of implementing a user interface for an electronic apparatus comprising a display unit to display an object, the method comprising: controlling the display unit to display data related to the displayed object according to a pressure intensity of a touch on the displayed object. 50. A computer-readable recording medium encoded with the method of claim 49 and implemented by at least one computer.
2,600
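The touch-screen record above describes a three-level pressure classification (claims 3-5 and 26-28: approach below a first threshold, touch between the first and second thresholds, press above the second) and a dispatch on object type (claims 6-9). A minimal sketch of that logic, not part of the dataset; the threshold values and function names are illustrative assumptions:

```python
# Hypothetical sketch of the pressure classification in claims 3-5 / 26-28.
# Threshold values are illustrative, not from the patent.
FIRST_THRESHOLD = 0.2   # below this: approach (hover)
SECOND_THRESHOLD = 0.7  # above this: press

def classify_input(pressure_intensity: float) -> str:
    """Map a detected pressure intensity to an interaction state."""
    if pressure_intensity < FIRST_THRESHOLD:
        return "approach"
    if pressure_intensity < SECOND_THRESHOLD:
        return "touch"
    return "press"

def menu_action(pressure_intensity: float) -> str:
    """Claims 7-9: dispatch on the classified state for a 'menu' object."""
    state = classify_input(pressure_intensity)
    return {
        "approach": "display sub menu",    # claim 7
        "touch": "select menu",            # claim 8
        "press": "display direct menu",    # claim 9
    }[state]
```

Claim 6 generalizes the same dispatch to other object types (content, icon, character) with the same two thresholds.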
9,908
9,908
13,946,611
2,687
Systems, apparatus and methods are described for purposes of initiating a response to detection of an adverse operational condition involving a system including a drill rig and an inground tool. The response can be based on an uphole sensed parameter in combination with a downhole sensed parameter. The adverse operational condition can involve cross-bore detection, frac-out detection, excessive downhole pressure, a plugged jet indication and drill string key-holing detection. A communication system includes an inground communication link that allows bidirectional communication between a walkover detector and the drill rig via the inground tool. Monitoring of inground tool depth and/or lateral movement can be performed using techniques that approach integrated values. Bit force based auto-carving is described in the context of an automated procedure. Loss of locator to drill rig telemetry can trigger an automated switch to a different communication path within the system.
1. In a system for performing an inground operation at least which utilizes a drill string extending from a drill rig to an inground tool and a walkover locator at least for receiving a locating signal that is transmitted from the inground tool, a communication system comprising: an uphole transceiver located proximate to the drill rig; a portable transceiver forming part of the walkover locator and configured for receiving said locating signal to at least periodically update a depth reading of the inground tool; a telemetry link at least for unidirectional communication from the portable transceiver of the walkover detector to the uphole transceiver via a walkover locator telemetry signal for periodically transmitting at least said depth reading to the uphole transceiver; and a processor configured for monitoring the telemetry link to detect signal degradation of said walkover locator telemetry signal and, responsive to detecting such signal degradation, for switching the periodic transmission of the depth reading to a different communication path for reception by the uphole transceiver. 2. The communication system of claim 1, further comprising: said portable transceiver of the walkover detector further configured for transmitting an inground communications signal for reception by the inground tool to form an inground communication link; and a downhole transceiver supported by the inground tool for transmitting said locating signal and for bidirectional communication with said uphole transceiver, serving as a bidirectional communication link, by using the drill string as an electrical conductor to provide communication between the uphole transceiver and the downhole transceiver and wherein the different communication path includes the inground communication link and the bidirectional communication link. 3. 
The communication system of claim 2 wherein said processor is located at the drill rig and is further configured for generating a prompt responsive to detection of the signal degradation for transmission to the inground tool at least via the bidirectional communication link, and the inground transceiver is configured for relay of the prompt to the portable device to instruct the portable device to thereafter transfer at least the periodic depth reading to the inground tool on the inground communications link for subsequent transfer to the uphole transceiver via the bidirectional communications link. 4. The communication system of claim 3 wherein said telemetry link is bidirectional and said processor is further configured for transfer of the prompt to the portable device via the telemetry link such that a redundant pair of communication paths is available for the prompt. 5. The communication system of claim 2 wherein said processor is located at the drill rig and is further configured for generating a prompt that is indicative of the signal degradation for transmission to the inground tool and wherein said telemetry link is bidirectional and said processor is configured to transfer the prompt to the portable device at least via the telemetry link to instruct the portable device to thereafter transfer at least the periodic depth reading to the inground tool on the inground communication link for subsequent relay to the uphole transceiver via the bidirectional communications link. 6. The communication system of claim 2 wherein the processor is further configured for at least periodically generating a confirmation responsive to receiving data from the walkover locator on the telemetry link and for sending the confirmation for reception by the walkover locator. 7. 
The communication system of claim 6 wherein the walkover locator is configured to monitor for periodic reception of said confirmation and, responsive to at least one missed confirmation, switching to the different communication path. 8. The communication system of claim 7 wherein the processor is configured to send the confirmation to the walkover detector via the bidirectional communications link and the locating signal. 9. The communications system of claim 7 wherein said telemetry link is bidirectional and the processor is configured to send the confirmation to the walkover detector via the telemetry link. 10. The communication system of claim 1 wherein said processor is configured for monitoring at least one characteristic of the walkover telemetry signal for detecting the signal degradation. 11. The communication system of claim 10 wherein the characteristic is at least one of a signal to noise ratio, a bit error rate and a packet loss rate. 12. In a system for performing an inground operation at least which utilizes a drill string extending from a drill rig to an inground tool and a walkover locator at least for receiving a locating signal that is transmitted from the inground tool, a method comprising: arranging an uphole transceiver proximate to the drill rig; configuring a portable transceiver to form part of the walkover locator for receiving said locating signal to at least periodically update a depth reading of the inground tool; forming a telemetry link at least for unidirectional communication from the portable transceiver of the walkover detector to the uphole transceiver via a walkover locator telemetry signal for periodically transmitting at least said depth reading to the uphole transceiver; automatically monitoring the telemetry link to detect signal degradation of said walkover locator telemetry signal; and responsive to detecting the signal degradation, switching the periodic transmission of the depth reading to a different communication path 
for reception by the uphole transceiver.
Systems, apparatus and methods are described for purposes of initiating a response to detection of an adverse operational condition involving a system including a drill rig and an inground tool. The response can be based on an uphole sensed parameter in combination with a downhole sensed parameter. The adverse operational condition can involve cross-bore detection, frac-out detection, excessive downhole pressure, a plugged jet indication and drill string key-holing detection. A communication system includes an inground communication link that allows bidirectional communication between a walkover detector and the drill rig via the inground tool. Monitoring of inground tool depth and/or lateral movement can be performed using techniques that approach integrated values. Bit force based auto-carving is described in the context of an automated procedure. Loss of locator to drill rig telemetry can trigger an automated switch to a different communication path within the system.1. In a system for performing an inground operation at least which utilizes a drill string extending from a drill rig to an inground tool and a walkover locator at least for receiving a locating signal that is transmitted from the inground tool, a communication system comprising: an uphole transceiver located proximate to the drill rig; a portable transceiver forming part of the walkover locator and configured for receiving said locating signal to at least periodically update a depth reading of the inground tool; a telemetry link at least for unidirectional communication from the portable transceiver of the walkover detector to the uphole transceiver via a walkover locator telemetry signal for periodically transmitting at least said depth reading to the uphole transceiver; and a processor configured for monitoring the telemetry link to detect signal degradation of said walkover locator telemetry signal and, responsive to detecting such signal degradation, for switching the periodic transmission of 
the depth reading to a different communication path for reception by the uphole transceiver. 2. The communication system of claim 1, further comprising: said portable transceiver of the walkover detector further configured for transmitting an inground communications signal for reception by the inground tool to form an inground communication link; and a downhole transceiver supported by the inground tool for transmitting said locating signal and for bidirectional communication with said uphole transceiver, serving as a bidirectional communication link, by using the drill string as an electrical conductor to provide communication between the uphole transceiver and the downhole transceiver and wherein the different communication path includes the inground communication link and the bidirectional communication link. 3. The communication system of claim 2 wherein said processor is located at the drill rig and is further configured for generating a prompt responsive to detection of the signal degradation for transmission to the inground tool at least via the bidirectional communication link, and the inground transceiver is configured for relay of the prompt to the portable device to instruct the portable device to thereafter transfer at least the periodic depth reading to the inground tool on the inground communications link for subsequent transfer to the uphole transceiver via the bidirectional communications link. 4. The communication system of claim 3 wherein said telemetry link is bidirectional and said processor is further configured for transfer of the prompt to the portable device via the telemetry link such that a redundant pair of communication paths is available for the prompt. 5. 
The communication system of claim 2 wherein said processor is located at the drill rig and is further configured for generating a prompt that is indicative of the signal degradation for transmission to the inground tool and wherein said telemetry link is bidirectional and said processor is configured to transfer the prompt to the portable device at least via the telemetry link to instruct the portable device to thereafter transfer at least the periodic depth reading to the inground tool on the inground communication link for subsequent relay to the uphole transceiver via the bidirectional communications link. 6. The communication system of claim 2 wherein the processor is further configured for at least periodically generating a confirmation responsive to receiving data from the walkover locator on the telemetry link and for sending the confirmation for reception by the walkover locator. 7. The communication system of claim 6 wherein the walkover locator is configured to monitor for periodic reception of said confirmation and, responsive to at least one missed confirmation, switching to the different communication path. 8. The communication system of claim 7 wherein the processor is configured to send the confirmation to the walkover detector via the bidirectional communications link and the locating signal. 9. The communications system of claim 7 wherein said telemetry link is bidirectional and the processor is configured to send the confirmation to the walkover detector via the telemetry link. 10. The communication system of claim 1 wherein said processor is configured for monitoring at least one characteristic of the walkover telemetry signal for detecting the signal degradation. 11. The communication system of claim 10 wherein the characteristic is at least one of a signal to noise ratio, a bit error rate and a packet loss rate. 12. 
In a system for performing an inground operation at least which utilizes a drill string extending from a drill rig to an inground tool and a walkover locator at least for receiving a locating signal that is transmitted from the inground tool, a method comprising: arranging an uphole transceiver proximate to the drill rig; configuring a portable transceiver to form part of the walkover locator for receiving said locating signal to at least periodically update a depth reading of the inground tool; forming a telemetry link at least for unidirectional communication from the portable transceiver of the walkover detector to the uphole transceiver via a walkover locator telemetry signal for periodically transmitting at least said depth reading to the uphole transceiver; automatically monitoring the telemetry link to detect signal degradation of said walkover locator telemetry signal; and responsive to detecting the signal degradation, switching the periodic transmission of the depth reading to a different communication path for reception by the uphole transceiver.
2,600
9,909
9,909
14,629,952
2,684
An electronic key for a merchandise security device is provided. The electronic key may include electronic circuitry for providing electrical power to a lock mechanism for locking and unlocking the lock mechanism. The electronic key may also include an audio component configured to indicate a status of the lock mechanism.
1. A security system for protecting an item of merchandise from theft, comprising: an electronic key; and a plurality of merchandise security devices each comprising a lock mechanism that is configured to be locked or unlocked in response to receiving electrical power transferred from the electronic key to the lock mechanism, wherein the electronic key is incapable of unlocking a second lock mechanism prior to locking a first lock mechanism that has been successfully unlocked. 2. The security system according to claim 1, wherein the electrical power is configured to be transferred from the electronic key via inductive transfer. 3. The security system according to claim 1, wherein the electronic key comprises an audio component. 4. The security system according to claim 3, wherein the audio component is configured to emit an audible signal using a piezo. 5. The security system according to claim 3, wherein the audio component is configured to emit a first audible signal indicative of successfully changing a state of the first lock mechanism and a second audible signal that is different than the first audible signal and that is indicative of unsuccessfully changing a state of the second lock mechanism. 6. The security system according to claim 3, wherein the audio component is configured to emit an audible signal in response to the lock mechanism being locked or unlocked. 7. The security system according to claim 3, wherein the audio component is configured to indicate a status of the lock mechanism based on the change in state thereof. 8. 
The security system according to claim 7, wherein the audio component is configured to continuously or intermittently emit an audible signal while the lock mechanism is in an unlocked state. 10. The security system according to claim 7, wherein the audio component is configured to emit: (i) an initial audio indication in response to the lock mechanism being unlocked, (ii) a first audio indication while the lock mechanism is in an unlocked state, and (iii) a second audio indication in response to unsuccessfully changing the state of the lock mechanism, and wherein each of the initial, first, and second audio indications are different than one another. 11. The security system according to claim 1, wherein the electronic key comprises electronic circuitry configured to communicate a communications protocol signal between the electronic key and the lock. 12. The security system according to claim 11, wherein the communications protocol signal comprises a security code. 13. The security system according to claim 1, wherein the electronic key is configured to receive a signal transmitted from the lock mechanism indicating a change in state thereof. 14. A method for protecting an item of merchandise susceptible to theft, comprising: transferring electrical power from an electronic key to a first lock to thereby lock or unlock the first lock, wherein the electronic key is incapable of unlocking a second lock prior to locking the first lock that has been successfully unlocked. 15. The method according to claim 14, further comprising emitting an audible signal with the electronic key in response to a change in state of the first lock. 16. The method according to claim 15, wherein emitting comprises emitting a first audible signal indicative of successfully changing a state of the first lock and emitting a second audible signal that is different than the first audible signal and that is indicative of unsuccessfully changing a state of the second lock. 17. 
The method according to claim 15, wherein emitting comprises continuously or intermittently emitting an audible signal while the first lock is in an unlocked state. 18. The method according to claim 14, further comprising communicating a communications protocol signal between the electronic key and the first lock. 19. The method according to claim 14, wherein transferring comprises inductively transferring electrical power. 20. The method according to claim 14, further comprising receiving a signal at the electronic key transmitted from the first lock indicating a change in state thereof.
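The key's central constraint in claims 1 and 14 (the electronic key cannot unlock a second lock before re-locking a first lock it has successfully unlocked) amounts to tracking a single pending unlocked lock. A minimal sketch of that constraint follows; the class and attribute names are illustrative and not from the patent.

```python
class ElectronicKey:
    """Sketch of the claimed sequencing rule: the key refuses to unlock
    a second lock while a previously unlocked lock remains unlocked.
    Names (`ElectronicKey`, `pending_lock`) are assumed for illustration."""

    def __init__(self):
        self.pending_lock = None  # id of a lock left unlocked, if any

    def unlock(self, lock_id):
        """Attempt to unlock; returns True on success, False if refused."""
        if self.pending_lock is not None and self.pending_lock != lock_id:
            return False  # a previously unlocked lock must be locked first
        self.pending_lock = lock_id
        return True

    def lock(self, lock_id):
        """Lock a device; clears the pending state if it was this lock."""
        if self.pending_lock == lock_id:
            self.pending_lock = None
        return True
```

A real device would also transfer power and emit the claimed audio indications on each state change; only the sequencing rule is modeled here.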
2,600
9,910
9,910
14,443,940
2,689
A downhole electromagnetic telemetry system and method whereby electrically insulating material is placed above and/or below an electrical current launching device or receiver along a well string in order to extend the range of the telemetry system, increase the telemetry rate, and/or reduce downhole power requirements.
1. A method for utilizing an electromagnetic telemetry system in a downhole well, the method comprising: providing a well string comprising one or more tubulars attached to a bottom hole assembly, the bottom hole assembly comprising at least one of an electrical current launching device or a receiver; applying electrically insulating material around one or more portions of the well string; deploying the bottom hole assembly into the well; conducting an electromagnetic telemetry operation using the bottom hole assembly; and utilizing the electrically insulating material to reduce at least one of: short circuits from the current launching device to casing; or current leakage from the well string into the casing or formation along the well. 2. A method as defined in claim 1, further comprising applying the electrically insulating material around one or more portions of the well string immediately above or below the current launching device or receiver. 3. A method as defined in claim 1, wherein applying the electrically insulating material around the one or more portions of the well string comprises wrapping the one or more portions of the well string with one or more sheets of electrically insulating material. 4. A method as defined in claim 1, wherein applying the electrically insulating material around the one or more portions of the well string comprises positioning an insulation sleeve around the one or more portions of the well string, the insulation sleeve being comprised of electrically insulating swellable material. 5. A method as defined in claim 1, wherein applying the electrically insulating material around the one or more portions of the well string comprises applying at least one of: an electrically insulating swellable material; an electrically insulating injection-molded coating; an electrically insulating spray coating; or an electrically insulating anodized layer. 6. 
A method as defined in claim 1, wherein applying the electrically insulating material around the one or more portions of the well string comprises: determining a length of an electrically conductive portion of the formation along the well; and applying the electrically insulating material based upon the determined length. 7. An electromagnetic telemetry system for use in a downhole well, the system comprising: a well string comprising one or more tubulars attached to a bottom hole assembly, the bottom hole assembly comprising at least one of an electrical current launching device or a receiver; and electrically insulating material positioned around one or more portions of the well string to reduce at least one of: short circuits from the current launching device to casing; or current leakage from the well string into the casing or formation along the well. 8. A system as defined in claim 7, wherein the electrically insulating material is positioned immediately above or below the current launching device or receiver. 9. A system as defined in claim 7, wherein the electrical current launching device is a gap sub assembly or a toroid. 10. A system as defined in claim 7, wherein the receiver is a gap sub assembly or a toroid. 11. A system as defined in claim 7, wherein the electrically insulating material is one or more sheets of electrically insulating material. 12. A system as defined in claim 7, wherein the electrically insulating material is an insulation sleeve. 13. A system as defined in claim 7, wherein the electrically insulating material is at least one of: an electrically insulating swellable material; an electrically insulating injection-molded coating; an electrically insulating spray coating; or an electrically insulating anodized layer. 14. 
A method for utilizing an electromagnetic telemetry system in a downhole well, the method comprising: applying electrically insulating material around one or more portions of a well string comprising at least one of an electrical current launching device or a receiver; deploying the well string into the well; and utilizing the electrically insulating material to reduce at least one of: short circuits from the current launching device to casing; or current leakage from the well string into the casing or formation along the well. 15. A method as defined in claim 14, further comprising applying the electrically insulating material around one or more portions of the well string immediately above or below the current launching device or receiver. 16. A method as defined in claim 14, wherein applying the electrically insulating material around the one or more portions of the well string comprises applying at least one of: an electrically insulating swellable material; an electrically insulating injection-molded coating; an electrically insulating spray coating; or an electrically insulating anodized layer. 17. A method as defined in claim 14, wherein applying the electrically insulating material around the one or more portions of the well string comprises: determining a length of an electrically conductive portion of the formation along the well; and applying the electrically insulating material based upon the determined length.
2,600
9,911
9,911
14,574,041
2,616
Techniques are disclosed relating to scheduling tasks for graphics processing. In one embodiment, a graphics unit is configured to render a frame of graphics data using a plurality of pass groups and the frame of graphics data includes a plurality of frame portions. In this embodiment, the graphics unit includes scheduling circuitry configured to receive a plurality of tasks, maintain pass group information for each of the plurality of tasks, and maintain relative age information for the plurality of frame portions. In this embodiment, the scheduling circuitry is configured to select a task for execution based on the pass group information and the age information. In some embodiments, the scheduling circuitry is configured to select tasks from an oldest frame portion and current pass group before selecting other tasks. This scheduling approach may result in efficient execution of various different types of graphics workloads.
1. An apparatus, comprising: a graphics unit configured to render a frame of graphics data using a plurality of pass groups, wherein the frame includes a plurality of frame portions, and wherein the graphics unit comprises: scheduling circuitry configured to: receive a plurality of tasks; maintain, for each of the plurality of tasks, information that identifies one of the plurality of frame portions and pass group information that identifies one of the plurality of pass groups; maintain age information usable to determine relative ages of the plurality of frame portions; and select, for execution by the graphics unit, a task from among the plurality of tasks based on the age information and the pass group information. 2. The apparatus of claim 1, wherein the scheduling circuitry is configured to select tasks according to the following greatest-to-least priority order: first, tasks associated with an oldest frame portion and a current pass group of the plurality of pass groups; second, tasks associated with a second oldest frame portion and the current pass group; third, tasks associated with an oldest frame portion and a different pass group than the current pass group; and fourth, tasks associated with a second oldest frame portion and a different pass group than the current pass group. 3. The apparatus of claim 2, wherein the scheduling circuitry is configured to select tasks according to a round-robin among the frame portions if no task is found using the priority order. 4. The apparatus of claim 1, wherein an age of one of the plurality of frame portions is determined by an age of an oldest task associated with that frame portion; and wherein the scheduling circuitry is configured to maintain the age information using an age table in which an entry is allocated for a frame portion when a first task is received for that frame portion. 5. 
The apparatus of claim 1, wherein the scheduling circuitry is configured to schedule all tasks for an oldest frame portion and a current pass group before scheduling other tasks. 6. The apparatus of claim 1, wherein the frame portions are rectangular tiles of pixels. 7. The apparatus of claim 1, further comprising a data sequencer unit configured to receive pixel data from multiple raster pipelines and send the pixel data to the scheduling circuitry. 8. The apparatus of claim 1, wherein the scheduling circuitry is further configured to select a task from among the plurality of tasks based on the information that identifies one of the plurality of frame portions for a task. 9. A method, comprising: scheduling circuitry receiving a plurality of tasks associated with rendering a frame of graphics data using a plurality of pass groups; the scheduling circuitry maintaining relative ages of a plurality of frame portions included in the frame; the scheduling circuitry maintaining information indicating a frame portion and a pass group associated with each of the plurality of tasks; and the scheduling circuitry selecting a task based on the relative ages of the frame portions, the pass groups associated with the plurality of tasks, and the frame portions associated with the plurality of tasks. 10. The method of claim 9, wherein the selecting includes selecting all tasks for an oldest frame portion and a current pass group before selecting other tasks. 11. 
The method of claim 9, wherein the selecting is performed according to the following greatest-to-least priority order: first, tasks associated with an oldest frame portion and a current pass group of the plurality of pass groups; second, tasks associated with a second oldest frame portion and the current pass group; third, tasks associated with an oldest frame portion and a different pass group than the current pass group; and fourth, tasks associated with a second oldest frame portion and a different pass group than the current pass group. 12. The method of claim 11, wherein the scheduling circuitry is configured to select tasks according to a round-robin among the frame portions if no task is selected based on the priority order. 13. The method of claim 9, wherein an age of each of the plurality of frame portions is determined by an age of an oldest task associated with that frame portion; and wherein the scheduling circuitry is configured to maintain the relative ages based on receipt of a first task for a given frame portion. 14. The method of claim 9, wherein the frame portions are rectangular tiles of pixels. 15. The method of claim 9, wherein the selecting is performed using at most three clock cycles. 16. An apparatus, comprising: a graphics unit configured to render a frame of graphics data using a plurality of pass groups, wherein the frame includes a plurality of tiles; scheduling circuitry configured to: receive a plurality of tasks, wherein each of the plurality of tasks corresponds to one of the plurality of tiles and one of the plurality of pass groups; maintain age information usable to determine relative ages of the plurality of tiles and pass group information indicating the one of the plurality of pass groups corresponding to each of the plurality of tasks; and select, for execution by the graphics unit, a task from among the plurality of tasks based on the age information and the pass group information. 17. 
The apparatus of claim 16, wherein the scheduling circuitry is configured to schedule all available tasks for an oldest frame portion and a current pass group before scheduling other tasks. 18. The apparatus of claim 16, wherein the scheduling circuitry is configured to select tasks according to the following greatest-to-least priority order: first, tasks associated with an oldest frame portion and a current pass group of the plurality of pass groups; second, tasks associated with a second oldest frame portion and the current pass group; third, tasks associated with an oldest frame portion and a different pass group than the current pass group; and fourth, tasks associated with a second oldest frame portion and a different pass group than the current pass group. 19. The apparatus of claim 16, further comprising: a data sequencer unit configured to receive pixel data from multiple raster pipelines, assign the pixel data to tasks, and send the tasks to the scheduling circuitry. 20. The apparatus of claim 16, further comprising: a plurality of execution units configured to execute a selected task in parallel for different pixels of the selected task.
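The greatest-to-least priority order recited in claims 2, 11, and 18, together with the per-portion age rule of claims 4 and 13 (a frame portion's age is the age of its oldest task) and the round-robin fallback of claims 3 and 12, can be sketched as a selection function. This is a behavioral sketch under assumed data structures, not the hardware scheduling circuitry; the fallback is simplified to picking the globally oldest task.

```python
from dataclasses import dataclass


@dataclass
class Task:
    frame_portion: int  # tile/portion id (assumed representation)
    pass_group: int
    age: int            # arrival order; lower = older


def select_task(tasks, current_pass_group):
    """Pick a task using the claimed priority order:
    1) oldest portion + current pass group,
    2) second-oldest portion + current pass group,
    3) oldest portion + a different pass group,
    4) second-oldest portion + a different pass group,
    falling back to the oldest remaining task otherwise."""
    if not tasks:
        return None
    # A portion's age is the age of its oldest task (claims 4 and 13).
    portion_age = {}
    for t in tasks:
        portion_age[t.frame_portion] = min(
            portion_age.get(t.frame_portion, t.age), t.age)
    ranked = sorted(portion_age, key=portion_age.get)
    oldest = ranked[0]
    second = ranked[1] if len(ranked) > 1 else None
    for portion, in_current in ((oldest, True), (second, True),
                                (oldest, False), (second, False)):
        if portion is None:
            continue
        candidates = [t for t in tasks
                      if t.frame_portion == portion
                      and (t.pass_group == current_pass_group) == in_current]
        if candidates:
            return min(candidates, key=lambda t: t.age)
    # Fallback for remaining portions (round-robin in claims 3 and 12;
    # simplified here to the globally oldest task).
    return min(tasks, key=lambda t: t.age)
```

Note how the pass group can outrank raw age: a newer task in the oldest portion and the current pass group is selected before an older task from a different pass group.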
Techniques are disclosed relating to scheduling tasks for graphics processing. In one embodiment, a graphics unit is configured to render a frame of graphics data using a plurality of pass groups and the frame of graphics data includes a plurality of frame portions. In this embodiment, the graphics unit includes scheduling circuitry configured to receive a plurality of tasks, maintain pass group information for each of the plurality of tasks, and maintain relative age information for the plurality of frame portions. In this embodiment, the scheduling circuitry is configured to select a task for execution based on the pass group information and the age information. In some embodiments, the scheduling circuitry is configured to select tasks from an oldest frame portion and current pass group before selecting other tasks. This scheduling approach may result in efficient execution of various different types of graphics workloads.1. An apparatus, comprising: a graphics unit configured to render a frame of graphics data using a plurality of pass groups, wherein the frame includes a plurality of frame portions, and wherein the graphics unit comprises: scheduling circuitry configured to: receive a plurality of tasks; maintain, for each of the plurality of tasks, information that identifies one of the plurality of frame portions and pass group information that identifies one of the plurality of pass groups; maintain age information usable to determine relative ages of the plurality of frame portions; and select, for execution by the graphics unit, a task from among the plurality of tasks based on the age information and the pass group information. 2. 
The apparatus of claim 1, wherein the scheduling circuitry is configured to select tasks according to the following greatest-to-least priority order: first, tasks associated with an oldest frame portion and a current pass group of the plurality of pass groups; second, tasks associated with a second oldest frame portion and the current pass group; third, tasks associated with an oldest frame portion and a different pass group than the current pass group; and fourth, tasks associated with a second oldest frame portion and a different pass group than the current pass group. 3. The apparatus of claim 2, wherein the scheduling circuitry is configured to select tasks according to a round-robin among the frame portions if no task is found using the priority order. 4. The apparatus of claim 1, wherein an age of one of the plurality of frame portions is determined by an age of an oldest task associated with that frame portion; and wherein the scheduling circuitry is configured to maintain the age information using an age table in which an entry is allocated for a frame portion when a first task is received for that frame portion. 5. The apparatus of claim 1, wherein the scheduling circuitry is configured to schedule all tasks for an oldest frame portion and a current pass group before scheduling other tasks. 6. The apparatus of claim 1, wherein the frame portions are rectangular tiles of pixels. 7. The apparatus of claim 1, further comprising a data sequencer unit configured to receive pixel data from multiple raster pipelines and send the pixel data to the scheduling circuitry. 8. The apparatus of claim 1, wherein the scheduling circuitry is further configured to select a task from among the plurality of tasks based on the information that identifies one of the plurality of frame portions for a task. 9. 
A method, comprising: scheduling circuitry receiving a plurality of tasks associated with rendering a frame of graphics data using a plurality of pass groups; the scheduling circuitry maintaining relative ages of a plurality of frame portions included in the frame; the scheduling circuitry maintaining information indicating a frame portion and a pass group associated with each of the plurality of tasks; and the scheduling circuitry selecting a task based on the relative ages of the frame portions, the pass groups associated with the plurality of tasks, and the frame portions associated with the plurality of tasks. 10. The method of claim 9, wherein the selecting includes selecting all tasks for an oldest frame portion and a current pass group before selecting other tasks. 11. The method of claim 9, wherein the selecting is performed according to the following greatest-to-least priority order: first, tasks associated with an oldest frame portion and a current pass group of the plurality of pass groups; second, tasks associated with a second oldest frame portion and the current pass group; third, tasks associated with an oldest frame portion and a different pass group than the current pass group; and fourth, tasks associated with a second oldest frame portion and a different pass group than the current pass group. 12. The method of claim 11, wherein the scheduling circuitry is configured to select tasks according to a round-robin among the frame portions if no task is selected based on the priority order. 13. The method of claim 9, wherein an age of each of the plurality of frame portions is determined by an age of an oldest task associated with that frame portion; and wherein the scheduling circuitry is configured to maintain the relative ages based on receipt of a first task for a given frame portion. 14. The method of claim 9, wherein the frame portions are rectangular tiles of pixels. 15. 
The method of claim 9, wherein the selecting is performed using at most three clock cycles. 16. An apparatus, comprising: a graphics unit configured to render a frame of graphics data using a plurality of pass groups, wherein the frame includes a plurality of tiles; scheduling circuitry configured to: receive a plurality of tasks, wherein each of the plurality of tasks corresponds to one of the plurality of tiles and one of the plurality of pass groups; maintain age information usable to determine relative ages of the plurality of tiles and pass group information indicating the one of the plurality of pass groups corresponding to each of the plurality of tasks; and select, for execution by the graphics unit, a task from among the plurality of tasks based on the age information and the pass group information. 17. The apparatus of claim 16, wherein the scheduling circuitry is configured to schedule all available tasks for an oldest frame portion and a current pass group before scheduling other tasks. 18. The apparatus of claim 16, wherein the scheduling circuitry is configured to select tasks according to the following greatest-to-least priority order: first, tasks associated with an oldest frame portion and a current pass group of the plurality of pass groups; second, tasks associated with a second oldest frame portion and the current pass group; third, tasks associated with an oldest frame portion and a different pass group than the current pass group; and fourth, tasks associated with a second oldest frame portion and a different pass group than the current pass group. 19. The apparatus of claim 16, further comprising: a data sequencer unit configured to receive pixel data from multiple raster pipelines, assign the pixel data to tasks, and send the tasks to the scheduling circuitry. 20. The apparatus of claim 16, further comprising: a plurality of execution units configured to execute a selected task in parallel for different pixels of the selected task.
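The four-tier, greatest-to-least priority order recited in the scheduling claims above can be sketched in Python. This is an illustrative model only, not the patented circuitry: the `Task` type, the `ages` table (larger value = older portion), and the function name are assumptions, and the round-robin fallback of claim 3 is simplified.

```python
from dataclasses import dataclass

@dataclass
class Task:
    frame_portion: int  # tile / frame-portion index
    pass_group: int     # pass group this task belongs to

def select_task(tasks, ages, current_pass_group):
    """Greatest-to-least priority from the claims:
    1) oldest portion, current pass group
    2) second-oldest portion, current pass group
    3) oldest portion, other pass group
    4) second-oldest portion, other pass group
    """
    # Rank the portions that currently have tasks, oldest first
    # (ages maps frame portion -> age, larger = older; an assumption).
    by_age = sorted({t.frame_portion for t in tasks},
                    key=lambda p: ages[p], reverse=True)
    oldest = by_age[0] if by_age else None
    second = by_age[1] if len(by_age) > 1 else None
    for portion, in_current in ((oldest, True), (second, True),
                                (oldest, False), (second, False)):
        if portion is None:
            continue
        for task in tasks:
            if (task.frame_portion == portion
                    and (task.pass_group == current_pass_group) == in_current):
                return task
    # Claim 3 falls back to a round-robin among frame portions; simplified here.
    return tasks[0] if tasks else None
```

Note the ordering choice the claims make explicit: an old tile in the current pass group beats an even older tile in a different pass group only at tier three, so work that keeps the current pass group flowing drains first.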
2,600
9,912
9,912
14,283,254
2,659
Described herein are various technologies pertaining to performing an operation relative to tabular data based upon voice input. An ASR system includes a language model that is customized based upon content of the tabular data. The ASR system receives a voice signal that is representative of speech of a user. The ASR system creates a transcription of the voice signal based upon the ASR being customized with the content of the tabular data. The operation relative to the tabular data is performed based upon the transcription of the voice signal.
1. A computing device comprising: a processor; and a memory that comprises an application that is executed by the processor, the application comprises: tabular data loaded into the application, the tabular data comprises a text string; an executor component that executes a computing operation relative to the tabular data, the executor component comprises: an automatic speech recognition (ASR) system that is customized based upon the text string being included in the tabular data, the ASR system receives a voice signal that is representative of voice input to the application, the voice input includes a reference to the text string, the ASR system generates a transcription of the voice signal; and a table manipulation system that is in communication with the ASR system, the table manipulation system receives the transcription of the voice signal from the ASR system and performs the computing operation relative to the tabular data based upon the transcription of the voice signal. 2. The computing device of claim 1 being a client computing device, the client computing device being one of a tablet computing device or a mobile telephone. 3. The computing device of claim 1 being a server computing device, the ASR system of the application receives the voice signal from a client computing device in network communication with the server computing device. 4. The computing device of claim 1, the application being a spreadsheet application, the tabular data loaded into a spreadsheet of the spreadsheet application. 5. 
The computing device of claim 1, the ASR system comprises an acoustic model that models phones in a language of the spoken utterance, a lexicon model that defines probabilities over elements that include respective sequences of phones, and a language model that defines probabilities over sequences of elements, the language model updated to include at least one of the text string or a synonym of the text string responsive to the text string being loaded into the application. 6. The computing device of claim 1, wherein the ASR system is customized to constrain potential interpretations of the voice input. 7. The computing device of claim 1, wherein the voice input is a natural language query, and wherein the table manipulation system performs the computing operation based upon the natural language query. 8. The computing device of claim 1, the computing operation being one of a sort of the tabular data, a filter of the tabular data, a mathematical operation performed over entries in the tabular data, a visualization of the tabular data, or an augmentation of the tabular data. 9. The computing device of claim 8, the computing operation is the augmentation of the tabular data, and wherein the table manipulation system generates a query based upon the transcription, the search system executes a search over a network accessible index of tables based upon the query and augments the tabular data loaded into the application with additional tabular data included in the index of tables. 10. The computing device of claim 1, the table manipulation system generates a program based upon the transcription, the table manipulation system executes the program over at least a portion of the tabular data loaded into the application to perform the computing operation. 11. 
A method comprising: receiving tabular data that has been loaded into a computer-executable application; responsive to receiving the tabular data, updating a language model of an automatic speech recognition (ASR) system based upon the tabular data; receiving a voice signal that is indicative of an operation to be performed with respect to the tabular data; decoding the voice signal based upon the updating of the language model of the ASR system; and performing the operation based upon the decoding of the voice signal. 12. The method of claim 11 executed by a client computing device, wherein the voice signal represents a query, and wherein the operation comprises: augmenting the tabular data with additional data retrieved from a data source that is accessible to the client computing device by way of a network, the augmenting based upon the query; and performing a subsequent operation over the tabular data and the additional data based upon the query. 13. The method of claim 11, the computer-executable application being a web browser. 14. The method of claim 11, the tabular data comprises an entry, the entry comprises a character sequence, and wherein updating the language model comprises including the character sequence in the language model. 15. The method of claim 14, the entry being one of a column header or a row header. 16. The method of claim 14, the language model comprises a partially completed phrase that is representative of historically observed commands, wherein updating the language model comprises including the character sequence in the language model to complete the partially completed phrase. 17. 
The method of claim 16, wherein updating the language model further comprises: assigning a type to the character sequence based upon the tabular data, the type indicating that the character sequence is representative of one of people, places, or things in the tabular data; and assigning a probability value to the completed phrase based upon the type assigned to the character sequence. 18. The method of claim 11, wherein receiving the voice signal comprises: receiving an indication that a graphical button has been selected; and monitoring output of a microphone responsive to receiving the indication. 19. The method of claim 11, wherein the voice signal represents a query, the performing of the operation causes updated tabular data to be generated, the method further comprising: subsequent to performing the operation based upon the query, receiving a second voice signal that is representative of a second query; decoding the second voice signal based upon the updating of the language model of the ASR system; and performing a second operation relative to the updated tabular data, the second operation performed based upon the query and the second query. 20. 
A computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform acts comprising: receiving tabular data loaded into a spreadsheet application, the tabular data comprises an entry, the entry comprises a character sequence; responsive to receiving the tabular data, updating a language model of an automatic speech recognition (ASR) system to include the character sequence and a known synonym of the character sequence; receiving a voice signal that is representative of a spoken utterance, the spoken utterance includes the character sequence or the known synonym of the character sequence; decoding the voice signal, wherein decoding the voice signal comprises identifying that the spoken utterance includes the character sequence or the known synonym of the character sequence; and performing an operation relative to the tabular data based upon the decoding of the voice signal.
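The language-model customization described in claims 11 and 14-16 can be illustrated with a toy grammar expansion. The template strings, the synonym table, and the function name below are assumptions for illustration; a production ASR system would update n-gram or neural language-model probabilities rather than enumerate complete phrases, and claim 17's type-based probability assignment is omitted.

```python
# Hypothetical synonym table standing in for the "known synonym" of the claims.
SYNONYMS = {"qty": ["quantity"]}

def update_language_model(templates, headers):
    """Complete partially specified command phrases (e.g. "sort by {}") with
    column headers from the loaded tabular data and their known synonyms,
    mirroring the claim 16 notion of completing historically observed
    command phrases."""
    phrases = []
    for header in headers:
        for term in [header] + SYNONYMS.get(header.lower(), []):
            for template in templates:
                phrases.append(template.format(term))
    return phrases

# Loading a table with "price" and "qty" columns makes those words (and the
# synonym "quantity") recognizable as completions of the command templates.
commands = update_language_model(["sort by {}", "filter on {}"], ["price", "qty"])
```

The point of the customization is the same as in the claims: after the table is loaded, a spoken utterance containing a header (or its synonym) decodes to a phrase the table manipulation system can act on.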
Described herein are various technologies pertaining to performing an operation relative to tabular data based upon voice input. An ASR system includes a language model that is customized based upon content of the tabular data. The ASR system receives a voice signal that is representative of speech of a user. The ASR system creates a transcription of the voice signal based upon the ASR being customized with the content of the tabular data. The operation relative to the tabular data is performed based upon the transcription of the voice signal.1. A computing device comprising: a processor; and a memory that comprises an application that is executed by the processor, the application comprises: tabular data loaded into the application, the tabular data comprises a text string; an executor component that executes a computing operation relative to the tabular data, the executor component comprises: an automatic speech recognition (ASR) system that is customized based upon the text string being included in the tabular data, the ASR system receives a voice signal that is representative of voice input to the application, the voice input includes a reference to the text string, the ASR system generates a transcription of the voice signal; and a table manipulation system that is in communication with the ASR system, the table manipulation system receives the transcription of the voice signal from the ASR system and performs the computing operation relative to the tabular data based upon the transcription of the voice signal. 2. The computing device of claim 1 being a client computing device, the client computing device being one of a tablet computing device or a mobile telephone. 3. The computing device of claim 1 being a server computing device, the ASR system of the application receives the voice signal from a client computing device in network communication with the server computing device. 4. 
The computing device of claim 1, the application being a spreadsheet application, the tabular data loaded into a spreadsheet of the spreadsheet application. 5. The computing device of claim 1, the ASR system comprises an acoustic model that models phones in a language of the spoken utterance, a lexicon model that defines probabilities over elements that include respective sequences of phones, and a language model that defines probabilities over sequences of elements, the language model updated to include at least one of the text string or a synonym of the text string responsive to the text string being loaded into the application. 6. The computing device of claim 1, wherein the ASR system is customized to constrain potential interpretations of the voice input. 7. The computing device of claim 1, wherein the voice input is a natural language query, and wherein the table manipulation system performs the computing operation based upon the natural language query. 8. The computing device of claim 1, the computing operation being one of a sort of the tabular data, a filter of the tabular data, a mathematical operation performed over entries in the tabular data, a visualization of the tabular data, or an augmentation of the tabular data. 9. The computing device of claim 8, the computing operation is the augmentation of the tabular data, and wherein the table manipulation system generates a query based upon the transcription, the search system executes a search over a network accessible index of tables based upon the query and augments the tabular data loaded into the application with additional tabular data included in the index of tables. 10. The computing device of claim 1, the table manipulation system generates a program based upon the transcription, the table manipulation system executes the program over at least a portion of the tabular data loaded into the application to perform the computing operation. 11. 
A method comprising: receiving tabular data that has been loaded into a computer-executable application; responsive to receiving the tabular data, updating a language model of an automatic speech recognition (ASR) system based upon the tabular data; receiving a voice signal that is indicative of an operation to be performed with respect to the tabular data; decoding the voice signal based upon the updating of the language model of the ASR system; and performing the operation based upon the decoding of the voice signal. 12. The method of claim 11 executed by a client computing device, wherein the voice signal represents a query, and wherein the operation comprises: augmenting the tabular data with additional data retrieved from a data source that is accessible to the client computing device by way of a network, the augmenting based upon the query; and performing a subsequent operation over the tabular data and the additional data based upon the query. 13. The method of claim 11, the computer-executable application being a web browser. 14. The method of claim 11, the tabular data comprises an entry, the entry comprises a character sequence, and wherein updating the language model comprises including the character sequence in the language model. 15. The method of claim 14, the entry being one of a column header or a row header. 16. The method of claim 14, the language model comprises a partially completed phrase that is representative of historically observed commands, wherein updating the language model comprises including the character sequence in the language model to complete the partially completed phrase. 17. 
The method of claim 16, wherein updating the language model further comprises: assigning a type to the character sequence based upon the tabular data, the type indicating that the character sequence is representative of one of people, places, or things in the tabular data; and assigning a probability value to the completed phrase based upon the type assigned to the character sequence. 18. The method of claim 11, wherein receiving the voice signal comprises: receiving an indication that a graphical button has been selected; and monitoring output of a microphone responsive to receiving the indication. 19. The method of claim 11, wherein the voice signal represents a query, the performing of the operation causes updated tabular data to be generated, the method further comprising: subsequent to performing the operation based upon the query, receiving a second voice signal that is representative of a second query; decoding the second voice signal based upon the updating of the language model of the ASR system; and performing a second operation relative to the updated tabular data, the second operation performed based upon the query and the second query. 20. 
A computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform acts comprising: receiving tabular data loaded into a spreadsheet application, the tabular data comprises an entry, the entry comprises a character sequence; responsive to receiving the tabular data, updating a language model of an automatic speech recognition (ASR) system to include the character sequence and a known synonym of the character sequence; receiving a voice signal that is representative of a spoken utterance, the spoken utterance includes the character sequence or the known synonym of the character sequence; decoding the voice signal, wherein decoding the voice signal comprises identifying that the spoken utterance includes the character sequence or the known synonym of the character sequence; and performing an operation relative to the tabular data based upon the decoding of the voice signal.
2,600
9,913
9,913
14,686,770
2,658
An email-like user interface displays a list of user logs determined based on user-specified list criteria applied to user logs received in a natural language (NL) training environment. The list comprises a subset of the received user logs in order to minimize the number of actions required to configure and train the NL configuration system in a semi-supervised manner, thereby improving the quality and accuracy of the NL configuration system. To determine a list of user logs relevant for training, the user logs can be filtered, sorted, grouped and searched within the email-like user interface. A training interface to a network of instances that comprises a plurality of NL configuration systems leverages a crowd-sourcing community of developers in order to efficiently create a customizable NL configuration system.
1. A computer-implemented method comprising: receiving a plurality of user logs from a plurality of runtime systems or applications, each user log comprising a natural language expression and an intent, the intent being predicted by the natural language configuration system based on the natural language expression; determining at least one of a filtered, grouped, or sorted list of user logs, the list comprising one or more of the received user logs; displaying a plurality of action panels in an inbox view, each action panel being associated with one or more different user log of the list of user logs and comprising the natural language expression and the intent of the associated user log; displaying for each action panel an option to validate or dismiss the associated user log; and in response to receiving a selection of the option to validate the user log, configuring and training the natural language configuration system based on the natural language expression and the intent of the validated user log. 2. The method of claim 1, wherein each action panel further comprises a confidence score calculated by the natural language configuration system when predicting the intent based on the natural language expression. 3. The method of claim 1, wherein the list of user logs is determined by filtering the received plurality of user logs based on a filter criterion and list criteria comprise the filter criterion. 4. The method of claim 1, wherein the list of user logs is determined by grouping the received plurality of user logs based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 5. The method of claim 1, wherein the list of user logs is sorted based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 6. The method of claim 1, wherein associating each action panel with a different user log is in the order of the different user logs within the list of user logs. 7. 
The method of claim 1, wherein one or more user logs from the plurality of received user logs were created by instances of natural language configuration systems included in network of instances. 8. The method of claim 7, wherein one or more user logs from the plurality of received user logs comprises an update to a forked instance of the natural language configuration system. 9. A computer program product comprising a non-transitory computer readable storage medium for version control for application development, the non-transitory computer readable storage medium storing instructions for: receiving a plurality of user logs from a plurality of runtime systems or applications, each user log comprising a natural language expression and an intent, the intent being predicted by the natural language configuration system based on the natural language expression; determining at least one of a filtered, grouped, or sorted list of user logs, the list comprising one or more of the received user logs; displaying a plurality of action panels in an inbox view, each action panel being associated with one or more different user log of the list of user logs and comprising the natural language expression and the intent of the associated user log; displaying for each action panel an option to validate or dismiss the associated user log; and in response to receiving a selection of the option to validate the user log, configuring and training the natural language configuration system based on the natural language expression and the intent of the validated user log. 10. The computer program product of claim 9, wherein each action panel further comprises a confidence score calculated by the natural language configuration system when predicting the intent based on the natural language expression. 11. 
The computer program product of claim 9, wherein the list of user logs is determined by filtering the received plurality of user logs based on a filter criterion and list criteria comprise the filter criterion. 12. The computer program product of claim 9, wherein the list of user logs is determined by grouping the received plurality of user logs based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 13. The computer program product of claim 9, wherein the list of user logs is sorted based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 14. The computer program product of claim 9, wherein associating each action panel with a different user log is in the order of the different user logs within the list of user logs. 15. The computer program product of claim 9, wherein one or more user logs from the plurality of received user logs were created by instances of natural language configuration systems included in network of instances. 16. The computer program product of claim 15, wherein one or more user logs from the plurality of received user logs comprises an update to a forked instance of the natural language configuration system.
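The inbox workflow of claims 1 and 3-5 — filter and sort the received user logs, then turn a validated log into training data — can be sketched as follows. The dict keys, thresholds, and function names are assumptions, not the patent's implementation.

```python
def build_action_list(logs, min_confidence=0.0, sort_key="confidence"):
    """Filter received user logs by a criterion and sort them for display
    in the inbox view (one action panel per log)."""
    kept = [log for log in logs if log["confidence"] >= min_confidence]
    return sorted(kept, key=lambda log: log[sort_key])

def validate(log, training_set):
    """Selecting the "validate" option turns the log's (expression, intent)
    pair into a training example for the NL configuration system."""
    training_set.append((log["expression"], log["intent"]))

logs = [
    {"expression": "book a flight", "intent": "travel", "confidence": 0.42},
    {"expression": "play a song", "intent": "music", "confidence": 0.91},
    {"expression": "asdf", "intent": "unknown", "confidence": 0.05},
]
inbox = build_action_list(logs, min_confidence=0.4)  # low-confidence log dropped
training = []
validate(inbox[0], training)  # reviewer validates the first action panel
```

The filtering step is what makes the review semi-supervised in the sense the abstract describes: the developer only acts on the subset of logs the list criteria surface, rather than labeling every log.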
An email-like user interface displays a list of user logs determined based on user-specified list criteria applied to user logs received in a natural language (NL) training environment. The list comprises a subset of the received user logs in order to minimize the number of actions required to configure and train the NL configuration system in a semi-supervised manner, thereby improving the quality and accuracy of the NL configuration system. To determine a list of user logs relevant for training, the user logs can be filtered, sorted, grouped and searched within the email-like user interface. A training interface to a network of instances that comprises a plurality of NL configuration systems leverages a crowd-sourcing community of developers in order to efficiently create a customizable NL configuration system.1. A computer-implemented method comprising: receiving a plurality of user logs from a plurality of runtime systems or applications, each user log comprising a natural language expression and an intent, the intent being predicted by the natural language configuration system based on the natural language expression; determining at least one of a filtered, grouped, or sorted list of user logs, the list comprising one or more of the received user logs; displaying a plurality of action panels in an inbox view, each action panel being associated with one or more different user log of the list of user logs and comprising the natural language expression and the intent of the associated user log; displaying for each action panel an option to validate or dismiss the associated user log; and in response to receiving a selection of the option to validate the user log, configuring and training the natural language configuration system based on the natural language expression and the intent of the validated user log. 2. 
The method of claim 1, wherein each action panel further comprises a confidence score calculated by the natural language configuration system when predicting the intent based on the natural language expression. 3. The method of claim 1, wherein the list of user logs is determined by filtering the received plurality of user logs based on a filter criterion and list criteria comprise the filter criterion. 4. The method of claim 1, wherein the list of user logs is determined by grouping the received plurality of user logs based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 5. The method of claim 1, wherein the list of user logs is sorted based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 6. The method of claim 1, wherein associating each action panel with a different user log is in the order of the different user logs within the list of user logs. 7. The method of claim 1, wherein one or more user logs from the plurality of received user logs were created by instances of natural language configuration systems included in network of instances. 8. The method of claim 7, wherein one or more user logs from the plurality of received user logs comprises an update to a forked instance of the natural language configuration system. 9. 
A computer program product comprising a non-transitory computer readable storage medium for version control for application development, the non-transitory computer readable storage medium storing instructions for: receiving a plurality of user logs from a plurality of runtime systems or applications, each user log comprising a natural language expression and an intent, the intent being predicted by the natural language configuration system based on the natural language expression; determining at least one of a filtered, grouped, or sorted list of user logs, the list comprising one or more of the received user logs; displaying a plurality of action panels in an inbox view, each action panel being associated with one or more different user log of the list of user logs and comprising the natural language expression and the intent of the associated user log; displaying for each action panel an option to validate or dismiss the associated user log; and in response to receiving a selection of the option to validate the user log, configuring and training the natural language configuration system based on the natural language expression and the intent of the validated user log. 10. The computer program product of claim 9, wherein each action panel further comprises a confidence score calculated by the natural language configuration system when predicting the intent based on the natural language expression. 11. The computer program product of claim 9, wherein the list of user logs is determined by filtering the received plurality of user logs based on a filter criterion and list criteria comprise the filter criterion. 12. The computer program product of claim 9, wherein the list of user logs is determined by grouping the received plurality of user logs based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 13. 
The computer program product of claim 9, wherein the list of user logs is sorted based on attributes of the user logs and the list criteria comprise the attributes of the user logs. 14. The computer program product of claim 9, wherein associating each action panel with a different user log is in the order of the different user logs within the list of user logs. 15. The computer program product of claim 9, wherein one or more user logs from the plurality of received user logs were created by instances of natural language configuration systems included in network of instances. 16. The computer program product of claim 15, wherein one or more user logs from the plurality of received user logs comprises an update to a forked instance of the natural language configuration system.
2,600
9,914
9,914
13,432,171
2,689
An improved cooking range includes an improved notification system that is configured to detect an operated state of the range and to periodically output a notification representative of a duration of time that the range has remained in the operated state. The notification may include the audible outputting of a sound tag representative of one or more spoken words that indicate a duration of time that the apparatus has remained in the operated state and/or an operational level of the range. The notification system can additionally be configured to detect a predetermined condition such as a flame or an excessive ambient temperature in the vicinity of the range and output an audible and/or visible warning notification. The notification system can be built into the range or can be in the form of a system that can be retrofitted to an existing range. The system enhances operational safety.
1. A method of providing an indication regarding an apparatus that is structured to be switched between one state and an operated state, the method comprising: detecting at least one of: that the apparatus has been switched to the operated state, and that the apparatus is in the operated state; and periodically outputting a notification representative of a duration of time that the apparatus has remained in the operated state. 2. The method of claim 1, further comprising outputting as at least a portion of the notification an audible sound tag representative of at least a first spoken word that corresponds with the duration of time that the apparatus has remained in the operated state. 3. The method of claim 2, further comprising: determining an operational level of the apparatus; and outputting as at least a portion of the notification an audible sound tag representative of at least a first spoken word that corresponds with the operational level. 4. The method of claim 3, further comprising detecting the operational level by detecting at least one of: a rotation of a rotatable device away from an initial rotational position; and a current rotational position of the device. 5. The method of claim 1, further comprising: detecting in the vicinity of the apparatus an existence of a predetermined condition; and outputting another notification representative of the predetermined condition. 6. The method of claim 5, further comprising: detecting as the predetermined condition a flame in the vicinity of the apparatus; and outputting as the another notification at least a first spoken word that is representative of the existence of the flame. 7. The method of claim 6, further comprising performing at least one of: triggering a fire extinguisher of the apparatus; and operating a utility shutoff connected with the apparatus. 8. 
The method of claim 5, further comprising detecting as the predetermined condition a parameter that is in the vicinity of the apparatus and that is at a level that exceeds a predetermined level. 9. The method of claim 8, further comprising detecting as the predetermined condition an ambient temperature in the vicinity of the apparatus that has exceeded a predetermined temperature. 10. The method of claim 1, further comprising: employing the first device in the detecting; wirelessly communicating at least one of a signal from the first device and a signal to the second device; and employing the second device in the outputting. 11. The method of claim 1, further comprising detecting that the apparatus is in the operated state by detecting at least one of: a rotation of a device away from an initial rotational position; and a current rotational position of the device. 12. The method of claim 1, further comprising prior to the detecting and the outputting: detecting a touch input on a controller of the apparatus; and providing an audible notification representative of an identity of the controller. 13. The method of claim 1, further comprising: wirelessly receiving from a remote device a command representative of a change to the operated state; and changing the operated state in accordance with the command. 14. 
A notification system for use in conjunction with an apparatus that is structured to be switched between one state and an operated state, the notification system comprising: a processor apparatus comprising a processor and a storage; an input apparatus structured to provide input signals to the processor apparatus; an output apparatus structured to receive output signals from the processor apparatus; the storage having stored therein a number of routines which, when executed on the processor, cause the notification system to perform operations comprising: detecting at least one of: that the apparatus has been switched to the operated state, and that the apparatus is in the operated state; and periodically outputting a notification representative of a duration of time that the apparatus has remained in the operated state. 15. The notification system of claim 14 wherein the storage further has stored therein a number of sound tags which, when output by the output apparatus, are in the form of one or more audible spoken words, and wherein the operations further comprise outputting as at least a portion of the notification an audible sound tag representative of at least a first spoken word that corresponds with the duration of time that the apparatus has remained in the operated state. 16. The notification system of claim 15 wherein the operations further comprise: determining an operational level of the apparatus; and outputting as at least a portion of the notification an audible sound tag representative of at least a first spoken word that corresponds with the operational level. 17. The notification system of claim 16 wherein the input apparatus comprises a rotatable device, and wherein the operations further comprise detecting the operational level by detecting at least one of: a rotation of the device away from an initial rotational position; and a current rotational position of the device. 18. 
The notification system of claim 14 wherein the operations further comprise: detecting in the vicinity of the apparatus an existence of a predetermined condition; and outputting another notification representative of the predetermined condition. 19. The notification system of claim 18 wherein the input apparatus comprises a flame detector, and wherein the operations further comprise: detecting as the predetermined condition a flame in the vicinity of the apparatus; and outputting as the another notification at least a first spoken word that is representative of the existence of the flame. 20. The notification system of claim 18, further comprising detecting as the predetermined condition a parameter that is in the vicinity of the apparatus and that is at a level that exceeds a predetermined level. 21. The notification system of claim 20 wherein the input apparatus comprises a temperature sensor, and wherein the operations further comprise detecting as the predetermined condition an ambient temperature in the vicinity of the apparatus that has exceeded a predetermined temperature. 22. The notification system of claim 14 wherein the input apparatus comprises a first device and the output apparatus comprises a second device, and wherein at least one of the input apparatus and the output apparatus further comprises a wireless communication device, and wherein the operations further comprise: employing the first device in the detecting; wirelessly communicating at least one of a signal from the first device and a signal to the second device; and employing the second device in the outputting. 23. The notification system of claim 14, further comprising detecting that the apparatus is in the operated state by detecting at least one of: a rotation of a device away from an initial rotational position; and a current rotational position of the device. 24. 
The notification system of claim 14 wherein the input apparatus comprises a sensor apparatus and a support, at least a portion of the sensor apparatus being disposed on the support, the support being structured to be mounted to a portion of a controller of the apparatus, at least a portion of the sensor apparatus being structured to be employed in detecting an operational level of the apparatus. 25. The notification system of claim 24 wherein the sensor apparatus comprises a rotational sensor, wherein the support is structured to be mounted on a rotatable input shaft of the controller, and wherein the processor apparatus is structured to detect as being indicative of the operational level of the apparatus at least one of: a current rotational position of the support, and a change in rotational position of the support. 26. The notification system of claim 25 wherein the sensor apparatus further comprises another sensor that is structured to be employed in detecting in the vicinity of the apparatus an existence of a predetermined condition, and wherein the operations further comprise outputting another notification representative of the predetermined condition. 27. The notification system of claim 26 wherein the another sensor is at least one of: a temperature sensor that is structured to be employed in detecting as the predetermined condition an ambient temperature in the vicinity of the apparatus that has exceeded a predetermined temperature; and a flame detector that is structured to be employed in detecting as the predetermined condition an existence of a flame in the vicinity of the apparatus. 28. The notification system of claim 24 wherein at least a portion of the processor apparatus is disposed on the support and at least a portion of the output apparatus is disposed on the support, the output apparatus comprising an output element that is disposed on the support and that is structured to perform the periodic outputting of the notification. 29. 
The notification system of claim 24 wherein the output apparatus comprises an output element and another support, the output element being disposed on the another support, the input apparatus being one of a plurality of input apparatuses that are similar to one another and that are in communication with the output apparatus, the output element being structured to perform the periodic outputting of the notification. 30. The notification system of claim 24 wherein the input apparatus comprises a plurality of the sensor apparatuses and a plurality of the supports, at least a portion of each of the plurality of sensor apparatuses being disposed on a corresponding one of the plurality of supports, the plurality of supports each being structured to be retrofitted to a portion of a corresponding controller of a plurality of the controllers of the apparatus, the apparatus being structured to be simultaneously operable at a plurality of operational levels, at least a portion of each of the plurality of sensor apparatuses being structured to be employed in detecting an operational level from among the plurality of operational levels of the apparatus, the output apparatus being structured to periodically output a notification representative of a duration of time that the apparatus has remained in at least one operational level. 31. The notification system of claim 30 wherein the output apparatus comprises an output element and another support, the output element being disposed on the another support, the output apparatus being in wireless communication with each of the plurality of sensor apparatuses and being structured to output as the notification representative of the duration of time an output representative of a wireless signal originated from any of the plurality of the sensor apparatuses. 32. 
The notification system of claim 14 wherein at least one of the input apparatus and the output apparatus comprises a wireless transceiver apparatus that is structured to communicate wirelessly with a remote device. 33. The notification system of claim 32 wherein the output apparatus comprises an actuator apparatus that is structured to actuate an additional device responsive to a command received by the wireless transceiver apparatus. 34. A cooking apparatus that comprises the notification system of claim 14 and that is structured to be switched between one state and an operated state, the cooking apparatus being structured to generate cooking heat when in the operated state.
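The periodic duration notification of claims 1-2 can be sketched as a small pure function. This is an illustrative sketch, not the patent's implementation: the polling model, the 60-second interval, and the wording of the sound tag ("burner on for N minutes") are assumptions.

```python
def format_duration_tag(seconds):
    """Spoken-word text for a duration notification (claim 2); wording is assumed."""
    minutes = int(seconds) // 60
    return f"burner on for {minutes} minute{'s' if minutes != 1 else ''}"


def duration_notifications(switch_on_time, poll_times, interval=60.0):
    """Given the time the apparatus entered the operated state and a series of
    poll times, return the notifications that would be emitted: one each time
    another whole `interval` of operated time has elapsed (claim 1)."""
    emitted = []
    intervals_announced = 0
    for t in poll_times:
        elapsed = t - switch_on_time
        due = int(elapsed // interval)
        if due > intervals_announced:
            # announce only the latest whole interval reached since the last poll
            emitted.append(format_duration_tag(due * interval))
            intervals_announced = due
    return emitted
```

In a real appliance the poll loop would run on a timer and route the text to a speech synthesizer; modeling it as a pure function over timestamps keeps the notification logic testable.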
An improved cooking range includes an improved notification system that is configured to detect an operated state of the range and to periodically output a notification representative of a duration of time that the range has remained in the operated state. The notification may include the audible outputting of a sound tag representative of one or more spoken words that indicate a duration of time that the apparatus has remained in the operated state and/or an operational level of the range. The notification system can additionally be configured to detect a predetermined condition such as a flame or an excessive ambient temperature in the vicinity of the range and output an audible and/or visible warning notification. The notification system can be built into the range or can be in the form of a system that can be retrofitted to an existing range. The system enhances operational safety.
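The sensing side of the claims above (claims 4-5, 8-9, and 24-25) covers deriving the operational level from a control knob's rotation away from its initial position and warning on a flame or excessive ambient temperature. A minimal sketch follows; the 30-degree-per-level step, the 2-degree tolerance, the 60 °C limit, and the warning wordings are invented for illustration and not taken from the patent.

```python
LEVEL_STEP_DEG = 30.0   # assumed: one operational level per 30 degrees of knob rotation
MAX_LEVEL = 9           # assumed top setting
TEMP_LIMIT_C = 60.0     # assumed predetermined temperature (claim 9)


def operational_level(angle_deg, initial_angle_deg=0.0):
    """Claims 4, 25: derive the operational level from the knob's rotation
    away from its initial rotational position."""
    rotation = (angle_deg - initial_angle_deg) % 360.0
    return min(int(rotation // LEVEL_STEP_DEG), MAX_LEVEL)


def is_operated(angle_deg, initial_angle_deg=0.0, tol_deg=2.0):
    """Claim 11: the apparatus is in the operated state once the knob has
    rotated away from its initial position by more than a small tolerance."""
    rotation = abs((angle_deg - initial_angle_deg + 180.0) % 360.0 - 180.0)
    return rotation > tol_deg


def condition_warnings(ambient_temp_c, flame_detected):
    """Claims 5-6, 8-9: warning notifications for predetermined conditions."""
    warnings = []
    if flame_detected:
        warnings.append("warning: flame detected near the range")
    if ambient_temp_c > TEMP_LIMIT_C:
        warnings.append("warning: ambient temperature exceeds the predetermined limit")
    return warnings
```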
2,600
9,915
9,915
14,770,087
2,665
A road region detection method is provided. The method includes: obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point (S 101 ), converting the first and second images into a first top view and a second top view, respectively (S 103 ), obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first and second time points (S 105 ), and determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first and second top views conforms to the movement vector matrix. Detection accuracy and efficiency may thereby be improved.
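The movement vector matrix named in the abstract is given in closed form by claims 3-5 below: (R2, T2) follows from the camera's ego-motion (R1, T1) and its pitch angle α. A minimal sketch of that computation with plain nested lists (in practice one would use numpy):

```python
import math


def movement_vector_matrix(R1, T1, alpha):
    """R1: 3x3 rotation (nested lists), T1: length-3 translation, alpha: camera
    pitch angle in radians. Returns (R2, T2) per claims 4-5."""
    R2 = [[-v for v in row] for row in R1]          # R2 = -R1 in both cases
    if alpha == 0.0:
        T2 = [-t for t in T1]                       # claim 4: T2 = -T1
    else:
        scale = (math.cos(alpha), 1.0, math.sin(alpha))
        T2 = [-s * t for s, t in zip(scale, T1)]    # claim 5: T2 = -diag(cos a, 1, sin a) . T1
    return R2, T2
```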
1. A method for detecting road regions, comprising: obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point; converting the first image and the second image into a first top view and a second top view, respectively; obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first time point and the second time point; and determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first top view and the second top view conforms to the movement vector matrix. 2. The method according to claim 1, wherein a scale of the first top view and the second top view is substantially similar to a real-world scale. 3. The method according to claim 1, wherein the movement vector matrix is obtained by: obtaining a rotation matrix R1 and a translation matrix T1 which substantially represent movement of the camera between the first time point and the second time point; and obtaining the movement vector matrix, comprising a rotation matrix R2 and a translation matrix T2, based on R1, T1, and one or more extrinsic parameters of the camera. 4. The method according to claim 3, wherein, if the camera's pitch angle α equals zero, then R2 equals −R1, and T2 equals −T1. 5. The method according to claim 3, wherein, if the camera's pitch angle α does not equal zero, then R2 equals −R1, and T2 equals $-\begin{pmatrix} \cos\alpha & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \sin\alpha \end{pmatrix} T_1$. 6. 
The method according to claim 1, wherein the movement vector matrix is obtained by: identifying a group of feature points on the first top view; tracking the feature points on the second top view; and calculating R2 and T2 by solving an objective function: $\arg\min_{R_2, T_2} \sum \left\{ I_2(X_{T2}, Y_{T2}) - I_1\left[ f(X_{T1}, Y_{T1}) \right] \right\}^2$, where argmin defines the value of the argument (R2, T2) for which the function $\sum \left\{ I_2(X_{T2}, Y_{T2}) - I_1\left[ f(X_{T1}, Y_{T1}) \right] \right\}^2$ attains a minimum value, where $I_2(X_{T2}, Y_{T2})$ defines a set of coordinates indicating the position of a feature point on the second top view, where $I_1[f(X_{T1}, Y_{T1})]$ defines a set of coordinates calculated based on: $f(X_{T1}, Y_{T1}) = \begin{pmatrix} R_2 & T_2 \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} X_{T1} \\ Y_{T1} \end{pmatrix}$, where $\begin{pmatrix} X_{T1} \\ Y_{T1} \end{pmatrix}$ defines a set of coordinates indicating the position of the feature point on the first top view. 7. The method according to claim 1, wherein determining whether a candidate point belongs to the road region comprises: obtaining a first set of coordinates of the candidate point on the first top view; obtaining a second set of coordinates of the candidate point on the second top view; calculating a third set of coordinates using the first set of coordinates and the movement vector matrix; calculating a distance between the second set of coordinates and the third set of coordinates; and determining whether the candidate point belongs to the road region by determining whether the distance is less than a predetermined threshold value. 8. 
A system for detecting road regions, comprising: a processing device configured to: obtain a first image captured by a camera at a first time point and a second image captured by the camera at a second time point; convert the first image and the second image into a first top view and a second top view, respectively; obtain a movement vector matrix which substantially represents movement of a road region relative to the camera between the first time point and the second time point; and determine whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first top view and the second top view conforms to the movement vector matrix. 9. The system according to claim 8, wherein a scale of the first top view and the second top view is substantially similar to a real-world scale. 10. The system according to claim 8, wherein the processing device is further configured to: obtain a rotation matrix R1 and a translation matrix T1 which substantially represent movement of the camera between the first time point and the second time point; and obtain the movement vector matrix, comprising a rotation matrix R2 and a translation matrix T2, based on R1, T1, and one or more extrinsic parameters of the camera. 11. The system according to claim 10, wherein, if the camera's pitch angle α equals zero, then R2 equals −R1, and T2 equals −T1. 12. The system according to claim 10, wherein, if the camera's pitch angle α does not equal zero, then R2 equals −R1, and T2 equals $-\begin{pmatrix} \cos\alpha & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \sin\alpha \end{pmatrix} T_1$. 13. 
The system according to claim 8, wherein the processing device is further configured to: identify a group of feature points on the first top view; track the feature points on the second top view; and calculate R2 and T2 by solving an objective function: $\arg\min_{R_2, T_2} \sum \left\{ I_2(X_{T2}, Y_{T2}) - I_1\left[ f(X_{T1}, Y_{T1}) \right] \right\}^2$, where argmin defines the value of the argument (R2, T2) for which the function $\sum \left\{ I_2(X_{T2}, Y_{T2}) - I_1\left[ f(X_{T1}, Y_{T1}) \right] \right\}^2$ attains a minimum value, where $I_2(X_{T2}, Y_{T2})$ defines a set of coordinates indicating the position of a feature point on the second top view, where $I_1[f(X_{T1}, Y_{T1})]$ defines a set of coordinates calculated based on: $f(X_{T1}, Y_{T1}) = \begin{pmatrix} R_2 & T_2 \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} X_{T1} \\ Y_{T1} \end{pmatrix}$, where $\begin{pmatrix} X_{T1} \\ Y_{T1} \end{pmatrix}$ defines a set of coordinates indicating the position of the feature point on the first top view. 14. The system according to claim 8, wherein the processing device is further configured to: obtain a first set of coordinates of the candidate point on the first top view; obtain a second set of coordinates of the candidate point on the second top view; calculate a third set of coordinates using the first set of coordinates and the movement vector matrix; calculate a distance between the second set of coordinates and the third set of coordinates; and determine whether the candidate point belongs to the road region by determining whether the distance is less than a predetermined threshold value. 15. 
A system for detecting road regions, comprising: means for obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point; means for converting the first image and the second image into a first top view and a second top view, respectively; means for obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first time point and the second time point; and means for determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first top view and the second top view conforms to the movement vector matrix.
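The feature-point objective of claim 6 and the candidate-point test of claims 7 and 14 can be sketched together. One assumption is made here, since the claim's block-matrix notation is terse: the top view is treated as 2-D, with R2 acting as a rotation by an angle theta and T2 as a 2-vector in the top-view plane.

```python
import math


def transform(point, theta, t):
    """Apply the top-view movement f of claim 6: rotate by theta, translate by t."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + t[0], s * x + c * y + t[1])


def objective(points1, points2, theta, t):
    """Claim 6's sum of squared residuals for a candidate (R2, T2): distance
    between each tracked second-view point and its transformed first-view point."""
    total = 0.0
    for p1, p2 in zip(points1, points2):
        qx, qy = transform(p1, theta, t)
        total += (p2[0] - qx) ** 2 + (p2[1] - qy) ** 2
    return total


def is_road_point(p_view1, p_view2, theta, t, threshold):
    """Claims 7/14: a candidate belongs to the road region if its observed
    second-view position lies within `threshold` of the position predicted
    by the movement vector matrix."""
    qx, qy = transform(p_view1, theta, t)
    return math.hypot(p_view2[0] - qx, p_view2[1] - qy) < threshold
```

A solver would minimize `objective` over (theta, t) on tracked ground features, then run `is_road_point` over every candidate pixel of the top view; points moving with the ground plane pass the threshold, while obstacles and moving objects fail it.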
A road region detection method is provided. The method includes: obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point (S 101 ), converting the first and second images into a first top view and a second top view, respectively (S 103 ), obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first and second time points (S 105 ), and determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first and second top views conforms to the movement vector matrix. The accuracy and efficiency may be improved.1. A method for detecting road regions, comprising: obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point; converting the first image and the second image into a first top view and a second top view, respectively; obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first time point and the second time point; and determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first top view and the second top view conforms to the movement vector matrix. 2. The method according to claim 1, wherein a scale of the first top view and the second top view is substantially similar to a real-world scale. 3. The method according to claim 1, wherein the movement vector matrix is obtained by: obtaining a rotation matrix R1 and a translation matrix T1 which substantially represent movement of the camera between the first time point and the second time point; and obtaining the movement vector matrix, comprising a rotation matrix R2 and a translation matrix T2, based on R1, T1, and one or more extrinsic parameters of the camera. 4. 
The method according to claim 3, wherein, if the camera's pitch angle α equals to zero, then R2 equals −R1, and T2 equals −T1. 5. The method according to claim 3, wherein, if the camera's pitch angle α does not equal to zero, then R2 equals −R1, and T2 equals - ( cos   α 0 0 0 1 0 0 0 sin   α ) * T 1 . 6. The method according to claim 1, wherein the movement vector matrix is obtained by: Identifying a group of feature points on the first top view; tracking the feature points on the second top view; and calculating R2 and T2 by solving an objective function: arg   min R 2 , T 2  ∑ { I 2  ( X T   2 , Y T   2 ) - I 1  [ f  ( X T   1 , Y T   1 ) ] } 2 , where argmin defines a group of feature points of an argument for which the function Σ{I2(XT2,YT2)−I1[f(XT1,YT1)]}2 attains a minimum value, where I2(XT2,YT2) defines a set of coordinates indicating the position of a feature point on the second top view, where I1[f(XT1,YT1)] defines a set of coordinates calculated based on: f  ( X T   1 , Y T   1 ) = ( R 2 T 2 0 T 1 )  ( X T   1 Y T   1 ) .  where   ( X T   1 Y T   1 ) defines a set of coordinates indicating the position of the feature point on the first top view. 7. The method according to claim 1, wherein determining whether a candidate point belongs to the road region comprises: obtaining a first set of coordinates of the candidate point on the first top view; obtaining a second set of coordinates of the candidate point on the second top view; calculating a third set of coordinates using the first set of coordinates and the movement vector matrix; calculating a distance between the second set of coordinates and the third set of coordinates; and determining whether the candidate point belongs to the road region by determining whether the distance is less than a predetermined threshold value. 8. 
A system for detecting road regions, comprising: a processing device configured to: obtain a first image captured by a camera at a first time point and a second image captured by the camera at a second time point; convert the first image and the second image into a first top view and a second top view, respectively; obtain a movement vector matrix which substantially represents movement of a road region relative to the camera between the first time point and the second time point; and determine whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first top view and the second top view conforms to the movement vector matrix. 9. The system according to claim 8, wherein a scale of the first top view and the second top view is substantially similar to a real-world scale. 10. The system according to claim 8, wherein the processing device is further configured to: obtain a rotation matrix R1 and a translation matrix T1 which substantially represent movement of the camera between the first time point and the second time point; and obtain the movement vector matrix, comprising a rotation matrix R2 and a translation matrix T2, based on R1, T1, and one or more extrinsic parameters of the camera. 11. The system according to claim 10, wherein, if the camera's pitch angle α equals zero, then R2 equals −R1 and T2 equals −T1. 12. The system according to claim 10, wherein, if the camera's pitch angle α does not equal zero, then R2 equals −R1 and T2 equals −diag(cos α, 1, sin α)·T1. 13. 
The system according to claim 8, wherein the processing device is further configured to: identify a group of feature points on the first top view; track the feature points on the second top view; and calculate R2 and T2 by solving an objective function: argmin_{R2, T2} Σ{I2(X_T2, Y_T2) − I1[f(X_T1, Y_T1)]}², where argmin denotes the values of the arguments R2 and T2 for which the function Σ{I2(X_T2, Y_T2) − I1[f(X_T1, Y_T1)]}² attains a minimum value, where I2(X_T2, Y_T2) defines a set of coordinates indicating the position of a feature point on the second top view, where I1[f(X_T1, Y_T1)] defines a set of coordinates calculated based on: f(X_T1, Y_T1) = [R2, T2; 0^T, 1]·[X_T1; Y_T1], and where (X_T1, Y_T1) defines a set of coordinates indicating the position of the feature point on the first top view. 14. The system according to claim 8, wherein the processing device is further configured to: obtain a first set of coordinates of the candidate point on the first top view; obtain a second set of coordinates of the candidate point on the second top view; calculate a third set of coordinates using the first set of coordinates and the movement vector matrix; calculate a distance between the second set of coordinates and the third set of coordinates; and determine whether the candidate point belongs to the road region by determining whether the distance is less than a predetermined threshold value. 15. 
A system for detecting road regions, comprising: means for obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point; means for converting the first image and the second image into a first top view and a second top view, respectively; means for obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first time point and the second time point; and means for determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first top view and the second top view conforms to the movement vector matrix.
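The candidate-point test of claim 7 (predict, then compare against a threshold) can be sketched as follows. This is a simplified 2-D top-view version under the assumption that (R2, T2) act directly on top-view coordinates; the function name and threshold are illustrative, not from the patent.

```python
import numpy as np

def is_road_point(p1, p2, R2, T2, threshold):
    """Claim 7 sketch: predict where a road point at p1 (first top view)
    should appear in the second top view using the movement vector
    matrix (R2, T2), then compare against its observed position p2."""
    # Third set of coordinates: prediction from the first set plus motion.
    predicted = R2 @ np.asarray(p1, dtype=float) + T2
    # Distance between the observed (second) and predicted (third) sets.
    distance = np.linalg.norm(predicted - np.asarray(p2, dtype=float))
    # The point conforms to the road's motion if the error is small.
    return distance < threshold
```

A point whose observed displacement matches the road's motion within the threshold is classified as road; anything moving differently (e.g. another vehicle) fails the test.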
2,600
9,916
9,916
14,811,618
2,645
A front-end receiver includes a first mixer of a first channel, a second mixer of a second channel, and a switching circuit that is configured to select the first mixer or the second mixer during a particular time period. Upon being selected, the first mixer or the second mixer is configured to deliver a down-converted signal that down-converts the RF signal of the first or second reception channel, respectively. As the tasks of down-conversion and multiplexing are combined at the mixer level, the first and second reception channels may share a baseband circuit while providing well-balanced metrics of channel isolation, low noise figure, and linearity.
1. An integrated circuit comprising: a first input port configured to receive a first radio frequency (RF) signal having a first carrier frequency; a second input port configured to receive a second RF signal having a second carrier frequency; a first mixer coupled with the first input port, the first mixer having a first output lead configured to deliver a first down-converted signal by reducing the first carrier frequency of the first RF signal; a second mixer coupled with the second input port, the second mixer having a second output lead configured to deliver a second down-converted signal by reducing the second carrier frequency of the second RF signal; and a convergent node coupled with the first output lead and the second output lead, the convergent node receiving the first down-converted signal only when the first mixer is selected, and the convergent node receiving the second down-converted signal only when the second mixer is selected. 2. The integrated circuit of claim 1, further comprising: a switching circuit coupled to the first mixer and the second mixer, the switching circuit configured to receive a channel selection signal and activate only one of the first mixer or the second mixer based on the channel selection signal. 3. The integrated circuit of claim 1, further comprising: a switching circuit coupled to the first output lead and the second output lead, the switching circuit configured to receive a channel selection signal and selectively connect only one of the first output lead or the second output lead to the convergent node based on the channel selection signal. 4. 
The integrated circuit of claim 1, further comprising: a local oscillator selectively coupled with the first mixer or the second mixer, the local oscillator configured to generate a local oscillation signal having a local oscillation frequency; and a switching circuit coupled to the local oscillator, the switching circuit configured to receive a channel selection signal and direct the local oscillation signal to only one of the first mixer or the second mixer based on the channel selection signal. 5. The integrated circuit of claim 1, wherein the first mixer includes a first mixer pass gate having: a first input node coupled with the first input port to receive the first RF signal; a first control gate configured to receive a local oscillation signal only when the first mixer is selected; and a first output node coupled with the first output lead, the first output node configured to output the first down-converted signal based on the first RF signal and the received local oscillation signal. 6. The integrated circuit of claim 5, wherein the second mixer includes a second mixer pass gate having: a second input node coupled with the second input port to receive the second RF signal; a second control gate configured to receive the local oscillation signal only when the second mixer is selected; a second output node coupled with the second output lead, the second output node configured to output the second down-converted signal based on the second RF signal and the received local oscillation signal. 7. 
The integrated circuit of claim 1, wherein: the first RF signal includes a first differential signal and a second differential signal representing an opposite polarity of the first differential signal; and the first mixer includes: a first pass gate configured to generate a first in-phase signal by passing the first differential signal at a local oscillation frequency based on a first local oscillation signal; a second pass gate configured to generate a first quadrature signal by passing the first differential signal at the local oscillation frequency based on a second local oscillation signal having a 90-degree phase delay from the first local oscillation signal; a third pass gate configured to generate a second in-phase signal by passing the second differential signal at the local oscillation frequency based on a third local oscillation signal having a 180-degree phase delay from the first local oscillation signal; and a fourth pass gate configured to generate a second quadrature signal by passing the second differential signal at the local oscillation frequency based on a fourth local oscillation signal having a 270-degree phase delay from the first local oscillation signal; and the first down-converted signal includes the first in-phase signal, the second in-phase signal, the first quadrature signal, and the second quadrature signal. 8. The integrated circuit of claim 1, wherein: the first down-converted signal includes a first down-converted frequency by subtracting a local oscillation frequency from the first carrier frequency; and the second down-converted signal includes a second down-converted frequency by subtracting the local oscillation frequency from the second carrier frequency. 9. 
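The four-phase quadrature mixing of claim 7 can be illustrated with a short numerical sketch. This is not the circuit itself, only a signal-level model: the four LO phases (0°, 90°, 180°, 270°) gate the differential RF, the 0°/180° and 90°/270° branches combine into I and Q, and a crude moving-average stands in for the baseband low-pass filter; all frequencies and filter lengths are illustrative assumptions.

```python
import numpy as np

fs = 1_000_000                 # sample rate (Hz), illustrative
f_rf, f_lo = 110_000, 100_000  # carrier and local oscillation frequencies
t = np.arange(0, 0.01, 1 / fs)
rf = np.cos(2 * np.pi * f_rf * t)

# Four LO phases at 0/90/180/270 degrees drive the four pass gates.
lo_phases = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
branches = [rf * np.cos(2 * np.pi * f_lo * t + p) for p in lo_phases]

# Differential combining: I = (0 deg) - (180 deg), Q = (90 deg) - (270 deg).
i_sig = branches[0] - branches[2]
q_sig = branches[1] - branches[3]

# The difference term sits at f_rf - f_lo = 10 kHz; a moving-average
# low-pass (a stand-in for the real filter) suppresses the f_rf + f_lo image.
kernel = np.ones(50) / 50
i_bb = np.convolve(i_sig, kernel, mode="same")
q_bb = np.convolve(q_sig, kernel, mode="same")
```

The dominant component of `i_bb` lands at f_rf − f_lo, matching claim 8's statement that the down-converted frequency is the carrier frequency minus the local oscillation frequency.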
An integrated circuit comprising: a first input port configured to receive a first radio frequency (RF) signal having a first carrier frequency; a second input port configured to receive a second RF signal having a second carrier frequency; a first mixer coupled with the first input port, the first mixer having a first output lead configured to deliver a first down-converted signal down-converting the first RF signal only when the first mixer is enabled; a second mixer coupled with the second input port, the second mixer having a second output lead configured to deliver a second down-converted signal down-converting the second RF signal only when the second mixer is enabled; and a switching circuit coupled with the first mixer and the second mixer, the switching circuit configured to receive a channel selection signal, and the switching circuit configured to selectively enable one of the first mixer or the second mixer based on the channel selection signal. 10. The integrated circuit of claim 9, further comprising: a convergent node joining the first output lead and the second output lead; and a baseband circuit coupled with the convergent node to receive the first down-converted signal when the first mixer is selected or the second down-converted signal when the second mixer is selected. 11. The integrated circuit of claim 9, wherein the switching circuit is configured to: generate a first enable signal when the first mixer is selected by the channel selection signal, the first enable signal enabling the first mixer; and generate a second enable signal when the second mixer is selected by the channel selection signal, the second enable signal enabling the second mixer. 12. 
The integrated circuit of claim 9, wherein: the switching circuit is configured to receive a local oscillation signal, the switching circuit is configured to relay the local oscillation signal to one of the first mixer or the second mixer based on the channel selection signal; upon receiving the local oscillation signal, the first mixer is enabled to generate the first down-converted signal by mixing the first RF signal with the received local oscillation signal; and upon receiving the local oscillation signal, the second mixer is enabled to generate the second down-converted signal by mixing the second RF signal with the received local oscillation signal. 13. The integrated circuit of claim 9, wherein: the first mixer includes: an in-phase pass gate configured to generate an in-phase signal by passing the first RF signal at a local oscillation frequency based on a first local oscillation signal; and a quadrature pass gate configured to generate a quadrature signal by passing the first RF signal at the local oscillation frequency based on a second local oscillation signal having a 90-degree phase delay from the first local oscillation signal; and the switching circuit is configured to deliver the first and second local oscillation signals to the first mixer when the first mixer is selected, and the switching circuit is configured to block the first and second local oscillation signals from reaching the first mixer when the first mixer is not selected. 14. 
The integrated circuit of claim 9, wherein: the second mixer includes: an in-phase pass gate configured to generate an in-phase signal by passing the second RF signal at a local oscillation frequency based on a first local oscillation signal; and a quadrature pass gate configured to generate a quadrature signal by passing the second RF signal at the local oscillation frequency based on a second local oscillation signal having a 90-degree phase delay from the first local oscillation signal; and the switching circuit is configured to deliver the first and second local oscillation signals to the second mixer when the second mixer is selected, and the switching circuit is configured to block the first and second local oscillation signals from reaching the second mixer when the second mixer is not selected. 15. The integrated circuit of claim 9, wherein: the first down-converted signal includes a first down-converted frequency by subtracting a local oscillation frequency from the first carrier frequency; and the second down-converted signal includes a second down-converted frequency by subtracting the local oscillation frequency from the second carrier frequency. 16. 
A front-end (FE) receiver comprising: an antenna configured to receive a first radio frequency (RF) signal during a first time period and a second RF signal during a second time period outside of the first time period, the first RF signal having a first carrier frequency, and the second RF signal having a second carrier frequency; a first mixer coupled with the antenna, the first mixer having a first output lead configured to deliver a first down-converted signal down-converting the first RF signal only when the first mixer is enabled; a second mixer coupled with the antenna, the second mixer having a second output lead configured to deliver a second down-converted signal down-converting the second RF signal only when the second mixer is enabled; and a switching circuit coupled with the first mixer and the second mixer, the switching circuit configured to receive a channel selection signal, and the switching circuit configured to selectively enable one of the first mixer or the second mixer based on the channel selection signal. 17. The FE receiver of claim 16, further comprising: a convergent node joining the first output lead and the second output lead; and a baseband circuit coupled with the convergent node to receive the first down-converted signal when the first mixer is selected or the second down-converted signal when the second mixer is selected. 18. 
The FE receiver of claim 16, wherein: the switching circuit is configured to receive a local oscillation signal, the switching circuit is configured to relay the local oscillation signal to one of the first mixer or the second mixer based on the channel selection signal; upon receiving the local oscillation signal, the first mixer is enabled to generate the first down-converted signal by mixing the first RF signal with the received local oscillation signal; and upon receiving the local oscillation signal, the second mixer is enabled to generate the second down-converted signal by mixing the second RF signal with the received local oscillation signal. 19. The FE receiver of claim 16, wherein: the first mixer includes: a first in-phase pass gate configured to generate a first in-phase signal by passing the first RF signal at a local oscillation frequency based on a first local oscillation signal; and a first quadrature pass gate configured to generate a first quadrature signal by passing the first RF signal at the local oscillation frequency based on a second local oscillation signal having a 90-degree phase delay from the first local oscillation signal; the second mixer includes: a second in-phase pass gate configured to generate a second in-phase signal by passing the second RF signal at a local oscillation frequency based on the first local oscillation signal; and a second quadrature pass gate configured to generate a second quadrature signal by passing the second RF signal at the local oscillation frequency based on the second local oscillation signal; and the switching circuit is configured to: deliver the first and second local oscillation signals to the first mixer only when the first mixer is selected; and deliver the first and second local oscillation signals to the second mixer only when the second mixer is selected. 20. 
The FE receiver of claim 16, wherein: the first down-converted signal includes a first down-converted frequency by subtracting a local oscillation frequency from the first carrier frequency; and the second down-converted signal includes a second down-converted frequency by subtracting the local oscillation frequency from the second carrier frequency.
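The architecture of claims 9-11 (two mixers, one enabled at a time, feeding a convergent node and a shared baseband path) can be modeled in software. This is a behavioral sketch only, with invented class and method names; in hardware the unselected mixer is disabled by withholding the LO rather than by adding zeros.

```python
import numpy as np

class TwoChannelFrontEnd:
    """Sketch of claims 9-11: two mixers share one baseband path; the
    switching circuit enables exactly one mixer per channel selection."""

    def __init__(self, f_lo, fs):
        self.f_lo = f_lo
        self.fs = fs

    def receive(self, rf1, rf2, select_first):
        t = np.arange(len(rf1)) / self.fs
        lo = np.cos(2 * np.pi * self.f_lo * t)
        # Only the selected mixer gets the LO; the other contributes
        # nothing to the convergent node, so down-conversion and
        # multiplexing happen in one step at the mixer level.
        out1 = rf1 * lo if select_first else np.zeros_like(rf1)
        out2 = rf2 * lo if not select_first else np.zeros_like(rf2)
        return out1 + out2  # convergent node feeding the shared baseband
```

Because exactly one branch is live, the convergent node carries the selected channel's down-converted signal without a separate multiplexer after the mixers.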
2,600
9,917
9,917
15,306,806
2,644
According to an aspect, there is provided a method of operating a first radio access node in a communication network, the method comprising determining whether a first base key that is used to determine a first encryption key for encrypting communications between a communication device and the first radio access node can be used by a second radio access node for determining a second encryption key for encrypting communications between the communication device and the second radio access node; and if the first base key can be used by the second radio access node, sending the first base key to the second radio access node during handover of the communication device from the first radio access node to the second radio access node.
1. A method of operating a first radio access node in a communication network, the method comprising: determining whether a first base key that is used to determine a first encryption key for encrypting communications between a communication device and the first radio access node can be used by a second radio access node for determining a second encryption key for encrypting communications between the communication device and the second radio access node; and if the first base key can be used by the second radio access node, sending the first base key to the second radio access node during handover of the communication device from the first radio access node to the second radio access node. 2. The method as defined in claim 1, wherein the method further comprises the step of: if the first base key can be used by the second radio access node, sending an indication to the communication device that the first base key is to be used for determining a second encryption key for encrypting communications between the communication device and the second radio access node. 3. The method as defined in claim 1, wherein if it is determined that the first base key cannot be used by the second radio access node, the method further comprises the steps of: determining a second base key from the first base key; and sending the second base key to the second radio access node during handover of the communication device from the first radio access node to the second radio access node. 4. The method as defined in claim 1, wherein if it is determined that the first base key cannot be used by the second radio access node, the method further comprises the step of: sending an indication to the communication device to cause the communication device to determine a second base key from the first base key for use with the second radio access node. 5. 
The method as defined in claim 1, wherein the step of determining whether the first base key can be used by a second radio access node comprises determining that the first base key can be used by the second radio access node if the first radio access node and the second radio access node are part of the same security zone. 6. The method as defined in claim 5, wherein the first radio access node and the second radio access node are part of the same security zone if the first radio access node and the second radio access node are: (a) running as separate virtual machines on the same hardware; (b) two containers within the same virtual machine; (c) implemented on boards in the same physical rack; (d) determined by a security policy as belonging to the same security zone; or (e) physically located in the same site. 7. The method as defined in claim 1, wherein the step of determining whether the first base key can be used by a second radio access node comprises: sending a request for information on the second radio access node to another node in the communication network; and receiving information on the second radio access node from said another node, the information indicating whether the first base key can be used by the second radio access node. 8. The method as defined in claim 1, wherein the step of determining whether the first base key can be used by a second radio access node comprises: examining a list or local configuration at the first radio access node. 9. The method as defined in claim 1, wherein the step of sending the first base key to the second radio access node during handover further comprises sending an indication of an encryption key generation algorithm that was used to determine the first encryption key from the first base key. 10. The method as defined in claim 1, wherein the first radio access node and the second radio access node share a Packet Data Convergence Protocol, PDCP, state. 11. 
A method of operating a communication device, the method comprising: on handover of the communication device from a first radio access node in a communication network to a second radio access node in the communication network, receiving an indication of whether a first base key that was used to determine a first encryption key for encrypting communications between the communication device and the first radio access node can be used for determining a second encryption key for encrypting communications between the communication device and the second radio access node; if the received indication indicates that the first base key can be used for determining a second encryption key for encrypting communications between the communication device and the second radio access node, determining a second encryption key for encrypting communications between the communication device and the second radio access node from the first base key; otherwise, determining a second base key from the first base key; and determining a second encryption key for encrypting communications between the communication device and the second radio access node from the second base key. 12. The method as defined in claim 11, wherein the indication is received in a message relating to the handover of the communication device from the first radio access node to the second radio access node. 13. The method as defined in claim 11, wherein the indication is received from the first radio access node or the second radio access node. 14-40. (canceled) 41. 
A first radio access node for use in a communication network, wherein the first radio access node comprises a processor and a memory, said memory containing instructions executable by said processor whereby said first radio access node is operative to: determine whether a first base key that is used to determine a first encryption key for encrypting communications between a communication device and the first radio access node can be used by a second radio access node for determining a second encryption key for encrypting communications between the communication device and the second radio access node; and send the first base key to the second radio access node during handover of the communication device from the first radio access node to the second radio access node if the first base key can be used by the second radio access node. 42. The first radio access node as defined in claim 41, wherein the first radio access node is further operative to: send an indication to the communication device that the first base key is to be used for determining a second encryption key for encrypting communications between the communication device and the second radio access node if the first base key can be used by the second radio access node. 43. The first radio access node as defined in claim 41, wherein the first radio access node is further operative to: determine a second base key from the first base key if it is determined that the first base key cannot be used by the second radio access node; and send the second base key to the second radio access node during handover of the communication device from the first radio access node to the second radio access node. 44. 
The first radio access node as defined in claim 41, wherein the first radio access node is further operative to: send an indication to the communication device to cause the communication device to determine a second base key from the first base key for use with the second radio access node if it is determined that the first base key cannot be used by the second radio access node. 45. The first radio access node as defined in claim 41, wherein the first radio access node is operative to determine whether the first base key can be used by a second radio access node by determining that the first base key can be used by the second radio access node if the first radio access node and the second radio access node are part of the same security zone. 46. The first radio access node as defined in claim 45, wherein the first radio access node and the second radio access node are part of the same security zone if the first radio access node and the second radio access node are: (a) running as separate virtual machines on the same hardware; (b) two containers within the same virtual machine; (c) implemented on boards in the same physical rack; (d) determined by a security policy as belonging to the same security zone; or (e) physically located in the same site. 47. The first radio access node as defined in claim 41, wherein the first radio access node is operative to determine whether the first base key can be used by a second radio access node by: sending a request for information on the second radio access node to another node in the communication network; and receiving information on the second radio access node from said another node, the information indicating whether the first base key can be used by the second radio access node. 48. 
The first radio access node as defined in claim 41, wherein the first radio access node is operative to determine whether the first base key can be used by a second radio access node by: examining a list or local configuration at the first radio access node. 49. The first radio access node as defined in claim 41, wherein the first radio access node is further operative to send an indication of an encryption key generation algorithm that was used to determine the first encryption key from the first base key. 50. The first radio access node as defined in claim 41, wherein the first radio access node and the second radio access node share a Packet Data Convergence Protocol, PDCP, state. 51. A communication device, wherein the communication device comprises a processor and a memory, said memory containing instructions executable by said processor whereby said communication device is operative to: receive an indication of whether a first base key that was used to determine a first encryption key for encrypting communications between the communication device and a first radio access node in a communication network can be used for determining a second encryption key for encrypting communications between the communication device and a second radio access node in the communication network on handover of the communication device from the first radio access node to the second radio access node; determine a second encryption key from the first base key if the received indication indicates that the first base key can be used for determining a second encryption key; determine a second base key from the first base key if the received indication does not indicate that the first base key can be used for determining a second encryption key; and determine a second encryption key for encrypting communications between the communication device and the second radio access node from the second base key. 52. 
The communication device as defined in claim 51, wherein the indication is received in a message relating to the handover of the communication device from the first radio access node to the second radio access node. 53. The communication device as defined in claim 51, wherein the indication is received from the first radio access node or the second radio access node. 54-81. (canceled)
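The device-side counterpart (claims 11 and 51) mirrors the node's choice so both ends arrive at the same second encryption key. A minimal sketch, again using HMAC-SHA256 as a placeholder KDF with illustrative labels:

```python
import hmac
import hashlib

def kdf(key: bytes, label: bytes) -> bytes:
    # Placeholder KDF (HMAC-SHA256); the claims leave the actual
    # derivation function unspecified.
    return hmac.new(key, label, hashlib.sha256).digest()

def device_encryption_key(first_base_key: bytes, reuse_indicated: bool) -> bytes:
    # Claim 11: if the handover indication says the first base key can
    # be reused, derive the second encryption key directly from it;
    # otherwise derive a second base key first and use that instead.
    base = first_base_key if reuse_indicated else kdf(first_base_key, b"second-base-key")
    return kdf(base, b"encryption")

k1 = b"\x01" * 32
assert device_encryption_key(k1, True) == kdf(k1, b"encryption")
assert device_encryption_key(k1, True) != device_encryption_key(k1, False)
```

As long as the node and the device apply the same derivation function and labels, the reuse indication alone is enough to keep their keys synchronized.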
TechCenter: 2,600

Unnamed: 0: 9,918
level_0: 9,918
ApplicationNumber: 14,677,866
ArtUnit: 2,669
Some aspects of the present disclosure relate to tissue parameter mapping. In one embodiment of the present disclosure, a method includes receiving undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject. The method also includes estimating, from the undersampled k-space data, one or more respective tissue parameter values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition. The estimation includes unscented Kalman filtering. The method also includes generating one or more tissue parameter maps using the respective plurality of estimated tissue parameter values.
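The estimation described in the abstract can be illustrated with a toy scalar unscented Kalman filter tracking a single T2 value from echo-train magnitudes. This is a one-dimensional sketch under simplifying assumptions (signal model exp(-TE/T2), random-walk state transition, no Fourier or coil encoding); all parameter values are illustrative:

```python
import numpy as np

def sigma_points(mean, var, kappa=2.0):
    # Scalar unscented transform: three sigma points for a 1-D state.
    s = np.sqrt((1 + kappa) * var)
    pts = np.array([mean, mean + s, mean - s])
    w = np.array([kappa / (1 + kappa), 1 / (2 * (1 + kappa)), 1 / (2 * (1 + kappa))])
    return pts, w

def ukf_step(mean, var, meas, h, q, r):
    # Predict: random-walk state transition, i.e. state plus process
    # noise (cf. claim 4); then update through measurement function h
    # (cf. claim 2).
    mean_p, var_p = mean, var + q
    pts, w = sigma_points(mean_p, var_p)
    z = h(pts)                                  # propagate sigma points
    z_mean = np.dot(w, z)
    s = np.dot(w, (z - z_mean) ** 2) + r        # innovation variance
    c = np.dot(w, (pts - mean_p) * (z - z_mean))
    k = c / s                                   # Kalman gain
    return mean_p + k * (meas - z_mean), var_p - k * s * k

# Toy T2 estimation: one noisy echo magnitude per echo time TE.
true_t2 = 80.0
rng = np.random.default_rng(0)
mean, var = 60.0, 400.0
for te in np.arange(10, 330, 10):
    meas = np.exp(-te / true_t2) + rng.normal(0, 1e-3)
    mean, var = ukf_step(mean, var, meas,
                         lambda t2: np.exp(-te / t2), 1.0, 1e-6)
```

The filter's state converges toward the true T2 as successive echoes arrive; the full method augments the state with proton density and replaces this scalar signal model with the undersampled Fourier-encoded measurement of claim 5.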
1. A method for T2 mapping, comprising: acquiring, by a magnetic resonance imaging (MRI) system, undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; estimating, from the undersampled k-space data, one or more respective T2 values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the estimation comprises unscented Kalman filtering; and generating one or more T2 maps using the respective plurality of estimated T2 values. 2. The method of claim 1, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated T2 values and an acquired signal corresponding to the undersampled k-space data. 3. The method of claim 2, wherein the measurement function models T2 encoding and Fourier encoding steps. 4. The method of claim 2, wherein the state transition function comprises combining the one or more respective T2 values associated with the respective state of the dynamic process with a noise value associated with the respective state. 5. The method of claim 2, wherein the measurement function comprises combining a noise value associated with the MRI system with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the MRI system, and a T2-weighted image at the particular state. 6. The method of claim 1, wherein acquiring the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 7. 
The method of claim 6, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 8. The method of claim 1, wherein generating the one or more T2 maps comprises generating the one or more T2 maps directly from the undersampled k-space data. 9. The method of claim 8, further comprising generating one or more T2-weighted images based on the one or more T2 maps. 10. The method of claim 1, wherein: estimating the one or more respective T2 values further comprises estimating, from the undersampled k-space data, one or more respective proton density values representing the respective states of the dynamic process at each point in time of the predetermined plurality of points in time during the acquisition; and generating the one or more T2 maps further comprises generating the one or more T2 maps using the respective plurality of estimated T2 values and the respective plurality of estimated proton density values. 11. A system for tissue parameter mapping, comprising: a data collection device configured to collect undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; and an image processing device coupled to the data collection device, the image processing device comprising: an estimating module configured to estimate, from the undersampled k-space data, one or more tissue parameter values associated with a state of the dynamic process at each of a predetermined plurality of points in time during the acquisition, and a generating module configured to generate one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 12. 
The system for tissue parameter mapping of claim 11, wherein the data collection device comprises a magnetic resonance imaging (MRI) device configured to acquire the undersampled k-space data. 13. The system for tissue parameter mapping of claim 12, wherein the image processing device comprises at least one processor configured to execute computer-readable instructions to cause a computing device to perform functions comprising acquiring the undersampled k-space data, estimating the one or more tissue parameter values, and generating of the one or more tissue parameter maps. 14. The system for tissue parameter mapping of claim 11, wherein the estimation comprises unscented Kalman filtering. 15. The system for tissue parameter mapping of claim 14, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated tissue parameter values and an acquired signal corresponding to the undersampled k-space data. 16. The system for tissue parameter mapping of claim 15, wherein the measurement function models tissue parameter encoding and Fourier encoding steps. 17. The system for tissue parameter mapping of claim 15, wherein the state transition function comprises combining the one or more respective tissue parameter values associated with the respective state of the dynamic process with a noise value associated with the respective state. 18. The system for tissue parameter mapping of claim 15, wherein the measurement function comprises combining a noise value associated with the data collection device with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the data collection device, and a tissue parameter-weighted image at the particular state. 19. 
The system for tissue parameter mapping of claim 11, wherein collecting the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 20. The system for tissue parameter mapping of claim 19, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 21. The system for tissue parameter mapping of claim 11, wherein generating the one or more tissue parameter maps comprises generating the one or more tissue parameter maps directly from the undersampled k-space data. 22. The system for tissue parameter mapping of claim 21, wherein the generating module is further configured to generate one or more tissue parameter-weighted images based on the one or more tissue parameter maps. 23. A method for tissue parameter mapping, comprising: receiving undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; estimating, from the undersampled k-space data, one or more respective tissue parameter values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the estimation comprises unscented Kalman filtering; and generating one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 24. The method of claim 23, wherein the one or more tissue parameter values comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter values, and the one or more tissue parameter maps comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter maps. 25. 
The method of claim 23, wherein the estimation comprises simultaneously estimating a plurality of the tissue parameter values. 26. The method of claim 23, wherein receiving the undersampled k-space data comprises acquiring the undersampled k-space data using a magnetic resonance imaging (MRI) device. 27. The method of claim 23, further comprising: estimating, from the undersampled k-space data, a second tissue parameter value representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the second tissue parameter estimation comprises unscented Kalman filtering; and generating one or more of a second tissue parameter map using the respective plurality of estimated second tissue parameter values. 28. The method of claim 23, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated tissue parameter values and an acquired signal corresponding to the undersampled k-space data. 29. The method of claim 28, wherein the measurement function models tissue parameter encoding and Fourier encoding steps. 30. The method of claim 28, wherein the state transition function comprises combining the one or more respective tissue parameter values associated with the respective state of the dynamic process with a noise value associated with the respective state. 31. The method of claim 28, wherein the measurement function comprises combining a noise value associated with the data collection device with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the data collection device, and a tissue parameter-weighted image at the particular state. 32. 
The method of claim 28, wherein receiving the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 33. The method of claim 32, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 34. The method of claim 23, wherein generating the one or more tissue parameter maps comprises generating the one or more tissue parameter maps directly from the undersampled k-space data. 35. The method of claim 23, further comprising generating one or more tissue parameter-weighted images based on the one or more tissue parameter maps. 36. A non-transitory computer-readable storage medium having stored computer-executable instructions that, when executed by one or more processors, cause a computer to perform functions comprising: receiving undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; estimating, from the undersampled k-space data, one or more respective tissue parameter values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the estimation comprises unscented Kalman filtering; and generating one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 37. The non-transitory computer-readable storage medium of claim 36, wherein the one or more tissue parameter values comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter values, and the one or more tissue parameter maps comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter maps. 38. 
The non-transitory computer-readable storage medium of claim 36, wherein the estimation comprises simultaneously estimating a plurality of the tissue parameter values. 39. The non-transitory computer-readable storage medium of claim 36, wherein receiving the undersampled k-space data comprises acquiring the undersampled k-space data using a magnetic resonance imaging (MRI) device. 40. The non-transitory computer-readable storage medium of claim 36, wherein the functions performed by the computer further comprise: estimating, from the undersampled k-space data, a second tissue parameter value representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the second tissue parameter estimation comprises unscented Kalman filtering; and generating one or more of a second tissue parameter map using the respective plurality of estimated second tissue parameter values. 41. The non-transitory computer-readable storage medium of claim 36, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated tissue parameter values and an acquired signal corresponding to the undersampled k-space data. 42. The non-transitory computer-readable storage medium of claim 41, wherein the measurement function models tissue parameter encoding and Fourier encoding steps. 43. The non-transitory computer-readable storage medium of claim 41, wherein the state transition function comprises combining the one or more respective tissue parameter values associated with the respective state of the dynamic process with a noise value associated with the respective state. 44. 
The non-transitory computer-readable storage medium of claim 41, wherein the measurement function comprises combining a noise value associated with the data collection device with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the data collection device, and a tissue parameter-weighted image at the particular state. 45. The non-transitory computer-readable storage medium of claim 41, wherein receiving the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 46. The non-transitory computer-readable storage medium of claim 45, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 47. The non-transitory computer-readable storage medium of claim 36, wherein generating the one or more tissue parameter maps comprises generating the one or more tissue parameter maps directly from the undersampled k-space data. 48. The non-transitory computer-readable storage medium of claim 36, wherein the functions performed by the computer further comprise generating one or more tissue parameter-weighted images based on the one or more tissue parameter maps.
Some aspects of the present disclosure relate to tissue parameter mapping. In one embodiment of the present disclosure, a method includes receiving undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject. The method also includes estimating, from the undersampled k-space data, one or more respective tissue parameter values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition. The estimation includes unscented Kalman filtering. The method also includes generating one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 1. A method for T2 mapping, comprising: acquiring, by a magnetic resonance imaging (MRI) system, undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; estimating, from the undersampled k-space data, one or more respective T2 values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the estimation comprises unscented Kalman filtering; and generating one or more T2 maps using the respective plurality of estimated T2 values. 2. The method of claim 1, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated T2 values and an acquired signal corresponding to the undersampled k-space data. 3. The method of claim 2, wherein the measurement function models T2 encoding and Fourier encoding steps. 4. 
The method of claim 2, wherein the state transition function comprises combining the one or more respective T2 values associated with the respective state of the dynamic process with a noise value associated with the respective state. 5. The method of claim 2, wherein the measurement function comprises combining a noise value associated with the MRI system with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the MRI system, and a T2-weighted image at the particular state. 6. The method of claim 1, wherein acquiring the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 7. The method of claim 6, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 8. The method of claim 1, wherein generating the one or more T2 maps comprises generating the one or more T2 maps directly from the undersampled k-space data. 9. The method of claim 8, further comprising generating one or more T2-weighted images based on the one or more T2 maps. 10. The method of claim 1, wherein: estimating the one or more respective T2 values further comprises estimating, from the undersampled k-space data, one or more respective proton density values representing the respective states of the dynamic process at each point in time of the predetermined plurality of points in time during the acquisition; and generating the one or more T2 maps further comprises generating the one or more T2 maps using the respective plurality of estimated T2 values and the respective plurality of estimated proton density values. 11. 
A system for tissue parameter mapping, comprising: a data collection device configured to collect undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; and an image processing device coupled to the data collection device, the image processing device comprising: an estimating module configured to estimate, from the undersampled k-space data, one or more tissue parameter values associated with a state of the dynamic process at each of a predetermined plurality of points in time during the acquisition, and a generating module configured to generate one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 12. The system for tissue parameter mapping of claim 11, wherein the data collection device comprises a magnetic resonance imaging (MRI) device configured to acquire the undersampled k-space data. 13. The system for tissue parameter mapping of claim 12, wherein the image processing device comprises at least one processor configured to execute computer-readable instructions to cause a computing device to perform functions comprising acquiring the undersampled k-space data, estimating the one or more tissue parameter values, and generating of the one or more tissue parameter maps. 14. The system for tissue parameter mapping of claim 11, wherein the estimation comprises unscented Kalman filtering. 15. The system for tissue parameter mapping of claim 14, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated tissue parameter values and an acquired signal corresponding to the undersampled k-space data. 16. The system for tissue parameter mapping of claim 15, wherein the measurement function models tissue parameter encoding and Fourier encoding steps. 17. 
The system for tissue parameter mapping of claim 15, wherein the state transition function comprises combining the one or more respective tissue parameter values associated with the respective state of the dynamic process with a noise value associated with the respective state. 18. The system for tissue parameter mapping of claim 15, wherein the measurement function comprises combining a noise value associated with the data collection device with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the data collection device, and a tissue parameter-weighted image at the particular state. 19. The system for tissue parameter mapping of claim 11, wherein collecting the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 20. The system for tissue parameter mapping of claim 19, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 21. The system for tissue parameter mapping of claim 11, wherein generating the one or more tissue parameter maps comprises generating the one or more tissue parameter maps directly from the undersampled k-space data. 22. The system for tissue parameter mapping of claim 21, wherein the generating module is further configured to generate one or more tissue parameter-weighted images based on the one or more tissue parameter maps. 23. 
A method for tissue parameter mapping, comprising: receiving undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; estimating, from the undersampled k-space data, one or more respective tissue parameter values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the estimation comprises unscented Kalman filtering; and generating one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 24. The method of claim 23, wherein the one or more tissue parameter values comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter values, and the one or more tissue parameter maps comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter maps. 25. The method of claim 23, wherein the estimation comprises simultaneously estimating a plurality of the tissue parameter values. 26. The method of claim 23, wherein receiving the undersampled k-space data comprises acquiring the undersampled k-space data using a magnetic resonance imaging (MRI) device. 27. The method of claim 23, further comprising: estimating, from the undersampled k-space data, a second tissue parameter value representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the second tissue parameter estimation comprises unscented Kalman filtering; and generating one or more of a second tissue parameter map using the respective plurality of estimated second tissue parameter values. 28. 
The method of claim 23, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated tissue parameter values and an acquired signal corresponding to the undersampled k-space data. 29. The method of claim 28, wherein the measurement function models tissue parameter encoding and Fourier encoding steps. 30. The method of claim 28, wherein the state transition function comprises combining the one or more respective tissue parameter values associated with the respective state of the dynamic process with a noise value associated with the respective state. 31. The method of claim 28, wherein the measurement function comprises combining a noise value associated with the data collection device with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the data collection device, and a tissue parameter-weighted image at the particular state. 32. The method of claim 28, wherein receiving the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 33. The method of claim 32, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 34. The method of claim 23, wherein generating the one or more tissue parameter maps comprises generating the one or more tissue parameter maps directly from the undersampled k-space data. 35. 
The method of claim 23, further comprising generating one or more tissue parameter-weighted images based on the one or more tissue parameter maps. 36. A non-transitory computer-readable storage medium having stored computer-executable instructions that, when executed by one or more processors, cause a computer to perform functions comprising: receiving undersampled k-space data corresponding to a dynamic physiological process in an area of interest of a subject; estimating, from the undersampled k-space data, one or more respective tissue parameter values representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the estimation comprises unscented Kalman filtering; and generating one or more tissue parameter maps using the respective plurality of estimated tissue parameter values. 37. The non-transitory computer-readable storage medium of claim 36, wherein the one or more tissue parameter values comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter values, and the one or more tissue parameter maps comprises one or more of T1, T2, T2*, perfusion parameter, and diffusion parameter maps. 38. The non-transitory computer-readable storage medium of claim 36, wherein the estimation comprises simultaneously estimating a plurality of the tissue parameter values. 39. The non-transitory computer-readable storage medium of claim 36, wherein receiving the undersampled k-space data comprises acquiring the undersampled k-space data using a magnetic resonance imaging (MRI) device. 40. 
The non-transitory computer-readable storage medium of claim 36, wherein the functions performed by the computer further comprise: estimating, from the undersampled k-space data, a second tissue parameter value representing a respective state of the dynamic process at each point in time of a predetermined plurality of points in time during the acquisition, wherein the second tissue parameter estimation comprises unscented Kalman filtering; and generating one or more of a second tissue parameter map using the respective plurality of estimated second tissue parameter values. 41. The non-transitory computer-readable storage medium of claim 36, wherein the unscented Kalman filtering comprises: performing a state transition function associated with one or more transitions between the states of the dynamic process; and performing a measurement function associated with a relationship between the one or more estimated tissue parameter values and an acquired signal corresponding to the undersampled k-space data. 42. The non-transitory computer-readable storage medium of claim 41, wherein the measurement function models tissue parameter encoding and Fourier encoding steps. 43. The non-transitory computer-readable storage medium of claim 41, wherein the state transition function comprises combining the one or more respective tissue parameter values associated with the respective state of the dynamic process with a noise value associated with the respective state. 44. The non-transitory computer-readable storage medium of claim 41, wherein the measurement function comprises combining a noise value associated with the data collection device with a product of an undersampling pattern at a particular state of the states of the dynamic process, a Fourier transform operator, a coil sensitivity map associated with the data collection device, and a tissue parameter-weighted image at the particular state. 45. 
The non-transitory computer-readable storage medium of claim 41, wherein receiving the undersampled k-space data comprises using a multiple contrast spin echo sequence, each echo being configured to acquire a phase encoding value selected according to a predetermined undersampling pattern. 46. The non-transitory computer-readable storage medium of claim 45, wherein the predetermined undersampling pattern comprises a plurality of phase-encoding lines and a plurality of outer k-space lines at each echo, the plurality of phase-encoding lines having the same quantity of phase-encoding lines at each echo. 47. The non-transitory computer-readable storage medium of claim 36, wherein generating the one or more tissue parameter maps comprises generating the one or more tissue parameter maps directly from the undersampled k-space data. 48. The non-transitory computer-readable storage medium of claim 36, wherein the functions performed by the computer further comprise generating one or more tissue parameter-weighted images based on the one or more tissue parameter maps.
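The dependent claims above describe the two halves of the unscented Kalman filter: a state transition that carries the tissue parameter values from one state of the dynamic process to the next with added state noise, and a measurement function that combines device noise with the product of an undersampling pattern, a Fourier transform operator, a coil sensitivity map, and a tissue parameter-weighted image. The sketch below is a minimal numpy rendering of those two models for the T2 case; the function names and the mono-exponential spin-echo signal model are illustrative assumptions, not code from the patent.

```python
import numpy as np

def state_transition(params, process_noise_std, rng):
    """Transition per the claims: the parameter values at the next state are
    the current values combined with noise associated with that state."""
    return params + rng.normal(0.0, process_noise_std, size=params.shape)

def measurement(undersampling_mask, coil_sensitivity, weighted_image,
                noise_std, rng):
    """Measurement per the claims: undersampled k-space sample =
    mask * Fourier(coil_sensitivity * weighted image) + device noise."""
    full_kspace = np.fft.fft2(coil_sensitivity * weighted_image)
    noise = (rng.normal(0.0, noise_std, size=full_kspace.shape)
             + 1j * rng.normal(0.0, noise_std, size=full_kspace.shape))
    return undersampling_mask * full_kspace + noise

def t2_weighted(proton_density, t2_map, echo_time):
    """Assumed mono-exponential spin-echo signal model: PD * exp(-TE / T2)."""
    return proton_density * np.exp(-echo_time / np.maximum(t2_map, 1e-6))
```

In a full filter, `state_transition` and `measurement` would be passed through the unscented transform (sigma points) at each echo to update the T2 and proton-density estimates directly from the undersampled samples.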
2,600
9,919
9,919
15,143,982
2,656
In embodiments of the present invention improved capabilities are described for methods and systems of grammar checking comprising a grammar checking facility for analyzing, in a cloud computing environment, a source-supplied text, wherein a source provides the grammar checking facility with the source-supplied text from a personal computing device to the grammar checking facility. The grammar checking facility performs an analysis of the source-supplied text to identify grammatical errors, and the grammar checking facility provides the source with at least one identified grammatical error along with a writing reference guide for the source which includes, for the at least one identified grammatical error, a corresponding explanation of the at least one identified grammatical error.
1. A system of grammar checking, comprising: a grammar checking facility for analyzing, in a cloud computing environment, a source-supplied text, wherein a source provides the grammar checking facility with the source-supplied text from a personal computing device to the grammar checking facility, wherein the personal computing device is located remotely from the grammar checking facility, wherein the grammar checking facility performs an analysis of the source-supplied text to identify grammatical errors, and the grammar checking facility provides the source with grammar checked text with at least one identified grammatical error along with a writing reference guide which includes, for the at least one identified grammatical error, a corresponding portion of the source-supplied text comprising the at least one identified grammatical error embedded into a corresponding explanation of the at least one identified grammatical error. 2. The system of claim 1, wherein the source is one of a user, a device, and a computer program. 3. The system of claim 1, wherein the grammar checked text includes annotations that enable the source to learn the sources of grammatical errors found by the grammar checking facility. 4. The system of claim 1, wherein the cloud-computing environment includes implementing the grammar checking facility on multiple servers. 5. The system of claim 4, wherein the cloud-computing environment implements the grammar checking facility on multiple networked computing devices. 6. The system of claim 4, wherein the multiple servers are in a virtualized computing environment. 7. 
A method of grammar checking, the method comprising: providing a grammar checking facility for analyzing, in a cloud computing environment, a source-supplied text; receiving, by the grammar checking facility from a source, source-supplied text from a personal computing device located remotely from the grammar checking facility, performing, by the grammar checking facility, an analysis of the source-supplied text to identify grammatical errors, providing, by the grammar checking facility, the source with a grammar checked text with at least one identified grammatical error along with a writing reference guide which includes, for the at least one identified grammatical error, a corresponding portion of the source-supplied text comprising the at least one identified grammatical error embedded into a corresponding explanation of the at least one identified grammatical error. 8. The method of claim 7, wherein the source is one of a user, a device, and a computer program. 9. The method of claim 7, wherein the grammar checked text includes annotations that enable the source to learn the sources of grammatical errors found by the grammar checking facility. 10. The method of claim 7, wherein the cloud-computing environment includes implementing the grammar checking facility on multiple servers. 11. The method of claim 10, wherein the cloud-computing environment implements the grammar checking facility on multiple networked computing devices. 12. The method of claim 10, wherein the multiple servers are in a virtualized computing environment. 13. 
A method of grammar checking, comprising: providing a computer-based grammar checking facility comprising a text processing engine to grammar check a body of text provided by a source in order to improve the grammatical correctness of the body of text; and linking at least one grammatical rule from a rules database to a generic reference content comprising a generic explanation of the at least one grammatical rule, where the text processing engine operably applies the at least one grammatical rule to the source-provided body of text to determine at least one grammatical error and, for the at least one grammatical error, synthesize feedback that includes the generic reference content and a customized feedback, the customized feedback embedding the source-provided body of text causing the at least one grammatical error into the explanation. 14. The method of claim 13, further comprising using the at least one grammatical error to teach grammar to the source, wherein the teaching is conveyed through presenting at least one explanation comprising the body of text in an explanation; and incorporating past source mistakes to assemble a writing reference guide customized for the source. 15. The method of claim 14, wherein the writing guide content is selected based on the source's specific writing problems. 16. The method of claim 13, wherein the text processing engine accounts for the genre of the text in determining the at least one grammatical error. 17. The method of claim 13, wherein the text processing engine accounts for the context of the text in determining the at least one grammatical error. 18. The method of claim 13, wherein the text processing engine accounts for the type of source in determining the at least one grammatical error. 19. The method of claim 13, wherein the text processing engine accounts for the persona of the source in determining the at least one grammatical error. 20. 
The method of claim 13, wherein the text processing engine accounts for the type of user in determining the at least one grammatical error.
In embodiments of the present invention improved capabilities are described for methods and systems of grammar checking comprising a grammar checking facility for analyzing, in a cloud computing environment, a source-supplied text, wherein a source provides the grammar checking facility with the source-supplied text from a personal computing device to the grammar checking facility. The grammar checking facility performs an analysis of the source-supplied text to identify grammatical errors, and the grammar checking facility provides the source with at least one identified grammatical error along with a writing reference guide for the source which includes, for the at least one identified grammatical error, a corresponding explanation of the at least one identified grammatical error. 1. A system of grammar checking, comprising: a grammar checking facility for analyzing, in a cloud computing environment, a source-supplied text, wherein a source provides the grammar checking facility with the source-supplied text from a personal computing device to the grammar checking facility, wherein the personal computing device is located remotely from the grammar checking facility, wherein the grammar checking facility performs an analysis of the source-supplied text to identify grammatical errors, and the grammar checking facility provides the source with grammar checked text with at least one identified grammatical error along with a writing reference guide which includes, for the at least one identified grammatical error, a corresponding portion of the source-supplied text comprising the at least one identified grammatical error embedded into a corresponding explanation of the at least one identified grammatical error. 2. The system of claim 1, wherein the source is one of a user, a device, and a computer program. 3. 
The system of claim 1, wherein the grammar checked text includes annotations that enable the source to learn the sources of grammatical errors found by the grammar checking facility. 4. The system of claim 1, wherein the cloud-computing environment includes implementing the grammar checking facility on multiple servers. 5. The system of claim 4, wherein the cloud-computing environment implements the grammar checking facility on multiple networked computing devices. 6. The system of claim 4, wherein the multiple servers are in a virtualized computing environment. 7. A method of grammar checking, the method comprising: providing a grammar checking facility for analyzing, in a cloud computing environment, a source-supplied text; receiving, by the grammar checking facility from a source, source-supplied text from a personal computing device located remotely from the grammar checking facility, performing, by the grammar checking facility, an analysis of the source-supplied text to identify grammatical errors, providing, by the grammar checking facility, the source with a grammar checked text with at least one identified grammatical error along with a writing reference guide which includes, for the at least one identified grammatical error, a corresponding portion of the source-supplied text comprising the at least one identified grammatical error embedded into a corresponding explanation of the at least one identified grammatical error. 8. The method of claim 7, wherein the source is one of a user, a device, and a computer program. 9. The method of claim 7, wherein the grammar checked text includes annotations that enable the source to learn the sources of grammatical errors found by the grammar checking facility. 10. The method of claim 7, wherein the cloud-computing environment includes implementing the grammar checking facility on multiple servers. 11. 
The method of claim 10, wherein the cloud-computing environment implements the grammar checking facility on multiple networked computing devices. 12. The method of claim 10, wherein the multiple servers are in a virtualized computing environment. 13. A method of grammar checking, comprising: providing a computer-based grammar checking facility comprising a text processing engine to grammar check a body of text provided by a source in order to improve the grammatical correctness of the body of text; and linking at least one grammatical rule from a rules database to a generic reference content comprising a generic explanation of the at least one grammatical rule, where the text processing engine operably applies the at least one grammatical rule to the source-provided body of text to determine at least one grammatical error and, for the at least one grammatical error, synthesize feedback that includes the generic reference content and a customized feedback, the customized feedback embedding the source-provided body of text causing the at least one grammatical error into the explanation. 14. The method of claim 13, further comprising using the at least one grammatical error to teach grammar to the source, wherein the teaching is conveyed through presenting at least one explanation comprising the body of text in an explanation; and incorporating past source mistakes to assemble a writing reference guide customized for the source. 15. The method of claim 14, wherein the writing guide content is selected based on the source's specific writing problems. 16. The method of claim 13, wherein the text processing engine accounts for the genre of the text in determining the at least one grammatical error. 17. The method of claim 13, wherein the text processing engine accounts for the context of the text in determining the at least one grammatical error. 18. 
The method of claim 13, wherein the text processing engine accounts for the type of source in determining the at least one grammatical error. 19. The method of claim 13, wherein the text processing engine accounts for the persona of the source in determining the at least one grammatical error. 20. The method of claim 13, wherein the text processing engine accounts for the type of user in determining the at least one grammatical error.
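Claim 13 describes linking a grammatical rule from a rules database to generic reference content, then synthesizing feedback that embeds the offending portion of the source-supplied text into the explanation. A minimal sketch of that synthesis step follows; the single rule, its regex pattern, and the wording of the explanation are illustrative assumptions, since the patent specifies no concrete rules or APIs.

```python
import re

# Hypothetical rules database: each rule is linked to generic reference
# content (a generic explanation), as in claim 13.
RULES = [
    {
        "name": "a_before_vowel",
        "pattern": re.compile(r"\ba (?=[aeiou])", re.IGNORECASE),
        "generic_explanation": "Use 'an' before a word beginning with a vowel sound.",
    },
]

def check(text):
    """Apply each rule to the source-supplied text and, for each error found,
    synthesize feedback embedding the offending excerpt into the explanation."""
    feedback = []
    for rule in RULES:
        for match in rule["pattern"].finditer(text):
            # Include a little surrounding context in the embedded excerpt.
            start = max(0, match.start() - 10)
            excerpt = text[start:match.end() + 10].strip()
            feedback.append(f"In '...{excerpt}...': {rule['generic_explanation']}")
    return feedback
```

Accumulating such feedback items per source over time would be one way to assemble the customized writing reference guide described in claim 14.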
2,600
9,920
9,920
14,737,183
2,616
Processes for reviewing and editing a computer-generated animation are provided. In one example process, multiple images representing segments of a computer-generated animation may be displayed. In response to a selection of one or more of the images, geometry data associated with the corresponding segment(s) of computer-generated animation may be accessed. An editable geometric representation of the selected segment(s) of computer-generated animation may be displayed based on the accessed geometry data. In some examples, previously rendered representations and/or geometric representations of the same or other segments of the computer-generated animation may be concurrently displayed adjacent to, overlaid with, or in any other desired manner with the displayed geometric representation of the selected segment(s) of computer-generated animation.
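The review flow in the abstract above (selecting a segment's image triggers access to that segment's geometry data, which backs an editable geometric representation) can be sketched as follows. This is a minimal illustration only; the class names, fields, and caching behavior are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One segment (e.g. a shot) of the computer-generated animation."""
    name: str
    geometry_path: str

@dataclass
class ReviewSession:
    segments: list
    loaded_geometry: dict = field(default_factory=dict)

    def select(self, index):
        """On selection, access the segment's geometry data (loading it into
        memory, per claim 5) and return an editable representation."""
        seg = self.segments[index]
        if seg.name not in self.loaded_geometry:
            # Stand-in for reading a rig / animation-curve / scene file.
            self.loaded_geometry[seg.name] = {"source": seg.geometry_path,
                                              "edits": []}
        return self.loaded_geometry[seg.name]
```

Caching the loaded geometry means repeated selections of the same partition return the same editable object, so user modifications (claim 6) persist across selections until explicitly stored.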
1. A computer-implemented method for reviewing and editing a computer-generated animation, the method comprising: causing, by one or more processors, a display of an interface comprising a plurality of partitions representing a plurality of segments of the computer-generated animation; receiving a user selection of a first partition of the plurality of partitions; accessing geometry data associated with a first selected segment of the computer-generated animation corresponding to the selected first partition; and causing a display of a geometric representation of the first selected segment. 2. The computer-implemented method of claim 1, wherein the plurality of segments comprises a plurality of contiguous shots of the computer-generated animation. 3. The method of claim 1, wherein each of the plurality of segments comprises a plurality of previously rendered frames of animation. 4. The method of claim 1, wherein the geometry data comprises one or more of an animation graph, a character rig, an animation curve, and a geometric representation of a scene used to render frames of animation of the first selected segment. 5. The method of claim 1, wherein accessing geometry data associated with the first selected segment comprises loading the geometry data in a memory accessible by the one or more processors. 6. The method of claim 1, wherein the method further comprises: receiving a user modification to the geometric representation of the first selected segment; and causing a display of a modified geometric representation of the first selected segment based on the received user modification. 7. The method of claim 6, wherein the method further comprises: receiving a request to store the user modification of the geometric representation of the first selected segment; and storing the user modification of the geometric representation of the first selected segment. 8. 
The method of claim 1, wherein the display of the geometric representation of the first selected segment is displayed concurrently with the plurality of partitions. 9. The method of claim 1, wherein the method further comprises: receiving a user selection of a second partition of the plurality of partitions; and causing a display of a previously rendered representation of a second selected segment corresponding to the selected second partition. 10. The method of claim 9, wherein the display of the previously rendered representation of the second selected segment is displayed adjacent to the geometric representation of the first selected segment. 11. The method of claim 9, wherein the geometric representation of the first selected segment is displayed overlaid on the display of the previously rendered representation of the second selected segment. 12. The method of claim 9, wherein the display of the previously rendered representation of the second selected segment is displayed sequentially in time with the geometric representation of the first selected segment. 13. A non-transitory computer-readable storage medium for reviewing and editing a computer-generated animation, the non-transitory computer-readable storage medium comprising computer-executable instructions for: causing, by one or more processors, a display of an interface comprising a plurality of partitions representing a plurality of segments of the computer-generated animation; receiving a user selection of a first partition of the plurality of partitions; accessing geometry data associated with a first selected segment of the computer-generated animation corresponding to the selected first partition; and causing a display of a geometric representation of the first selected segment. 14. The non-transitory computer-readable storage medium of claim 13, wherein the plurality of segments comprises a plurality of contiguous shots of the computer-generated animation. 15. 
The non-transitory computer-readable storage medium of claim 13, wherein each of the plurality of segments comprises a plurality of previously rendered frames of animation. 16. The non-transitory computer-readable storage medium of claim 13, wherein the geometry data comprises one or more of an animation graph, a character rig, an animation curve, and a geometric representation of a scene used to render frames of animation of the first selected segment. 17. The non-transitory computer-readable storage medium of claim 13, wherein accessing geometry data associated with the first selected segment comprises loading the geometry data in a memory accessible by the one or more processors. 18. The non-transitory computer-readable storage medium of claim 13, further comprising instructions for: receiving a user modification to the geometric representation of the first selected segment; and causing a display of a modified geometric representation of the first selected segment based on the received user modification. 19. The non-transitory computer-readable storage medium of claim 18, further comprising instructions for: receiving a request to store the user modification of the geometric representation of the first selected segment; and storing the user modification of the geometric representation of the first selected segment. 20. The non-transitory computer-readable storage medium of claim 13, wherein the display of the geometric representation of the first selected segment is displayed concurrently with the plurality of partitions. 21. The non-transitory computer-readable storage medium of claim 13, further comprising instructions for: receiving a user selection of a second partition of the plurality of partitions; and causing a display of a previously rendered representation of a second selected segment corresponding to the selected second partition. 22. 
The non-transitory computer-readable storage medium of claim 21, wherein the display of the previously rendered representation of the second selected segment is displayed adjacent to the geometric representation of the first selected segment. 23. The non-transitory computer-readable storage medium of claim 21, wherein the geometric representation of the first selected segment is displayed overlaid on the display of the previously rendered representation of the second selected segment. 24. The non-transitory computer-readable storage medium of claim 21, wherein the display of the previously rendered representation of the second selected segment is displayed sequentially in time with the geometric representation of the first selected segment. 25. A system for reviewing and editing a computer-generated animation, the system comprising: a display; and one or more processors coupled to the display and configured to: cause, on the display, a display of an interface comprising a plurality of partitions representing a plurality of segments of the computer-generated animation; receive a user selection of a first partition of the plurality of partitions; access geometry data associated with a first selected segment of the computer-generated animation corresponding to the selected first partition; and cause, on the display, a display of a geometric representation of the first selected segment. 26. The system of claim 25, wherein the plurality of segments comprises a plurality of contiguous shots of the computer-generated animation. 27. The system of claim 25, wherein each of the plurality of segments comprises a plurality of previously rendered frames of animation. 28. The system of claim 25, wherein the geometry data comprises one or more of an animation graph, a character rig, an animation curve, and a geometric representation of a scene used to render frames of animation of the first selected segment. 29. 
The system of claim 25, wherein accessing geometry data associated with the first selected segment comprises loading the geometry data in a memory accessible by the one or more processors. 30. The system of claim 25, wherein the one or more processors are further configured to: receive a user modification to the geometric representation of the first selected segment; and cause, on the display, a display of a modified geometric representation of the first selected segment based on the received user modification. 31. The system of claim 30, wherein the one or more processors are further configured to: receive a request to store the user modification of the geometric representation of the first selected segment; and store the user modification of the geometric representation of the first selected segment. 32. The system of claim 25, wherein the display of the geometric representation of the first selected segment is displayed concurrently with the plurality of partitions. 33. The system of claim 25, wherein the one or more processors are further configured to: receive a user selection of a second partition of the plurality of partitions; and cause, on the display, a display of a previously rendered representation of a second selected segment corresponding to the selected second partition. 34. The system of claim 33, wherein the display of the previously rendered representation of the second selected segment is displayed adjacent to the geometric representation of the first selected segment. 35. The system of claim 33, wherein the geometric representation of the first selected segment is displayed overlaid on the display of the previously rendered representation of the second selected segment. 36. The system of claim 33, wherein the display of the previously rendered representation of the second selected segment is displayed sequentially in time with the geometric representation of the first selected segment.
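The review flow of claim 1 — an interface of partitions, selection of a partition, access (per claim 5, loading into memory) of the corresponding segment's geometry data, and display of a geometric representation — can be sketched as below. All class, method, and segment names are illustrative assumptions, not from the patent.

```python
# Minimal sketch of the claimed review flow: partitions map one-to-one
# to animation segments; selecting a partition lazily loads that
# segment's geometry data and yields a geometric (editable) view.
class AnimationReviewer:
    def __init__(self, segments, geometry_store):
        # One interface partition per segment of the animation.
        self.partitions = list(range(len(segments)))
        self.segments = segments
        self.geometry_store = geometry_store  # assumed mapping: segment -> geometry data
        self.loaded = {}  # geometry data loaded into memory (claim 5)

    def select_partition(self, index):
        """On user selection of a partition, access the geometry data
        associated with the corresponding segment and display it."""
        segment = self.segments[index]
        if segment not in self.loaded:
            self.loaded[segment] = self.geometry_store[segment]
        return self.display_geometric_representation(segment)

    def display_geometric_representation(self, segment):
        # Stand-in for causing a display of the editable representation.
        return f"geometric view of {segment}"

reviewer = AnimationReviewer(
    segments=["shot_01", "shot_02"],
    geometry_store={"shot_01": {"rig": "charA"}, "shot_02": {"rig": "charB"}},
)
view = reviewer.select_partition(0)
```

The lazy load in `select_partition` reflects claim 5's "loading the geometry data in a memory accessible by the one or more processors" happening at selection time rather than up front.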
2,600
9,921
9,921
15,412,294
2,619
In general, techniques are described for performing multi-layer image fetching using a single hardware image fetcher pipeline of a display processor. A device comprising a layer buffer and a display processor may be configured to perform the techniques. The layer buffer may be configured to store two or more independent layers. The display processor may include a single hardware image fetcher pipeline. The single hardware image fetcher pipeline may be configured to concurrently retrieve, from the layer buffer, two or more independent layers, concurrently process the two or more independent layers, and concurrently output, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units.
1. A method of displaying frames, the method comprising: concurrently retrieving, from a layer buffer and by a single hardware image fetcher pipeline of a display processor, two or more independent layers; concurrently processing, by the single hardware image fetcher pipeline, the two or more independent layers; and concurrently outputting, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units. 2. The method of claim 1, wherein concurrently processing the two or more independent layers comprises: performing a first operation with respect to a first one of the two or more layers; and performing a second, different operation with respect to a second one of the two or more layers concurrent to performing the first operation. 3. The method of claim 2, wherein the first operation comprises one of a vertical flip operation, a horizontal flip operation, and a clipping operation, and wherein the second operation comprises a different one of the vertical flip operation, the horizontal flip operation, and the clipping operation. 4. The method of claim 1, further comprising: receiving, by a crossbar of the display processor, the two or more processed layers; and outputting, by the crossbar, the two or more processed layers to two different mixing units of the display processor or to a same one of the two different mixing units of the display processor. 5. The method of claim 4, wherein the crossbar comprises a crossbar having a non-blocking switch network architecture. 6. 
The method of claim 1, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein retrieving the two or more independent layers comprises: retrieving a portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieving a portion of a second one of the two or more independent layers that is to be displayed to the first one of the two or more displays. 7. The method of claim 1, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein retrieving the two or more independent layers comprises: retrieving a first portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieving a second portion of the first one of the two or more independent layers that is to be displayed to a second one of the two or more displays. 8. The method of claim 1, wherein at least two of the two or more independent layers overlap in the one of the frames to be displayed by the one or more display units. 9. The method of claim 1, wherein at least two of the two or more independent layers are side-by-side in the one of the frames to be displayed by the one or more display units. 10. The method of claim 1, wherein at least two of the two or more independent layers are adjacent to each other such that there are no intervening pixels between the at least two of the two or more independent layers in the one of the frames to be displayed by the one or more display units. 11. The method of claim 1, wherein at least two of the two or more independent layers are oriented one above the other in the one of the frames to be displayed by the one or more display units. 12. The method of claim 1, further comprising: displaying, by the one or more display units, the one of the frames. 13. 
A device configured to display frames, the device comprising: a layer buffer configured to store two or more independent layers; and a display processor including a single hardware image fetcher pipeline configured to: concurrently retrieve, from the layer buffer, two or more independent layers, concurrently process the two or more independent layers; and concurrently output, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units. 14. The device of claim 13, wherein the single hardware image fetcher pipeline is configured to: perform a first operation with respect to a first one of the two or more layers; and perform a second, different operation with respect to a second one of the two or more layers concurrent to performing the first operation. 15. The device of claim 14, wherein the first operation comprises one of a vertical flip operation, a horizontal flip operation, and a clipping operation, and wherein the second operation comprises a different one of the vertical flip operation, the horizontal flip operation, and the clipping operation. 16. The device of claim 13, wherein the display processor further comprises a crossbar configured to: receive the two or more processed layers; and output the two or more processed layers to two different mixing units of the display processor or to a same one of the two different mixing units of the display processor. 17. The device of claim 16, wherein the crossbar comprises a crossbar having a non-blocking switch network architecture. 18. 
The device of claim 13, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein the single hardware image fetcher pipeline is configured to: retrieve a portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieve a portion of a second one of the two or more independent layers that is to be displayed to the first one of the two or more displays. 19. The device of claim 13, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein the single hardware image fetcher pipeline is configured to: retrieve a first portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieve a second portion of the first one of the two or more independent layers that is to be displayed to a second one of the two or more displays. 20. The device of claim 13, wherein at least two of the two or more independent layers overlap in the one of the frames to be displayed by the one or more display units. 21. The device of claim 13, wherein at least two of the two or more independent layers are side-by-side in the one of the frames to be displayed by the one or more display units. 22. The device of claim 13, wherein at least two of the two or more independent layers are adjacent to each other such that there are no intervening pixels between the at least two of the two or more independent layers in the one of the frames to be displayed by the one or more display units. 23. The device of claim 13, wherein at least two of the two or more independent layers are oriented one above the other in the one of the frames to be displayed by the one or more display units. 24. 
The device of claim 13, wherein the device is coupled to the one or more display units, the one or more display units configured to display the one of the frames. 25. A device for displaying frames, the device comprising: a means for storing two or more independent layers; and a single means for concurrently retrieving, from the means for storing, two or more independent layers, concurrently processing the two or more independent layers, and concurrently outputting, by two or more outputs of the single means, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units. 26. The device of claim 25, wherein the means for concurrently processing the two or more independent layers comprises: means for performing a first operation with respect to a first one of the two or more layers; and means for performing a second, different operation with respect to a second one of the two or more layers concurrent to performing the first operation. 27. The device of claim 26, wherein the first operation comprises one of a vertical flip operation, a horizontal flip operation, and a clipping operation, and wherein the second operation comprises a different one of the vertical flip operation, the horizontal flip operation, and the clipping operation. 28. The device of claim 25, further comprising: means for receiving the two or more processed layers; and means for outputting the two or more processed layers to two different mixing units of the display processor or to a same one of the two different mixing units of the display processor. 29. The device of claim 28, wherein the means for receiving and the means for outputting comprise a crossbar having a non-blocking switch network architecture. 30. 
A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause a single hardware image fetcher pipeline of a display processor to: concurrently retrieve, from a layer buffer, two or more independent layers; concurrently process the two or more independent layers; and concurrently output, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form a frame to be displayed by one or more display units.
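The pipeline behavior claimed above — a single fetcher retrieving two or more independent layers from a layer buffer, applying a possibly different operation to each (claims 2-3: e.g. vertical flip, clipping), and outputting all processed layers together for composition — can be sketched as follows. The function names, buffer layout, and operations are hypothetical; a real display processor does this in hardware, not Python.

```python
# Illustrative sketch of single-pipeline multi-layer fetching: each
# layer is a list of pixel rows; each layer gets its own operation,
# and all processed layers are returned together for composition.
def vertical_flip(layer):
    # Reverse the row order of the layer (claim 3's vertical flip).
    return layer[::-1]

def clip(layer):
    # Keep only the first two pixels of each row (a simple clip).
    return [row[:2] for row in layer]

def fetch_and_process(layer_buffer, ops):
    """Retrieve each named layer from the buffer, process it with its
    assigned operation, and output the processed layers jointly."""
    return [op(layer_buffer[name]) for name, op in ops.items()]

layer_buffer = {
    "video": [[1, 2, 3], [4, 5, 6]],
    "ui":    [[7, 8, 9], [0, 1, 2]],
}
frame_layers = fetch_and_process(layer_buffer, {"video": vertical_flip, "ui": clip})
```

Assigning a different operation per layer in one call mirrors claims 2-3, where the single pipeline performs distinct operations on distinct layers concurrently.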
In general, techniques are described for performing multi-layer image fetching using a single hardware image fetcher pipeline of a display processor. A device comprising a layer buffer, and a display processor may be configured to perform the techniques. The layer buffer may be configured to store two or more independent layers. The display processor may include a single hardware image fetcher pipeline. The single hardware image fetcher pipeline may be configured to concurrently retrieve, from the layer buffer, two or more independent layers, concurrently process the two or more independent layers, and concurrently output, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units.1. A method of displaying frames, the method comprising: concurrently retrieving, from a layer buffer and by a single hardware image fetcher pipeline of a display processor, two or more independent layers; concurrently processing, by the single hardware image fetcher pipeline, the two or more independent layers; and concurrently outputting, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units. 2. The method of claim 1, wherein concurrently processing the two or more independent layers comprises: performing a first operation with respect to a first one of the two or more layers; and performing a second, different operation with respect to a second one of the two or more layers concurrent to performing the first operation. 3. 
The method of claim 2, wherein the first operation comprises one of a vertical flip operation, a horizontal flip operation, and a clipping operation, and wherein the second operation comprises a different one of the vertical flip operation, the horizontal flip operation, and the clipping operation. 4. The method of claim 1, further comprising: receiving, by a crossbar of the display processor, the two or more processed layers; and outputting, by the crossbar, the two or more processed layers to two different mixing units of the display processor or to a same one of the two different mixing units of the display processor. 5. The method of claim 4, wherein the crossbar comprises a crossbar having a non-blocking switch network architecture. 6. The method of claim 1, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein retrieving the two or more independent layers comprises: retrieving a portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieving a portion of a second one of the two or more independent layers that is to be displayed to the first one of the two or more displays. 7. The method of claim 1, wherein at least one of the two or more independent layers are between two or more of the display units, and wherein retrieving the two or more independent layers comprises: retrieving a first portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieving a second portion of the first one of the two or more independent layers that is to be displayed to a second one of the two or more displays. 8. The method of claim 1, wherein at least two of the two or more independent layers overlap in the one of the frames to be displayed by the one or more display units. 9. 
The method of claim 1, wherein at least two of the two or more independent layers are side-by-side in the one of the frames to be displayed by the one or more display units. 10. The method of claim 1, wherein at least two of the two or more independent layers are adjacent to each other such that there are no intervening pixels between the at least two of the two or more independent layers in the one of the frames to be displayed by the one or more display units. 11. The method of claim 1, wherein at least two of the two or more independent layers are oriented one above the other in the one of the frames to be displayed by the one or more display units. 12. The method of claim 1, further comprising: displaying, by the one or more display units, the one of the frames. 13. A device configured to display frames, the device comprising: a layer buffer configured to store two or more independent layers; and a display processor including a single hardware image fetcher pipeline configured to: concurrently retrieve, from the layer buffer, two or more independent layers, concurrently process the two or more independent layers; and concurrently output, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units. 14. The device of claim 13, wherein the single hardware image fetcher pipeline is configured to: perform a first operation with respect to a first one of the two or more layers; and perform a second, different operation with respect to a second one of the two or more layers concurrent to performing the first operation. 15. The device of claim 14, wherein the first operation comprises one of a vertical flip operation, a horizontal flip operation, and a clipping operation, and wherein the second operation comprises a different one of the vertical flip operation, the horizontal flip operation, and the clipping operation. 16. 
The device of claim 13, wherein the display processor further comprises a crossbar configured to: receive the two or more processed layers; and output the two or more processed layers to two different mixing units of the display processor or to a same one of the two different mixing units of the display processor. 17. The device of claim 16, wherein the crossbar comprises a crossbar having a non-blocking switch network architecture. 18. The device of claim 13, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein the single hardware image fetcher pipeline is configured to: retrieve a portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieve a portion of a second one of the two or more independent layers that is to be displayed to the first one of the two or more displays. 19. The device of claim 13, wherein at least one of the two or more independent layers is split between two or more of the display units, and wherein the single hardware image fetcher pipeline is configured to: retrieve a first portion of a first one of the two or more independent layers that is to be displayed to a first one of the two or more displays; and retrieve a second portion of the first one of the two or more independent layers that is to be displayed to a second one of the two or more displays. 20. The device of claim 13, wherein at least two of the two or more independent layers overlap in the one of the frames to be displayed by the one or more display units. 21. The device of claim 13, wherein at least two of the two or more independent layers are side-by-side in the one of the frames to be displayed by the one or more display units. 22. 
The device of claim 13, wherein at least two of the two or more independent layers are adjacent to each other such that there are no intervening pixels between the at least two of the two or more independent layers in the one of the frames to be displayed by the one or more display units. 23. The device of claim 13, wherein at least two of the two or more independent layers are oriented one above the other in the one of the frames to be displayed by the one or more display units. 24. The device of claim 13, wherein the device is coupled to the one or more display units, the one or more display units configured to display the one of the frames. 25. A device for displaying frames, the device comprising: a means for storing two or more independent layers; and a single means for concurrently retrieving, from the means for storing, two or more independent layers, concurrently processing the two or more independent layers, and concurrently outputting, by two or more outputs of the single means, the two or more processed independent layers for composition to form one of the frames to be displayed by one or more display units. 26. The device of claim 25, wherein the means for concurrently processing the two or more independent layers comprises: means for performing a first operation with respect to a first one of the two or more layers; and means for performing a second, different operation with respect to a second one of the two or more layers concurrent to performing the first operation. 27. The device of claim 26, wherein the first operation comprises one of a vertical flip operation, a horizontal flip operation, and a clipping operation, and wherein the second operation comprises a different one of the vertical flip operation, the horizontal flip operation, and the clipping operation. 28. 
The device of claim 25, further comprising: means for receiving the two or more processed layers; and means for outputting the two or more processed layers to two different mixing units of the display processor or to a same one of the two different mixing units of the display processor. 29. The device of claim 28, wherein the crossbar comprises a crossbar having a non-blocking switch network architecture. 30. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause a single hardware image fetcher pipeline of a display processor to: concurrently retrieve, from a layer buffer, two or more independent layers; concurrently process the two or more independent layers; and concurrently output, by two or more outputs of the single hardware image fetcher pipeline, the two or more processed independent layers for composition to form a frame to be displayed by one or more display units.
2,600
9,922
9,922
14,555,510
2,624
An apparatus includes a multiplexed liquid crystal display (LCD) controller. The LCD controller operates in at least first and second phases of operation. The LCD controller drives a first plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation. The LCD controller selectively couples to a node at least some of the plurality of signal lines between the first and second phases of operation depending on data provided to the LCD controller.
1. An apparatus, comprising a multiplexed liquid crystal display (LCD) controller operating in at least first and second phases of operation, the LCD controller to drive a first plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation, wherein the LCD controller selectively couples to a node at least some of the first plurality of signal lines between the first and second phases of operation depending on data provided to the LCD controller. 2. The apparatus according to claim 1, wherein the first plurality of signal lines comprises a plurality of segment lines. 3. The apparatus according to claim 1, wherein a period of time between the first and second phases of operation comprises a reset period. 4. The apparatus according to claim 1, wherein the at least some of the first plurality of signal lines are floated between the first and second phases of operation if data for a segment excited during the first phase of operation matches data for a segment to be excited during the second phase of operation. 5. The apparatus according to claim 4, wherein the at least some of the first plurality of signal lines are coupled to the node between the first and second phases of operation if data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 6. The apparatus according to claim 1, wherein an order of scanning a second plurality of signal lines is selected depending on the data provided to the LCD controller. 7. The apparatus according to claim 6, wherein the second plurality of signal lines comprises a plurality of common lines. 8. 
The apparatus according to claim 6, wherein the order of scanning a second plurality of signal lines is selected depending on whether data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 9. The apparatus according to claim 1, wherein the LCD controller selectively couples the first plurality of signal lines to (a) a ground potential; or (b) a majority voltage of a plurality of common lines for the first phase of operation; or (c) a majority voltage of a plurality of segment lines for the second phase of operation. 10. An apparatus, comprising: a multiplexed liquid crystal display (LCD), having at least first and second phases of operation; and a controller coupled to the LCD, wherein the controller is to selectively perform segment resetting between the first and second phases of operation of the LCD depending on data provided to the LCD controller. 11. The apparatus according to claim 10, wherein the controller performs segment resetting by selectively coupling a plurality of segment lines of the LCD to a voltage if data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 12. The apparatus according to claim 11, wherein the controller floats the plurality of segment lines of the LCD between the first and second phases of operation if data for a segment excited during the first phase of operation matches data for a segment to be excited during the second phase of operation. 13. The apparatus according to claim 12, wherein an order of scanning a second plurality of signal lines of the LCD is selected depending on the data provided to the LCD controller. 14. The apparatus according to claim 13, wherein the second plurality of signal lines of the LCD comprises common lines of the LCD. 15. 
The apparatus according to claim 11, wherein the voltage comprises (a) a ground potential of the apparatus, (b) a bias voltage, (c) a majority voltage of common lines of the LCD, or (d) a majority voltage of the segment lines of the LCD. 16. The apparatus according to claim 10, wherein the controller performs segment resetting by selectively coupling a plurality of segment lines of the LCD to a plurality of common lines of the LCD. 17. A method of operating a liquid crystal display (LCD), the method comprising: operating the LCD in a first phase of operation; after operating the LCD in the first phase of operation, selectively performing segment resetting based on data provided to the LCD controller; and operating the LCD in a second phase of operation after performing selective segment resetting. 18. The method according to claim 17, wherein performing segment resetting further comprises selectively coupling a plurality of segment lines of the LCD to a voltage if data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 19. The method according to claim 18, wherein the controller floats the plurality of segment lines of the LCD between the first and second phases of operation if data for a segment excited during the first phase of operation matches data for a segment to be excited during the second phase of operation. 20. The method according to claim 19, further comprising selecting an order of scanning a plurality of common lines of the LCD depending on the data provided to the LCD controller.
An apparatus includes a multiplexed liquid crystal display (LCD) controller. The LCD controller operates in at least first and second phases of operation. The LCD controller drives a first plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation. The LCD controller selectively couples to a node at least some of the plurality of signal lines between the first and second phases of operation depending on data provided to the LCD controller.1. An apparatus, comprising a multiplexed liquid crystal display (LCD) controller operating in at least first and second phases of operation, the LCD controller to drive a first plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation, wherein the LCD controller selectively couples to a node at least some of the first plurality of signal lines between the first and second phases of operation depending on data provided to the LCD controller. 2. The apparatus according to claim 1, wherein the first plurality of signal lines comprises a plurality of segment lines. 3. The apparatus according to claim 1, wherein a period of time between the first and second phases of operation comprises a reset period. 4. The apparatus according to claim 1, wherein the at least some of the first plurality of signal lines are floated between the first and second phases of operation if data for a segment excited during the first phase of operation matches data for a segment to be excited during the second phase of operation. 5. The apparatus according to claim 4, wherein the at least some of the first plurality of signal lines are coupled to the node between the first and second phases of operation if data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 6. 
The apparatus according to claim 1, wherein an order of scanning a second plurality of signal lines is selected depending on the data provided to the LCD controller. 7. The apparatus according to claim 6, wherein the second plurality of signal lines comprises a plurality of common lines. 8. The apparatus according to claim 6, wherein the order of scanning a second plurality of signal lines is selected depending on whether data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 9. The apparatus according to claim 1, wherein the LCD controller selectively couples the first plurality of signal lines to (a) a ground potential; or (b) a majority voltage of a plurality of common lines for the first phase of operation; or (c) a majority voltage of a plurality of segment lines for the second phase of operation. 10. An apparatus, comprising: a multiplexed liquid crystal display (LCD), having at least first and second phases of operation; and a controller coupled to the LCD, wherein the controller is to selectively perform segment resetting between the first and second phases of operation of the LCD depending on data provided to the LCD controller. 11. The apparatus according to claim 10, wherein the controller performs segment resetting by selectively coupling a plurality of segment lines of the LCD to a voltage if data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 12. The apparatus according to claim 11, wherein the controller floats the plurality of segment lines of the LCD between the first and second phases of operation if data for a segment excited during the first phase of operation matches data for a segment to be excited during the second phase of operation. 13. 
The apparatus according to claim 12, wherein an order of scanning a second plurality of signal lines of the LCD is selected depending on the data provided to the LCD controller. 14. The apparatus according to claim 13, wherein the second plurality of signal lines of the LCD comprises common lines of the LCD. 15. The apparatus according to claim 11, wherein the voltage comprises (a) a ground potential of the apparatus, (b) a bias voltage, (c) a majority voltage of common lines of the LCD, or (d) a majority voltage of the segment lines of the LCD. 16. The apparatus according to claim 10, wherein the controller performs segment resetting by selectively coupling a plurality of segment lines of the LCD to a plurality of common lines of the LCD. 17. A method of operating a liquid crystal display (LCD), the method comprising: operating the LCD in a first phase of operation; after operating the LCD in the first phase of operation, selectively performing segment resetting based on data provided to the LCD controller; and operating the LCD in a second phase of operation after performing selective segment resetting. 18. The method according to claim 17, wherein performing segment resetting further comprises selectively coupling a plurality of segment lines of the LCD to a voltage if data for a segment excited during the first phase of operation differs from data for a segment to be excited during the second phase of operation. 19. The method according to claim 18, wherein the controller floats the plurality of segment lines of the LCD between the first and second phases of operation if data for a segment excited during the first phase of operation matches data for a segment to be excited during the second phase of operation. 20. The method according to claim 19, further comprising selecting an order of scanning a plurality of common lines of the LCD depending on the data provided to the LCD controller.
2,600
9,923
9,923
15,418,687
2,656
An audio device with a number of microphones that are configured into a microphone array. An audio signal processing system in communication with the microphone array is configured to derive a plurality of audio signals from the plurality of microphones, use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sounds than to undesired sounds, categorize received sounds as one of desired sounds or undesired sounds, and use the categorized received sounds and the categories of the received sounds to modify the filter topology.
1. An audio device, comprising: a plurality of spatially-separated microphones that are configured into a microphone array, wherein the microphones are adapted to receive sound; and a processing system in communication with the microphone array and configured to: derive a plurality of audio signals from the plurality of microphones; use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sound than to undesired sound; categorize received sounds as one of desired sounds or undesired sounds; and use the categorized received sounds, and the categories of the received sounds, to modify the filter topology. 2. The audio device of claim 1, further comprising a detection system that is configured to detect a type of sound source from which audio signals are being derived. 3. The audio device of claim 2, wherein the audio signals derived from a certain type of sound source are not used to modify the filter topology. 4. The audio device of claim 3, wherein the certain type of sound source comprises a voice-based sound source. 5. The audio device of claim 2, wherein the detection system comprises a voice activity detector that is configured to be used to detect a voice-based sound source. 6. The audio device of claim 1, wherein the audio signal processing system is further configured to compute a confidence score for received sounds, wherein the confidence score is used in the modification of the filter topology. 7. The audio device of claim 6, wherein the confidence score is used to weight the contribution of the received sounds to the modification of the filter topology. 8. The audio device of claim 6, wherein computing the confidence score is based on a degree of confidence that received sounds include a wakeup word. 9. 
The audio device of claim 1, wherein received sounds are collected over time, and categorized received sounds that are collected over a particular time-period are used to modify the filter topology. 10. The audio device of claim 1, wherein the received sound collection time-period is fixed. 11. The audio device of claim 1, wherein older received sounds have less effect on filter topology modification than do newer collected received sounds. 12. The audio device of claim 11, wherein the effect of collected received sounds on the filter topology modification decays at a constant rate. 13. The audio device of claim 1, further comprising a detection system that is configured to detect a change in the environment of the audio device. 14. The audio device of claim 13, wherein which of the collected received sounds are used to modify the filter topology is based on the detected change in the environment. 15. The audio device of claim 14, wherein when a change in the environment of the audio device is detected, received sounds that were collected before the change in the environment of the audio device was detected are no longer used to modify the filter topology. 16. The audio device of claim 1, further comprising a communication system that is configured to transmit audio signals to a server. 17. The audio device of claim 16, wherein the communication system is further configured to receive modified filter topology parameters from the server. 18. The audio device of claim 17, wherein a modified filter topology is based on a combination of the modified filter topology parameters received from the server, and categorized received sounds. 19. The audio device of claim 1, wherein the audio signals comprise multi-channel representations of sound fields detected by the microphone array, comprising at least one channel for each microphone. 20. The audio device of claim 19, wherein the audio signals further comprise metadata. 21. 
The audio device of claim 1, wherein the audio signals comprise multi-channel audio recordings. 22. The audio device of claim 1, wherein the audio signals comprise cross-power spectral density matrices. 23. The audio device of claim 1, where desired and undesired sounds modify the filter topology differently. 24. An audio device, comprising: a plurality of spatially-separated microphones that are configured into a microphone array, wherein the microphones are adapted to receive sound; and a processing system in communication with the microphone array and configured to: derive a plurality of audio signals from the plurality of microphones; use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sound than to undesired sound; categorize received sounds as one of desired sounds or undesired sounds; determine a confidence score for received sounds; and use the categorized received sounds, the categories of the received sounds, and the confidence score, to modify the filter topology, wherein received sounds are collected over time, and categorized received sounds that are collected over a particular time-period are used to modify the filter topology. 25. 
An audio device, comprising: a plurality of spatially-separated microphones that are configured into a microphone array, wherein the microphones are adapted to receive sound; a sound source detection system that is configured to detect a type of sound source from which audio signals are being derived; an environmental change detection system that is configured to detect a change in the environment of the audio device; and a processing system in communication with the microphone array, the sound source detection system, and the environmental change detection system, and configured to: derive a plurality of audio signals from the plurality of microphones; use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sound than to undesired sound; categorize received sounds as one of desired sounds or undesired sounds; determine a confidence score for received sounds; and use the categorized received sounds, the categories of the received sounds, and the confidence score, to modify the filter topology, wherein received sounds are collected over time, and categorized received sounds that are collected over a particular time-period are used to modify the filter topology. 26. The audio device of claim 25, further comprising a communication system that is configured to transmit audio signals to a server, and wherein the audio signals comprise multi-channel representations of sound fields detected by the microphone array, comprising at least one channel for each microphone.
An audio device with a number of microphones that are configured into a microphone array. An audio signal processing system in communication with the microphone array is configured to derive a plurality of audio signals from the plurality of microphones, use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sounds than to undesired sounds, categorize received sounds as one of desired sounds or undesired sounds, and use the categorized received sounds and the categories of the received sounds to modify the filter topology.1. An audio device, comprising: a plurality of spatially-separated microphones that are configured into a microphone array, wherein the microphones are adapted to receive sound; and a processing system in communication with the microphone array and configured to: derive a plurality of audio signals from the plurality of microphones; use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sound than to undesired sound; categorize received sounds as one of desired sounds or undesired sounds; and use the categorized received sounds, and the categories of the received sounds, to modify the filter topology. 2. The audio device of claim 1, further comprising a detection system that is configured to detect a type of sound source from which audio signals are being derived. 3. The audio device of claim 2, wherein the audio signals derived from a certain type of sound source are not used to modify the filter topology. 4. The audio device of claim 3, wherein the certain type of sound source comprises a voice-based sound source. 5. The audio device of claim 2, wherein the detection system comprises a voice activity detector that is configured to be used to detect a voice-based sound source. 6. 
The audio device of claim 1, wherein the audio signal processing system is further configured to compute a confidence score for received sounds, wherein the confidence score is used in the modification of the filter topology. 7. The audio device of claim 6, wherein the confidence score is used to weight the contribution of the received sounds to the modification of the filter topology. 8. The audio device of claim 6, wherein computing the confidence score is based on a degree of confidence that received sounds include a wakeup word. 9. The audio device of claim 1, wherein received sounds are collected over time, and categorized received sounds that are collected over a particular time-period are used to modify the filter topology. 10. The audio device of claim 1, wherein the received sound collection time-period is fixed. 11. The audio device of claim 1, wherein older received sounds have less effect on filter topology modification than do newer collected received sounds. 12. The audio device of claim 11, wherein the effect of collected received sounds on the filter topology modification decays at a constant rate. 13. The audio device of claim 1, further comprising a detection system that is configured to detect a change in the environment of the audio device. 14. The audio device of claim 13, wherein which of the collected received sounds are used to modify the filter topology is based on the detected change in the environment. 15. The audio device of claim 14, wherein when a change in the environment of the audio device is detected, received sounds that were collected before the change in the environment of the audio device was detected are no longer used to modify the filter topology. 16. The audio device of claim 1, further comprising a communication system that is configured to transmit audio signals to a server. 17. 
The audio device of claim 16, wherein the communication system is further configured to receive modified filter topology parameters from the server. 18. The audio device of claim 17, wherein a modified filter topology is based on a combination of the modified filter topology parameters received from the server, and categorized received sounds. 19. The audio device of claim 1, wherein the audio signals comprise multi-channel representations of sound fields detected by the microphone array, comprising at least one channel for each microphone. 20. The audio device of claim 19, wherein the audio signals further comprise metadata. 21. The audio device of claim 1, wherein the audio signals comprise multi-channel audio recordings. 22. The audio device of claim 1, wherein the audio signals comprise cross-power spectral density matrices. 23. The audio device of claim 1, where desired and undesired sounds modify the filter topology differently. 24. An audio device, comprising: a plurality of spatially-separated microphones that are configured into a microphone array, wherein the microphones are adapted to receive sound; and a processing system in communication with the microphone array and configured to: derive a plurality of audio signals from the plurality of microphones; use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sound than to undesired sound; categorize received sounds as one of desired sounds or undesired sounds; determine a confidence score for received sounds; and use the categorized received sounds, the categories of the received sounds, and the confidence score, to modify the filter topology, wherein received sounds are collected over time, and categorized received sounds that are collected over a particular time-period are used to modify the filter topology. 25. 
An audio device, comprising: a plurality of spatially-separated microphones that are configured into a microphone array, wherein the microphones are adapted to receive sound; a sound source detection system that is configured to detect a type of sound source from which audio signals are being derived; an environmental change detection system that is configured to detect a change in the environment of the audio device; and a processing system in communication with the microphone array, the sound source detection system, and the environmental change detection system, and configured to: derive a plurality of audio signals from the plurality of microphones; use prior audio data to operate a filter topology that processes audio signals so as to make the array more sensitive to desired sound than to undesired sound; categorize received sounds as one of desired sounds or undesired sounds; determine a confidence score for received sounds; and use the categorized received sounds, the categories of the received sounds, and the confidence score, to modify the filter topology, wherein received sounds are collected over time, and categorized received sounds that are collected over a particular time-period are used to modify the filter topology. 26. The audio device of claim 25, further comprising a communication system that is configured to transmit audio signals to a server, and wherein the audio signals comprise multi-channel representations of sound fields detected by the microphone array, comprising at least one channel for each microphone.
2,600
9,924
9,924
15,411,918
2,616
One embodiment of the present invention includes a parallel processing unit (PPU) that performs pixel shading at variable granularities. For effects that vary at a low frequency across a pixel block, a coarse shading unit performs the associated shading operations on a subset of the pixels in the pixel block. By contrast, for effects that vary at a high frequency across the pixel block, fine shading units perform the associated shading operations on each pixel in the pixel block. Because the PPU implements coarse shading units and fine shading units, the PPU may tune the shading rate per-effect based on the frequency of variation across each pixel group. By contrast, conventional PPUs typically compute all effects per-pixel, performing redundant shading operations for low frequency effects. Consequently, to produce similar image quality, the PPU consumes less power and increases the rendering frame rate compared to a conventional PPU.
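The abstract's scheme can be sketched as follows: a low-frequency effect is evaluated once per pixel block (coarse) and its result reused, a high-frequency effect is evaluated per pixel (fine), and the two are combined into a composite value. This is an illustrative sketch, not NVIDIA's hardware implementation; the effect functions and the additive composition are assumptions.

```python
# Illustrative variable-granularity shading over one pixel block.
def shade_block(block, coarse_effect, fine_effect, low_frequency):
    """block: list of per-pixel inputs. Returns one composite value per pixel."""
    if low_frequency:
        # Coarse shading: evaluate once on a representative pixel and
        # broadcast the result across the block, avoiding redundant work.
        coarse = [coarse_effect(block[0])] * len(block)
    else:
        coarse = [coarse_effect(p) for p in block]
    fine = [fine_effect(p) for p in block]          # fine shading: always per pixel
    return [c + f for c, f in zip(coarse, fine)]    # composite = coarse + fine
```

With `low_frequency=True` the coarse effect runs once instead of once per pixel, which is the power and throughput saving the abstract describes.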
1. A computer-implemented method, comprising: selecting a first pixel from a plurality of pixels based on which pixels included in the plurality of pixels are covered by a graphics primitive; performing a first pixel shading operation on the first pixel included in the subset of pixels to compute a first coarse shading value; performing a second pixel shading operation on the first pixel to compute a first fine shading value; and computing a first composite shading value for the first pixel based on the first coarse shading value and the first fine shading value. 2. The method of claim 1, further comprising determining a second coarse shading value for a second pixel based on the first coarse shading value. 3. The method of claim 2, wherein determining the second coarse shading value comprises: performing the first pixel shading operation on a third pixel to compute a third coarse shading value for the third pixel; performing an interpolation operation between the first coarse shading value and the third coarse shading value to determine a first estimated value; determining that a low frequency variation exists across a first set of pixels that includes the first pixel, the second pixel, and the third pixel; and setting the second coarse shading value to equal the first estimated value. 4. The method of claim 3, wherein determining that a low frequency variation exists comprises: evaluating at least one of the first coarse shading value, the third coarse shading value, and the first estimated value; and deactivating a first granularity bit that is associated with the first set of pixels. 5. The method of claim 4, wherein the first set of pixels comprises a pixel block that includes a first quad, and deactivating the first granularity bit causes the first quad to be associated with a first warp that includes a first thread that computes the first composite shading value. 6. 
The method of claim 2, further comprising performing the second pixel shading operation on the second pixel to compute a second fine shading value for the second pixel; and computing a second composite shading value for the second pixel based on the second coarse shading value and the second fine shading value. 7. The method of claim 6, wherein computing the second composite shading value comprises combining the second coarse shading value, the second fine shading value, and a third fine shading value. 8. The method of claim 7, wherein determining the third fine shading value comprises: performing a third pixel shading operation on the first pixel to determine a fourth coarse shading value for the first pixel; determining that a high frequency variation exists across a first set of pixels that includes the first pixel and the second pixel; and performing the third pixel shading operation on the second pixel. 9. The method of claim 8, wherein determining that a high frequency variation exists comprises: evaluating the fourth coarse shading value; and activating a first granularity bit that is associated with the first set of pixels. 10. The method of claim 2, wherein a first set of pixels that includes the first pixel and the second pixel comprises a pixel block, and further comprising: receiving fragments associated with the pixel block; and determining a plurality of coarse shading pixels that includes the first pixel and does not include the second pixel. 11. The method of claim 10, wherein the fragments specify a pixel block visibility mask, and determining the plurality of coarse shading pixels comprises selecting the plurality of coarse shading pixels based on the pixel block visibility mask. 12. 
A non-transitory computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the steps of: selecting a first pixel from a plurality of pixels based on which pixels included in the plurality of pixels are covered by a graphics primitive; performing a first pixel shading operation on the first pixel included in the subset of pixels to compute a first coarse shading value; performing a second pixel shading operation on the first pixel to compute a first fine shading value; and computing a first composite shading value for the first pixel based on the first coarse shading value and the first fine shading value. 13. The non-transitory computer-readable medium of claim 12, further comprising determining a second coarse shading value for a second pixel based on the first coarse shading value. 14. The non-transitory computer-readable medium of claim 13, wherein determining the second coarse shading value comprises: performing the first pixel shading operation on a third pixel to compute a third coarse shading value for the third pixel; performing an interpolation operation between the first coarse shading value and the third coarse shading value to determine a first estimated value; determining that a low frequency variation exists across a first set of pixels that includes the first pixel, the second pixel, and the third pixel; and setting the second coarse shading value to equal the first estimated value. 15. The non-transitory computer-readable medium of claim 14, wherein determining that a low frequency variation exists comprises: evaluating at least one of the first coarse shading value, the third coarse shading value, and the first estimated value; and deactivating a first granularity bit that is associated with the first set of pixels. 16. 
The non-transitory computer-readable medium of claim 15, wherein the first set of pixels comprises a pixel block that includes a first quad, and deactivating the first granularity bit causes the first quad to be associated with a first warp that includes a first thread that computes the first composite shading value. 17. The non-transitory computer-readable medium of claim 13, further comprising performing the second pixel shading operation on the second pixel to compute a second fine shading value for the second pixel; and computing a second composite shading value for the second pixel based on the second coarse shading value and the second fine shading value. 18. The non-transitory computer-readable medium of claim 17, wherein computing the second composite shading value comprises combining the second coarse shading value, the second fine shading value, and a third fine shading value. 19. The non-transitory computer-readable medium of claim 18, wherein determining the third fine shading value comprises: performing a third pixel shading operation on the first pixel to determine a fourth coarse shading value for the first pixel; determining that a high frequency variation exists across a first set of pixels that includes the first pixel and the second pixel; and performing the third pixel shading operation on the second pixel. 20. The non-transitory computer-readable medium of claim 19, wherein determining that a high frequency variation exists comprises: evaluating the fourth coarse shading value; and activating a first granularity bit that is associated with the first set of pixels. 21. The non-transitory computer-readable medium of claim 13, wherein a first set of pixels that includes the first pixel and the second pixel comprises a pixel block, and further comprising: receiving fragments associated with the pixel block; and determining a plurality of coarse shading pixels that includes the first pixel and does not include the second pixel. 22. 
The non-transitory computer-readable medium of claim 21, wherein the fragments specify a pixel block visibility mask, and determining the plurality of coarse shading pixels comprises selecting the plurality of coarse shading pixels based on the pixel block visibility mask. 23. A system, comprising: a memory that includes instructions; and a processing device coupled to the memory and, when executing the instructions, is configured to: select a first pixel from a plurality of pixels based on which pixels included in the plurality of pixels are covered by a graphics primitive; perform a first pixel shading operation on the first pixel included in the subset of pixels to compute a first coarse shading value; perform a second pixel shading operation on the first pixel to compute a first fine shading value; and compute a first composite shading value for the first pixel based on the first coarse shading value and the first fine shading value. 24. The system of claim 23, wherein the processing device comprises at least a portion of a graphics processing cluster. 25. The system of claim 23, wherein the processing device is implemented within a graphics processing unit.
2,600
9,925
9,925
15,840,725
2,622
An electronic device may have a hinge that allows the device to be flexed about a bend axis. A display may span the bend axis. To facilitate bending about the bend axis without damage when the display is cold, a portion of the display that overlaps the bend axis may be selectively heated. The portion of the display that overlaps the bend axis may be self-heated by illuminating pixels in the portion of the display that overlap the bend axis or may be heated using a heating element or other heating structure that provides heat to the portion of the display overlapping the bend axis. Control circuitry may engage a latching mechanism that prevents opening and closing of the electronic device when the temperature of the portion of the display that overlaps the bend axis is below a predetermined temperature.
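The control behavior in the abstract reduces to a simple rule: while the bend-region temperature is below the predetermined threshold, keep the latch engaged and self-heat the region by lighting its pixels; once warm, release the latch and stop heating. The sketch below assumes a hypothetical threshold and device state dictionary, neither of which is specified by the patent.

```python
# Hedged sketch of the cold-display control flow from the abstract.
COLD_THRESHOLD_C = 10.0  # hypothetical "predetermined temperature"

def update_bend_region(temp_c, device):
    """Update latch and bend-region pixel state from a temperature reading."""
    if temp_c < COLD_THRESHOLD_C:
        device["latch_engaged"] = True       # prevent opening/closing while cold
        device["bend_pixels_lit"] = True     # self-heat by illuminating bend pixels
    else:
        device["latch_engaged"] = False      # warm enough to fold safely
        device["bend_pixels_lit"] = False
    return device
```

In practice the claims also allow triggering this check from accelerometer or user-input events rather than polling continuously.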
1. An electronic device, comprising: a housing that bends about a bend axis; a display in the housing; a temperature sensor; and control circuitry configured to heat a portion of the display that overlaps the bend axis by illuminating pixels in the portion of the display in response to temperature information from the temperature sensor. 2. The electronic device defined in claim 1 wherein the housing has a first housing portion and a second housing portion and wherein the first and second housing portions rotate with respect to each other about the bend axis, the electronic device further comprising a magnetic latch having an electromagnet that is controlled by the control circuitry to hold the first housing portion to the second housing portion when the temperature information corresponds to a temperature for the portion of the display that is below a predetermined temperature. 3. The electronic device defined in claim 1 wherein the housing has a first housing portion and a second housing portion and wherein the first and second housing portions rotate with respect to each other about the bend axis, the electronic device further comprising a mechanical latch having an actuator that is controlled by the control circuitry to hold the first housing portion to the second housing portion when the temperature information corresponds to a temperature for the portion of the display that is below a predetermined temperature. 4. The electronic device defined in claim 1 wherein the display is an organic light-emitting diode display having a flexible display portion that overlaps the bend axis. 5. The electronic device defined in claim 1 further comprising an accelerometer, wherein the control circuitry is configured to heat the portion of the display based on information from the accelerometer. 6. 
The electronic device defined in claim 1 wherein the display has first and second display portions adjacent to the portion of the display that overlaps the bend axis and wherein the control circuitry is configured to heat the portion of the display that overlaps the bend axis by illuminating the pixels in the portion of the display that overlaps the bend axis without illuminating pixels in the first and second display portions. 7. The electronic device defined in claim 1 further comprising a heat spreader layer having portions adjacent to the portion of the display that overlaps the bend axis. 8. The electronic device defined in claim 1 further comprising an ohmic heating element that overlaps the bend axis and that extends across the housing parallel to the bend axis. 9. The electronic device defined in claim 1 wherein the portion of the display overlapping the bend axis has an elongated strip shape and extends between opposing edges of the display. 10. An electronic device, comprising: first and second housing structures coupled by a flexible structure that lies along a bend axis; a display having first, second, and third portions, wherein the first and third portions are coupled respectively to the first and second housing structures and do not overlap the bend axis and wherein the second portion lies between the first and third portions and has a strip shape extending along the bend axis between opposing edges of the display; and control circuitry configured to heat the second portion more than the first and third portions by selectively illuminating pixels in the second portion in response to temperature information. 11. The electronic device defined in claim 10 wherein the control circuitry is configured to heat the second portion by displaying a screen saver in the second portion in response to the temperature information. 12. 
The electronic device defined in claim 10 further comprising a temperature sensor, wherein the control circuitry is configured to receive the temperature information from the temperature sensor. 13. The electronic device defined in claim 12 further comprising a latching mechanism, wherein the control circuitry is configured to: engage the latching mechanism to hold the first and second housing structures to each other in response to measuring a temperature with the temperature sensor that is below a predetermined temperature; and disengage the latching mechanism to release the first and second housing structures from each other in response to measuring a temperature with the temperature sensor that is above the predetermined temperature. 14. The electronic device defined in claim 13 wherein the latching mechanism comprises an electromagnet coupled to the first housing structure. 15. The electronic device defined in claim 13 wherein the latching mechanism includes an actuator controlled by the control circuitry. 16. The electronic device defined in claim 10 further comprising a motion sensor, wherein the control circuitry is configured to heat the second portion more than the first and third portions by selectively illuminating the pixels in the second portion in response to information from the motion sensor. 17. The electronic device defined in claim 10 further comprising a sensor configured to gather user input, wherein the control circuitry is configured to heat the second portion more than the first and third portions by selectively illuminating the pixels in the second portion in response to the user input. 18. 
A foldable electronic device, comprising: a foldable display that folds along a bend axis; and control circuitry that illuminates pixels in a portion of the display overlapping the bend axis to heat the portion of the display overlapping the bend axis in response to determining that the portion of the display overlapping the bend axis has a temperature that is below a predetermined temperature. 19. The foldable electronic device defined in claim 18 further comprising: a temperature sensor that measures the temperature of the portion of the display overlapping the bend axis; and a user input sensor, wherein the control circuitry is configured to illuminate the pixels in the portion of the display overlapping the bend axis in response to user input received with the user input sensor. 20. The foldable electronic device defined in claim 19 further comprising: an accelerometer, wherein the control circuitry is configured to illuminate the pixels in the portion of the display overlapping the bend axis in response to data from the accelerometer.
2,600
9,926
9,926
13,044,895
2,619
On a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each corresponds to one or more physical features in the physical environment. The method further includes, based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme.
1. On a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located, a method of providing theme-based augmenting of the photorepresentative view, the method comprising: receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view; obtaining, optically and in real time, environment information of the physical environment; generating a spatial model of the physical environment based on the environment information; identifying, via analysis of the spatial model, one or more features within the spatial model that each corresponds to one or more physical features in the physical environment; and based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme. 2. The method of claim 1, further comprising, upon analysis of the spatial model, selecting the augmentation from a plurality of augmentations associated with the theme, such selection being based on the one or more features identified. 3. The method of claim 1, wherein the display is a head-up display. 4. The method of claim 1, wherein the display is a head-mounted display. 5. The method of claim 1, wherein displaying the augmentation comprises displaying an image which, from the user's vantage point, overlays the physical feature in the physical environment corresponding to the identified feature. 6. The method of claim 1, wherein displaying the augmentation comprises applying a virtual skin to the identified feature of the spatial model and displaying an image of the virtual skin which, from the user's vantage point, overlays the corresponding physical feature in the physical environment. 7. 
The method of claim 1, wherein the display is an optical see-through display configured to provide the photorepresentative view via one or more sufficiently transparent portions of the display through which the physical environment is visible to the user. 8. The method of claim 7, wherein displaying the augmentation comprises overlaying a virtual object onto the identified feature of the spatial model and displaying an image of the virtual object which, from the user's vantage point, overlays the corresponding physical feature in the physical environment visible to the user through the display. 9. The method of claim 1, wherein the display is an immersive display configured to provide the photorepresentative view by displaying a fully rendered image of the spatial model. 10. The method of claim 9, wherein displaying the augmentation comprises modifying one or more features identified within the spatial model, and in response, displaying a fully rendered image of the spatial model which reflects such modification of the spatial model. 11. The method of claim 1, wherein obtaining the environment information comprises detecting the environment information via one or more sensors associated with the display. 12. The method of claim 1, further comprising maintaining displaying of the augmentation as the user moves through the physical environment. 13. 
A display system for providing theme-based augmentation via display output, comprising: a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located; one or more sensors configured to obtain, optically and in real time, environment information of the physical environment; a data-holding subsystem operatively coupled with the display and the one or more sensors, the data-holding subsystem containing instructions executable by a logic subsystem to: receive, from the user, an input selecting a theme for use in augmenting the photorepresentative view; generate a spatial model of the physical environment based on the environment information; identify, via analysis of the spatial model, one or more features within the spatial model that each corresponds to one or more physical features in the physical environment; and based on such analysis, display, on the display, an augmentation of an identified feature, the augmentation being associated with the theme. 14. The display system of claim 13, wherein the display is a head-up display. 15. The display system of claim 13, wherein the display is a head-mounted display. 16. The display system of claim 13, wherein the display is an optical see-through display configured to provide the photorepresentative view via one or more sufficiently transparent portions of the display through which the physical environment is visible to the user. 17. The display system of claim 13, wherein the display is an immersive display configured to provide the photorepresentative view by displaying a fully rendered image of the spatial model. 18. The display system of claim 13, wherein the one or more sensors comprise an image capture device configured to obtain one or more depth images of the physical environment. 19. 
On a display, a method of providing theme-based augmentation via display output, the method comprising: receiving, from a user, an input selecting a theme for augmenting a photorepresentative view, the photorepresentative view being from a vantage point of the user and of a physical environment in which the user is located; detecting, optically and in real time, environment information of the physical environment via one or more sensors positioned proximal to the display; generating a spatial model of the physical environment based on the environment information; identifying, via analysis of the spatial model, one or more features within the spatial model that each corresponds to one or more physical features in the physical environment; selecting an augmentation from a plurality of augmentations associated with the theme based on the one or more features identified; and in response, displaying, on the display, the augmentation of an identified feature while still providing at least some of the photorepresentative view of the physical environment. 20. The method of claim 19, wherein displaying the augmentation comprises dynamically displaying the augmentation as the user moves within the physical environment to cause consequent change in the photorepresentative view.
2,600
9,927
9,927
14,557,991
2,625
Systems and methods for intelligent illumination of a controller are disclosed. One method comprises receiving, by a computing device, first information relating to a current environment of a controller, wherein the controller comprises a plurality of user engageable interfaces, and wherein at least a subset of the user engageable interfaces is configured to be independently and selectively illuminated. Second information can be received relating to a current operating state of the controller. A portion of the plurality of user engageable interfaces can be selectively illuminated based upon at least the first information and the second information.
1. A method comprising: receiving, by a computing device, first information relating to a current environment of a controller, wherein the controller comprises a plurality of user engageable interfaces, and wherein at least a portion of the user engageable interfaces are configured to be independently and selectively illuminated; receiving, by the computing device, second information relating to a current operating state of one or more of the controller and a controlled device, wherein the current operating state comprises one or more of a location, an orientation, a relative position of the controller and the controlled device, and a use of the one or more of the controller and a controlled device; determining, by the computing device, an illumination signature for the controller based at least in part on the received first information and the received second information; and causing illumination of only a subset of the plurality of user engageable interfaces based upon the illumination signature. 2. The method of claim 1, wherein the first information comprises ambient light level. 3. The method of claim 1, wherein the first information comprises time of day, weather conditions, ambient sound level, or premises security state, or a combination thereof. 4. The method of claim 1, wherein the first information is received from a sensor co-located with the controller, a sensor co-located with the controlled device, a second device configured to be controlled by the controller, a premises security system, a communication gateway, or a network device, or a combination thereof. 5. The method of claim 1, wherein user engageable interfaces comprise one or more of back-lit keys and a touch screen. 6. 
The method of claim 1, wherein the second information is received from a sensor co-located with the controller, a sensor co-located with the controlled device, a second device configured to be controlled by the controller, a premises security system, a communication gateway, or a network device, or a combination thereof. 7. The method of claim 1, wherein the illumination signature represents a pattern of conditional values relating to at least the environment and operating state of the controller. 8. The method of claim 1, wherein the illumination signature is based at least in part on one or more of historical data for the controller, predictive data for the controller, and aggregated data with at least one other controller. 9. The method of claim 1, wherein the causing illumination of only the subset of the plurality of user engageable interfaces comprises causing illumination in a pre-determined illumination pattern. 10. A controller comprising: a housing; a communication element disposed adjacent the housing and configured to transmit a signal for controlling operations of a controlled device; a plurality of user engageable interfaces disposed adjacent the housing, wherein at least a subset of the user engageable interfaces is configured to be independently and selectively illuminated, wherein the user engageable interfaces are configured to be activated by a user to cause the signal to be transmitted for controlling operations of the controlled device; and a processor disposed within the housing and configured to receive information relating to one or more of an environment of the controller and an operating condition of the controller, and to cause illumination of a portion of the plurality of user engageable interfaces based upon received information, wherein the environment comprises one or more of ambient light, time of day, weather conditions, ambient sound level, or premises security state, or a combination thereof, and wherein the operating state 
comprises one or more of a location, an orientation, a relative position to a controlled device, and a use of the controller. 11. The controller of claim 10, wherein user engageable interfaces comprise one or more of back-lit keys and a touch screen. 12. The controller of claim 10, wherein the illumination of the portion of the plurality of user engageable interfaces comprises illumination in a pre-determined illumination pattern. 13. The controller of claim 10, wherein the information is received from a sensor disposed adjacent the housing of the controller, a device configured to be controlled by the controller, a premises security system, a communication gateway, or a network device, or a combination thereof. 14. A method comprising: receiving, by a computing device, first information relating to a current environment of a controller, wherein the controller comprises a plurality of user engageable interfaces, and wherein at least a subset of the user engageable interfaces is configured to be independently and selectively illuminated; receiving, by the computing device, second information relating to a current operating state of one or more of the controller and a controlled device; and causing selective emphasis of a portion of the plurality of user engageable interfaces based upon at least the first information and the second information. 15. The method of claim 14, wherein the first information comprises ambient light level, time of day, weather conditions, ambient sound level, or premises security state, or a combination thereof. 16. The method of claim 14, wherein the current operating state comprises one or more of a location, an orientation, a relative position of the controller and the controlled device, and a use of the one or more of the controller and the controlled device. 17. 
The method of claim 14, wherein the first information is received from a sensor co-located with the controller, a sensor co-located with the controlled device, a second device configured to be controlled by the controller, a premises security system, a communication gateway, or a network device, or a combination thereof. 18. The method of claim 14, wherein the causing emphasis of the portion of the plurality of user engageable interfaces comprises one or more of causing illumination of the portion of the plurality of user engageable interfaces in a pre-determined illumination pattern and causing re-sizing of the portion of the plurality of user engageable interfaces relative to another portion of the plurality of user engageable interfaces. 19. The method of claim 18, wherein the pre-determined illumination pattern represents a notification message. 20. The method of claim 14, further comprising determining, by the computing device, one or more interfaces of the plurality of user engageable interfaces that are necessary based upon at least the first information and the second information, wherein the portion of the plurality of user engageable interfaces that is caused to be emphasized comprises the one or more interfaces of the plurality of user engageable interfaces determined to be necessary.
2,600
9,928
9,928
15,143,183
2,652
A method includes determining a first feature of a first audio signal at a first location in a signal processing path and determining, using the first feature, a first environmental classification of the first signal. Further, the method includes, based on the first environmental classification, enabling, modifying or disabling one or both of a first signal processing mode at the first location and a second signal processing mode at a second location in the signal processing path. The method also includes determining a second feature of a second audio signal at the second location and determining, using the second feature, a second environmental classification of the second signal. Further, the method includes, based on the second environmental classification, enabling, modifying or disabling one or both of the first signal processing mode at the first location and the second signal processing mode at the second location.
1. A method performed by a hearing system for processing audio signals, comprising: analyzing a first audio signal and a second audio signal to determine a first directional signal and a second directional signal; analyzing at least one of the first directional signal to determine a first sound environment or the second directional signal to determine a second sound environment; and based on at least one of the first sound environment or the second sound environment, determining a third audio signal. 2. The method of claim 1, wherein the first directional signal is a front directional signal and the second directional signal is a rear directional signal. 3. The method of claim 1, wherein the third signal is substantially similar to the first directional signal or the second directional signal. 4. The method of claim 1, wherein the generating the third audio signal includes at least one of: responsive to the first sound environment being classified as noise, performing a first audio signal processing mode on the first directional signal, wherein the first audio signal processing mode reduces noise from the first directional signal; or responsive to the second sound environment being classified as noise, performing a second audio signal processing mode on the second directional signal, wherein the second audio signal processing mode reduces noise from the second directional signal. 5. The method of claim 1, further comprising: based on at least one of the first sound environment or the second sound environment, performing at least one of processing the first directional signal to generate a third directional signal that is different from the first directional signal, or processing the second directional signal to generate a fourth directional signal that is different from the second directional signal, and wherein the third audio signal is based on at least one of the third directional signal or the fourth directional signal. 6. 
The method of claim 5, further comprising: analyzing the third audio signal to determine a third sound environment; and based on the third sound environment, processing the third audio signal to generate a fourth audio signal that is different from the third audio signal. 7. The method of claim 1, further comprising receiving the first audio signal at a first input to the hearing system, and receiving the second audio signal at a second input to the hearing system. 8. The method of claim 7, wherein the first input and the second input include at least one omnidirectional microphone input. 9. The method of claim 1, wherein the third signal represents at least one of the first directional signal or the second directional signal. 10. The method of claim 1, further comprising: determining a first feature of the first directional signal; and determining a second feature of the second directional signal; wherein the analyzing at least one of the first directional signal or the second directional signal to determine the first sound environment or the second sound environment is based on at least one of the first feature or the second feature. 11. A method for processing audio signals comprising: analyzing a first audio signal and a second audio signal to determine a first directional signal and a second directional signal; analyzing at least one of the first directional signal to determine a first sound environment or the second directional signal to determine a second sound environment; and based on at least one of the first sound environment or the second sound environment, at least one of performing a first audio signal processing mode, or performing a second audio signal processing mode. 12. The method of claim 11, further comprising determining a third audio signal that represents at least one of the first directional signal or the second directional signal. 13. 
The method of claim 12, further comprising, based on the third audio signal, generating one or more stimulation signals that represent at least one of the first audio signal or the second audio signal, wherein the one or more stimulation signals are used by a hearing prosthesis to generate an output. 14. The method of claim 12, further comprising: analyzing the third audio signal to determine a third sound environment; and based on the third sound environment, performing a third audio signal processing mode on the third audio signal. 15. The method of claim 11, wherein performing the first audio signal processing mode comprises performing the first audio processing mode on the first directional signal or wherein performing the second audio signal processing mode comprises performing the second audio processing mode on the second directional signal. 16. The method of claim 11, wherein the first directional signal is a front directional signal and the second directional signal is a rear directional signal. 17. The method of claim 11, further comprising receiving the first audio signal at a first input to a hearing prosthesis, and receiving the second audio signal at a second input to the hearing prosthesis. 18. The method of claim 17, wherein the first input and the second input include at least one omnidirectional microphone input. 19. A hearing prosthesis system comprising: one or more signal processing components configured to: analyze a first audio signal and a second audio signal to determine a first directional signal and a second directional signal; analyze at least one of the first directional signal to determine a first sound environment or the second directional signal to determine a second sound environment; and based on at least one of the first sound environment or the second sound environment, generate a third audio signal. 20. 
The hearing prosthesis system of claim 19, wherein the one or more signal processing components are further configured to, based on at least one of the first sound environment or the second sound environment, at least one of perform a first audio signal processing mode on the first directional signal, or perform a second audio signal processing mode on the second directional signal.
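The claims above describe classifying each directional signal's sound environment and then enabling a processing mode per signal before producing a third output signal. A minimal sketch of that flow, with entirely hypothetical feature and mode choices (a variance-based noise/speech label and simple attenuation standing in for a real classifier and real noise reduction):

```python
# Hypothetical sketch of the two-stage, environment-driven processing the
# claims describe: classify each directional signal, apply a noise-reduction
# mode where warranted, then mix into a third audio signal (claims 1 and 4).
# The feature, threshold, and mode below are illustrative assumptions.

def classify_environment(signal):
    """Label a signal 'noise' or 'speech' from a crude energy-variance feature."""
    mean = sum(signal) / len(signal)
    variance = sum((s - mean) ** 2 for s in signal) / len(signal)
    return "speech" if variance > 0.1 else "noise"

def reduce_noise(signal, factor=0.5):
    """Stand-in noise-reduction processing mode: simple attenuation."""
    return [s * factor for s in signal]

def process(front, rear):
    """Apply a processing mode to each directional signal based on its own
    sound environment, then mix into a third audio signal."""
    if classify_environment(front) == "noise":
        front = reduce_noise(front)
    if classify_environment(rear) == "noise":
        rear = reduce_noise(rear)
    return [(f + r) / 2 for f, r in zip(front, rear)]
```

The point of the sketch is the independence of the two decisions: the front and rear signals are classified separately, so a noisy rear path can be attenuated while a speech-dominated front path passes through untouched.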
2,600
9,929
9,929
14,294,934
2,641
Providing on a graphic display, of a cellular phone, an audio-visual interface (AVI) with a global positioning system (GPS) position of another cellular phone is disclosed. The cellular phone may have an authorized security level to see the another cellular phone's position on a real-time map on the AVI. The AVI on the cellular phone may be configured to initiate a communication over a cellular network with the another cellular phone.
1. A cellular phone comprising: a cellular data network interface device; circuitry configured to provide, on a graphic display, an audio-visual interface (AVI), wherein a global positioning system (GPS) position of at least one other cellular phone with an authorized security level is displayed on a real-time map on the AVI; wherein the AVI provides a log having identifying information of the at least one other cellular phone and the AVI initiates a communication function with the at least one other cellular phone; and wherein the communication function utilizes the cellular data network interface device. 2. The cellular phone of claim 1 further comprising: circuitry configured to send, via the cellular data network interface device, payment to a device; and circuitry configured to receive, via the cellular data network interface device, a virtual receipt from the device. 3. The cellular phone of claim 1 further comprising: circuitry configured to track movement of the cellular phone. 4. The cellular phone of claim 1 further comprising: circuitry configured to indicate when the at least one other cellular phone is within a vicinity of the cellular phone. 5. The cellular phone of claim 4 wherein the indication is provided when the at least one other cellular phone is within the vicinity during a predetermined time of day. 6. The cellular phone of claim 1, wherein another log comprises user activity information of the cellular phone. 7. The cellular phone of claim 1, wherein the cellular phone is part of a vehicle. 8. The cellular phone of claim 1, wherein the authorized security level allows control of functions of the cellular phone subsequent to the cellular phone allowing control. 9. 
A method performed by a cellular phone, the method comprising: providing, on a graphic display by the cellular phone, an audio-visual interface (AVI), wherein a global positioning system (GPS) position of at least one other cellular phone with an authorized security level is displayed on a real-time map on the AVI; wherein the AVI provides a log having identifying information of the at least one other cellular phone and the AVI initiates a communication function with the at least one other cellular phone; and wherein the communication function utilizes a cellular data network interface device. 10. The method of claim 9 further comprising: sending, by the cellular phone via the cellular data network interface device, payment to a device; and receiving, by the cellular phone via the cellular data network interface device, a virtual receipt from the device. 11. The method of claim 9 further comprising: tracking movement of the cellular phone. 12. The method of claim 9 further comprising: indicating, by the cellular phone, when the at least one other cellular phone is within a vicinity of the cellular phone. 13. The method of claim 12, wherein the indication is provided when the at least one other cellular phone is within the vicinity during a predetermined time of day. 14. The method of claim 9, wherein another log comprises user activity information of the cellular phone. 15. The method of claim 9, wherein the cellular phone is part of a vehicle. 16. The method of claim 9, wherein the authorized security level allows control of functions of the cellular phone subsequent to the cellular phone allowing control. 17. 
A method performed by an administrative device, the method comprising: sending, by the administrative device via a cellular data network to another cellular phone, a global positioning system (GPS) position of at least one cellular phone with an authorized security level, wherein the position is configured to be displayed on a real-time map on an audio-visual interface (AVI); providing, by the administrative device to the another cellular phone via the cellular data network, a log having identifying information of the at least one cellular phone; and providing, by the administrative device via the cellular data network, a communication function for communication between the at least one cellular phone and the another cellular phone. 18. The method of claim 17 further comprising: receiving, by the administrative device via the cellular data network, payment from the another cellular phone; and sending, by the administrative device via the cellular data network, a virtual receipt to the another cellular phone. 19. The method of claim 17 further comprising: indicating, by the administrative device to the another cellular phone via the cellular data network, when the at least one cellular phone is within a vicinity of the another cellular phone. 20. The method of claim 19, wherein the indication is provided when the at least one cellular phone is within the vicinity during a predetermined time of day.
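Claims 4-5 and 12-13 condition the vicinity indication on two checks: the other phone's GPS position must fall within a radius, and the current time must fall in a predetermined time-of-day window. A small sketch of that gating logic, with the radius, window, and function names chosen for illustration only:

```python
# Illustrative sketch of the vicinity indication in claims 4-5 / 12-13:
# indicate only when the other phone is within a radius AND the current
# time falls in a predetermined window. Radius and window are assumptions.
import math
from datetime import time

def within_vicinity(pos_a, pos_b, radius_km=1.0):
    """Haversine great-circle distance check between two (lat, lon) positions."""
    lat1, lon1 = map(math.radians, pos_a)
    lat2, lon2 = map(math.radians, pos_b)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return distance_km <= radius_km

def should_indicate(pos_a, pos_b, now, window=(time(8, 0), time(18, 0))):
    """Gate the indication on both proximity and the time-of-day window."""
    return within_vicinity(pos_a, pos_b) and window[0] <= now <= window[1]
```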
2,600
9,930
9,930
14,080,618
2,658
Contact centers may benefit from routing messages to agents who have similar, or complementary, attributes as the customer of the message. In a text message, certain message attributes provide artifacts that may be common to one particular customer attribute. Messages containing that particular message attribute provide a derived customer attribute and the message routed accordingly. In addition, agents responding to a customer may be provided with guidance to ensure their response is appropriate for the derived customer attribute of the customer.
1. A method, comprising: receiving a message from a customer, the message having message elements; deriving a customer attribute based on a semantic analysis of the message elements; selecting an agent from a plurality of agents in a contact center to interact with the customer based, at least in part, on the derived customer attribute; and enabling a communication session between the selected agent and customer. 2. The method of claim 1, wherein the derived customer attribute is a degree of estimated matching to the derived customer attribute. 3. The method of claim 1, wherein the derived customer attribute comprises a formality-informality conversational preference. 4. The method of claim 1, wherein the derived customer attribute comprises an estimated educational level of the customer. 5. The method of claim 1, wherein the derived customer attribute comprises an estimated gender. 6. The method of claim 1, wherein the derived customer attribute comprises a native language. 7. The method of claim 1, wherein the derived customer attribute comprises a technical expertise level. 8. The method of claim 1, further comprising: monitoring the agent's reply to the message; analyzing the agent's reply; and upon determining the analyzed agent's reply is not in accord with the derived customer attribute, notifying the agent that the agent's reply is not in accord with the derived customer attribute. 9. A non-transitory computer readable medium with instructions, that when executed by a computer, cause the computer to perform: receiving a number of messages from customers having a customer attribute; determining a correlation between a semantic attribute in the number of messages and the customer attribute; and storing the semantic attribute and correlated customer attribute in a database. 10. The instructions of claim 9, wherein, at least one of the number of messages is a transcript of a voice message. 11. 
The instructions of claim 9, wherein storing the semantic attribute and correlated customer attribute, further comprises, storing an associated degree of correlation. 12. The instructions of claim 9, further comprising: identifying a known customer attribute from a subsequent message having the semantic attribute; and updating the stored semantic attribute and correlated customer attribute in accord with the known customer attribute. 13. A system, comprising: a processor; and a communication interface; and wherein the communication interface is operable to receive a message from a customer; wherein the processor is operable to determine whether the message has a message semantic attribute; wherein the processor is further operable to derive a message customer attribute from the message semantic attribute; and wherein the processor is further operable to make a routing decision for a communication exchange between the customer and an agent based in part on the derived message customer attribute. 14. The system of claim 13, further comprising: a network interface; and wherein the processor is further operable to cause the message to be sent to the agent via the network interface. 15. The system of claim 14, further comprising: the processor sending a query to the agent prompting a response to the agent's assessment of the accuracy of the association of the message customer attribute and deriving the message customer attribute from the message semantic attribute in accord with the accuracy. 16. The system of claim 13, further comprising: a database having a record including a stored semantic attribute and a stored customer attribute; and the processor derives the derived customer attribute upon determining the message semantic attribute matches the stored semantic attribute. 17. 
The system of claim 16, wherein: the processor accesses a discovered association between the derived customer attribute and the message semantic attribute and updates the stored semantic attribute in accord with the discovered association. 18. The system of claim 16 wherein the processor is operable to determine whether the message has the message semantic attribute upon parsing a number of portions of the message and comparing ones of the number of portions to the stored semantic attribute. 19. The system of claim 13, wherein the derived message customer attribute is at least one of formality, education, gender, domain expertise, first language fluency, and second language fluency different from the first language. 20. The system of claim 13, wherein the processor derives a message customer attribute from the message semantic attribute further comprising a correlation factor between the same.
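The contact-center claims describe a pipeline: match semantic artifacts in the message against a stored table of semantic-attribute/customer-attribute pairs with a correlation factor, derive customer attributes, and route to the best-matching agent. A sketch of that pipeline follows; the table contents, attribute names, and scoring rule are illustrative assumptions, not the patented implementation:

```python
# Illustrative sketch (names hypothetical): look up semantic artifacts in a
# stored table mapping each artifact to a customer attribute value plus a
# correlation factor (claims 9, 11, 20), derive attributes, and route the
# message to the agent whose profile matches the most derived values.

SEMANTIC_TABLE = {
    "hiya": ("formality", "informal", 0.8),
    "dear sir": ("formality", "formal", 0.9),
    "stack trace": ("expertise", "technical", 0.7),
}

def derive_attributes(message):
    """Return {attribute: (value, correlation)}, keeping the strongest
    correlation when several artifacts imply the same attribute."""
    text = message.lower()
    derived = {}
    for artifact, (attr, value, corr) in SEMANTIC_TABLE.items():
        if artifact in text and corr > derived.get(attr, ("", 0.0))[1]:
            derived[attr] = (value, corr)
    return derived

def route(message, agents):
    """Pick the agent profile sharing the most derived attribute values."""
    derived = derive_attributes(message)
    def score(agent):
        return sum(agent.get(a) == v for a, (v, _) in derived.items())
    return max(agents, key=score)
```

For example, a message opening with "Hiya" and mentioning a "stack trace" would be routed to an agent profiled as informal and technical rather than a formal, non-technical one.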
2,600
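The claims above describe learning a correlation between a semantic attribute observed in customer messages and a customer attribute, storing the pair with a degree of correlation, and routing a communication exchange based on the derived attribute. The following is a minimal, hypothetical sketch of that flow; the class, function, and attribute names (and the trivial semantic analysis) are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: correlate a semantic attribute observed in customer
# messages with a known customer attribute, store the pair with a degree
# of correlation, and use the store to derive an attribute for routing.

class SemanticAttributeStore:
    def __init__(self):
        # semantic attribute -> {customer attribute: observation count}
        self.table = {}

    def observe(self, semantic_attr, customer_attr):
        """Record one observed association (claims 9 and 12)."""
        counts = self.table.setdefault(semantic_attr, {})
        counts[customer_attr] = counts.get(customer_attr, 0) + 1

    def derive(self, semantic_attr):
        """Return (customer attribute, correlation factor) or None (claim 11)."""
        counts = self.table.get(semantic_attr)
        if not counts:
            return None
        total = sum(counts.values())
        attr, n = max(counts.items(), key=lambda kv: kv[1])
        return attr, n / total

def message_semantic_attrs(message):
    # Stand-in for real semantic analysis: an emoticon suggests informality,
    # very long words suggest domain expertise.
    attrs = []
    if ":)" in message:
        attrs.append("emoticon")
    if any(len(w) > 12 for w in message.split()):
        attrs.append("long_words")
    return attrs

def route(store, message, agents):
    """Pick the first agent whose skills cover a derived attribute (claim 13)."""
    for semantic_attr in message_semantic_attrs(message):
        derived = store.derive(semantic_attr)
        if derived:
            attr, _factor = derived
            for agent in agents:
                if attr in agent["skills"]:
                    return agent["name"]
    return agents[0]["name"]  # default routing when nothing is derived

store = SemanticAttributeStore()
store.observe("emoticon", "informal")
store.observe("emoticon", "informal")
store.observe("emoticon", "formal")
agents = [{"name": "alice", "skills": {"formal"}},
          {"name": "bob", "skills": {"informal"}}]
print(route(store, "hi there :)", agents))
```

The per-attribute count table also gives the "degree of correlation" of claim 11 for free, as the fraction of observations supporting the majority attribute.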
9,931
9,931
14,292,159
2,685
The present system and method are particularly useful for remotely controlling a device having one or more menus via a remote touch interface having at least an unstructured primary input area. A user can provide inputs to a touch interface without needing to view the interface and yet still achieve the desired response from the remotely controlled device. The primary input area of the touch interface may or may not have a background display, such as on a touch screen, but the primary input area of the touch interface should be unstructured and should not have independently selectable items, buttons, icons, or the like. Since the touch interface is unstructured, the user does not have to identify any selectable buttons. Instead, the user can input a gesture into the interface and watch the remotely controlled device respond. The system does not provide any other visual confirmation.
1. (canceled) 2. A method, comprising: at a device with a touch-sensitive surface: receiving a touch input at a location on the touch-sensitive surface; determining a context of an interface; in accordance with a determination that the interface is in a first context, performing a first action in response to the received touch at the location on the touch-sensitive surface; and in accordance with a determination that the interface is in a second context, different from the first context, performing a second action, different than the first action, in response to the received touch at the location on the touch-sensitive surface. 3. The method of claim 2, wherein the device is configured to wirelessly communicate with a remotely controlled device, and wherein the interface is a graphical user interface of the remotely controlled device. 4. The method of claim 2, wherein the first context comprises a menu navigation context and the second context comprises a media playback context. 5. The method of claim 4, wherein the received touch comprises a single digit tap input, wherein the first action comprises selecting an item in a main selection area of the interface, and wherein the second action comprises toggling a play/pause function. 6. The method of claim 4, wherein the received touch comprises a single digit drag input, wherein the first action comprises moving a selected item, and wherein the second action comprises shuttle transport. 7. The method of claim 4, wherein the received touch comprises a single digit swipe input, wherein the first action comprises a scroll operation, and wherein the second action comprises at least one of skipping forward in media playback, skipping backward in media playback, displaying a chapter selection menu, and cycling through displayed information. 8. 
The method of claim 2, further comprising: receiving a second touch input at a different location on the touch-sensitive surface; and performing a third action in response to the received second touch at the different location on the touch-sensitive surface, regardless of the context of the interface. 9. A computing device, comprising: a touch-sensitive screen configured to receive a touch input at an unstructured touch sensitive area; a processor coupled to the touch-sensitive screen, the processor configured to: determine a context of an interface; in accordance with a determination that the interface is in a first context, perform a first action in response to the received touch at the unstructured touch sensitive area; and in accordance with a determination that the interface is in a second context, different from the first context, perform a second action, different than the first action, in response to the received touch at the unstructured touch sensitive area. 10. The computing device of claim 9, further comprising: a communication interface configured to wirelessly communicate with a remotely controlled device, wherein the interface is a graphical user interface of the remotely controlled device. 11. The computing device of claim 9, wherein the first context comprises a menu navigation context and the second context comprises a media playback context. 12. The computing device of claim 11, wherein the received touch comprises a single digit tap input, wherein the first action comprises selecting an item in a main selection area of the interface, and wherein the second action comprises toggling a play/pause function. 13. The computing device of claim 11, wherein the received touch comprises a single digit drag input, wherein the first action comprises moving a selected item, and wherein the second action comprises shuttle transport. 14. 
The computing device of claim 11, wherein the received touch comprises a single digit swipe input, wherein the first action comprises a scroll operation, and wherein the second action comprises at least one of skipping forward in media playback, skipping backward in media playback, displaying a chapter selection menu, and cycling through displayed information. 15. The computing device of claim 9, wherein the touch-sensitive screen is further configured to receive a second touch input at a structured touch sensitive area; and wherein the processor is further configured to perform a third action in response to the received second touch at the structured touch sensitive area, regardless of the context of the interface. 16. A non-transitory computer-readable medium storing program code executable by a processor of a computing device to cause the computing device to: receive a touch input at a location on a touch-sensitive surface of the computing device; determine a context of an interface; interpret the received touch based on the determination of the context of the interface, wherein the received touch is interpreted as a first command in accordance with a determination that the interface is in a first context, and wherein the received touch is interpreted as a second command, different from the first command, in accordance with a determination that the interface is in a second context, different from the first context. 17. The non-transitory computer-readable medium of claim 16, wherein the program code is further executable to cause the computing device to wirelessly communicate with a remotely controlled device, wherein the interface is a graphical user interface of the remotely controlled device. 18. The non-transitory computer-readable medium of claim 16, wherein the first context comprises a menu navigation context and the second context comprises a media playback context. 19. 
The non-transitory computer-readable medium of claim 18, wherein the received touch comprises a single digit tap input, wherein the first command comprises a command to select an item in a main selection area of the interface, and wherein the second command comprises a command to toggle a play/pause function. 20. The non-transitory computer-readable medium of claim 18, wherein the received touch comprises a single digit swipe input, wherein the first command comprises a command to perform a scroll operation, and wherein the second command comprises a command to perform at least one of skipping forward in media playback, skipping backward in media playback, displaying a chapter selection menu, and cycling through displayed information. 21. The non-transitory computer-readable medium of claim 16, wherein the program code is further executable to cause the computing device to: receive a second touch input at a different location on the touch-sensitive surface; and perform a third action in response to the received second touch at the different location on the touch-sensitive surface, regardless of the context of the interface.
2,600
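The claims of this record map the same touch gesture to different actions depending on the interface context (menu navigation vs. media playback). A minimal sketch of that dispatch, using the gesture/action pairs recited in claims 5 through 7; the context constants and function names are illustrative assumptions.

```python
# Hedged sketch of the context-dependent gesture mapping described in the
# claims: one touch input on an unstructured surface yields different
# commands depending on the interface context.

MENU, PLAYBACK = "menu_navigation", "media_playback"

# (context, gesture) -> command, following claims 5-7
GESTURE_MAP = {
    (MENU, "tap"): "select_item",
    (PLAYBACK, "tap"): "toggle_play_pause",
    (MENU, "drag"): "move_selected_item",
    (PLAYBACK, "drag"): "shuttle_transport",
    (MENU, "swipe"): "scroll",
    (PLAYBACK, "swipe"): "skip_forward",
}

def interpret(context, gesture):
    """Interpret a received touch according to the interface context."""
    try:
        return GESTURE_MAP[(context, gesture)]
    except KeyError:
        raise ValueError(f"unmapped gesture {gesture!r} in context {context!r}")

# The same tap produces different commands in different contexts.
print(interpret(MENU, "tap"))      # select_item
print(interpret(PLAYBACK, "tap"))  # toggle_play_pause
```

Claim 8's context-independent third action would sit outside this table: a touch at a reserved (structured) location dispatches directly, without consulting the context.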
9,932
9,932
15,411,054
2,632
A transmitter includes a plurality of user-specific channels, with each user-specific channel associated with a different set of user equipment (UE) receive antennas. For precoding, the transmitter generates a baseline channel matrix reflecting the characteristics of the communication medium employed to transmit data to the different user equipment (UEs). For each user-specific channel, the transmitter generates a complementary channel matrix based on the baseline channel matrix, then performs matrix decomposition to eliminate selected terms of the complementary channel matrix that represent interference from other communication channels of the transmitter. The transmitter can reuse portions of one rotational matrix set generated for one channel to generate the rotational matrix sets for one or more other channels.
1. A method comprising: generating, at a transmitter device, a first set of rotational matrices for a set of channels of the transmitter device; precoding first data for transmission based on the first set of rotational matrices to generate first precoded data; transmitting the first precoded data via a first channel of the set of channels at the transmitter device; generating, at the transmitter device, a second set of rotational matrices for the set of channels based on the first set of rotational matrices; precoding second data for transmission based on the second set of rotational matrices to generate second precoded data; and transmitting the second precoded data via a second channel of the set of channels at the transmitter device. 2. The method of claim 1, wherein generating the second set of rotational matrices comprises using a first subset of the first set of rotational matrices for a corresponding subset of the second set of rotational matrices. 3. The method of claim 2, wherein generating the second set of rotational matrices comprises using a plurality of matrices of the first set of rotational matrices for corresponding matrices of the second set of rotational matrices. 4. The method of claim 1, further comprising: generating, at the transmitter device, a third set of rotational matrices for the set of channels based on the first set of rotational matrices; precoding third data for transmission based on the third set of rotational matrices to generate third precoded data; and transmitting the third precoded data via a third channel of the set of channels at the transmitter device. 5. 
The method of claim 4, wherein: generating the second set of rotational matrices comprises using a first plurality of matrices of the first set of rotational matrices for matrices of the second set of rotational matrices; and generating the third set of rotational matrices comprises using a second plurality of rotational matrices of the first set of rotational matrices for corresponding matrices of the third set of rotational matrices, the second plurality of rotational matrices different from the first plurality of rotational matrices. 6. The method of claim 1, further comprising: identifying a channel order for the set of channels of the transmitter based on a set of network characteristics; and generating the second set of rotational matrices based on the first set of rotational matrices in response to identifying a proximity of the first channel to the second channel in the channel order. 7. The method of claim 1, further comprising: generating a third set of rotational matrices at the transmitter device independent of the first set of rotational matrices; precoding third data for transmission based on the third set of rotational matrices to generate third precoded data; concurrent with transmitting the first precoded data, transmitting the third precoded data via a third channel of the set of channels at the transmitter device; generating, at the transmitter device, a fourth set of rotational matrices based on the third set of rotational matrices; precoding fourth data for transmission based on the fourth set of rotational matrices to generate fourth precoded data; and transmitting the fourth precoded data via a fourth channel of the set of channels at the transmitter device. 8. 
A method, comprising: identifying at a transmitter device a channel order for a plurality of channels of the transmitter device; generating for a first channel of the plurality of channels a first set of rotational matrices; precoding first data for transmission based on the first set of rotational matrices to generate first precoded data; transmitting the first precoded data via the first channel; in response to identifying a second channel of the plurality of channels based on a proximity of the second channel to the first channel in the channel order: generating for the second channel a second set of rotational matrices based on the first set of rotational matrices; precoding second data for transmission based on the second set of rotational matrices to generate second precoded data; and transmitting the second precoded data via the second channel at the transmitter device. 9. The method of claim 8, wherein the first channel is an initial channel in the channel order, and the second channel immediately follows the first channel in the channel order. 10. The method of claim 9, further comprising: in response to identifying a third channel of the plurality of channels based on a proximity of the third channel to the first channel in the channel order: generating for the third channel a third set of rotational matrices based on the first set of rotational matrices; precoding third data for transmission based on the third set of rotational matrices to generate third precoded data; and transmitting the third precoded data via the third channel at the transmitter device. 11. The method of claim 10, wherein the third channel immediately follows the second channel in the channel order. 12. 
The method of claim 8, further comprising: in response to identifying that a third channel is at a threshold position in the channel order: modifying the channel order to generate a modified channel order; generating, for an initial channel of the plurality of channels in the modified channel order, a third set of rotational matrices independent of the first set of rotational matrices; and precoding third data for transmission based on the third set of rotational matrices to generate third precoded data. 13. The method of claim 12, further comprising: in response to identifying a fourth channel of the plurality of channels based on a proximity of the fourth channel to the initial channel in the modified channel order: generating for the fourth channel a fourth set of rotational matrices based on the third set of rotational matrices; and precoding fourth data for transmission based on the fourth set of rotational matrices to generate fourth precoded data. 14. The method of claim 13, wherein modifying the channel order comprises reversing the channel order. 15. A transmitter, comprising: a plurality of channels including a first channel and a second channel; a data precode module configured to: generate a first set of rotational matrices for the first channel; precode first data for transmission based on the first set of rotational matrices to generate first precoded data; transmit the first precoded data via the first channel; generate a second set of rotational matrices based on the first set of rotational matrices; precode second data for transmission based on the second set of rotational matrices to generate second precoded data; and transmit the second precoded data via the second channel. 16. The transmitter of claim 15, wherein the data precode module is configured to generate the second set of rotational matrices by using a first subset of the first set of rotational matrices for a corresponding subset of the second set of rotational matrices. 17. 
The transmitter of claim 16, wherein generating the second set of rotational matrices comprises using a plurality of matrices of the first set of rotational matrices for corresponding matrices of the second set of rotational matrices. 18. The transmitter of claim 15, wherein the data precode module is configured to: generate a third set of rotational matrices based on the first set of rotational matrices; precode third data for transmission based on the third set of rotational matrices to generate third precoded data; and transmit the third precoded data via a third channel. 19. The transmitter of claim 18, wherein: the data precode module is configured to generate the second set of rotational matrices by using a first plurality of matrices of the first set of rotational matrices for matrices of the second set of rotational matrices; and the data precode module is configured to generate the third set of rotational matrices by using a second plurality of rotational matrices of the first set of rotational matrices for corresponding matrices of the third set of rotational matrices, the second plurality of rotational matrices different from the first plurality of rotational matrices. 20. The transmitter of claim 15, wherein the data precode module is configured to: identify a channel order for a plurality of channels of the transmitter based on a set of network characteristics; and generate the second set of rotational matrices based on the first set of rotational matrices in response to identifying a proximity of the first channel to the second channel in the channel order.
A transmitter includes a plurality of user-specific channels, with each user specific channel associated with a different set of user equipment (UE) receive antennas. For precoding, the transmitter generates a baseline channel matrix reflecting the characteristics of the communication medium employed to transmit data to the different user equipment (UEs). For each user-specific channel, the transmitter generates a complementary channel matrix based on the baseline channel matrix, then performs matrix decomposition to eliminate selected terms of the complementary channel matrix that interfere from other communication channels of the transmitter. The transmitter can reuse portions of one rotational matrix set generated for one channel to generate the rotational matrix sets for one or more other channels.1. A method comprising: generating at a transmitter device, a first set of rotational matrices for a set of channels of the transmitter device; precoding first data for transmission based on the first set of rotational matrices to generate first precoded data; transmitting the precoded data via a first channel of the set of channels at the transmitter device; generating, at the transmitter device, a second set of rotational matrices for the set of channels based on the first set of rotational matrices; precoding second data for transmission based on the second set of rotational matrices to generate second precoded data; and transmitting the second precoded data via a second channel of the set of channels at the transmitter device. 2. The method of claim 1, wherein generating the second set of rotational matrices comprises using a first subset of the first set of rotational matrices for a corresponding subset of the second set of rotational matrices. 3. 
The method of claim 2, wherein generating the second set of rotational matrices comprises using a plurality of matrices of the first set of rotational matrices for corresponding matrices of the second set of rotational matrices. 4. The method of claim 1, further comprising: generating, at the transmitter device, a third set of rotational matrices for the set of channels based on the first set of rotational matrices; precoding third data for transmission based on the third set of rotational matrices to generate third precoded data; and transmitting the third precoded data via a third channel of the set of channels at the transmitter device. 5. The method of claim 4, wherein: generating the second set of rotational matrices comprises using a first plurality of matrices of the first set of rotational matrices for matrices of the second set of rotational matrices; and generating the third set of rotational matrices comprises using a second plurality of rotational matrices of the first set of rotational matrices for corresponding matrices of the third set of rotational matrices, the second plurality of rotational matrices different from the first plurality of rotational matrices. 6. The method of claim 1, further comprising: identifying a channel order for the set of channels of the transmitter based on a set of network characteristics; and generating the second set of rotational matrices based on the first set of rotational matrices in response to identifying a proximity of the first channel to the second channel in the channel order. 7. 
The method of claim 1, further comprising: generating a third set of rotational matrices at the transmitter device independent of the first set of rotational matrices; precoding third data for transmission based on the third set of rotational matrices to generate a third precoded data; concurrent with transmitting the first transmission matrix, transmitting the third precoded data via a third channel of the set of channels at the transmitter device; generating, at the transmitter device, a fourth set of rotational matrices based on the third set of rotational matrices; precoding fourth data for transmission based on the fourth set of rotational matrices to generate fourth precoded data; and transmitting the fourth precoded data via a fourth channel of the set of channels at the transmitter device. 8. A method, comprising: identifying at a transmitter device a channel order for a plurality of channels of the transmitter device; generating for a first channel of the plurality of channels a first set of rotational matrices; precoding first data for transmission based on the first set of rotational matrices to generate first precoded data; transmitting the first precoded data via the first channel; in response to identifying a second channel of the plurality of channels based on a proximity of the second channel to the first channel in the channel order: generating for the second channel a second set of rotational matrices based on the first set of rotational matrices; precoding second data for transmission based on the second set of rotational matrices to generate second precoded data; and transmitting the second precoded data via the second channel at the transmitter device. 9. The method of claim 8, wherein the first channel is an initial channel in the channel order, and the second channel immediately follows the first channel in the channel order. 10. 
The method of claim 9, further comprising: in response to identifying a third channel of the plurality of channels based on a proximity of the third channel to the first channel in the channel order: generating for the third channel a third set of rotational matrices based on the first set of rotational matrices; precoding third data for transmission based on the third set of rotational matrices to generate third precoded data; and transmitting the third precoded data via the second channel at the transmitter device. 11. The method of claim 10, wherein the third channel immediately follows the second channel in the channel order. 12. The method of claim 8, further comprising: in response to identifying a third channel is at a threshold position in the channel order: modifying the channel order to generate a modified channel order; generating, for an initial channel of the plurality of channels in the modified channel order, a third set of rotational matrices independent of the first set of rotational matrices; and precoding third data for transmission based on the third set of rotational matrices to generate third precoded data. 13. The method of claim 12, further comprising: in response to identifying a fourth channel of the plurality of channels based on a proximity of the fourth channel to the initial channel in the modified channel order: generating for the fourth channel a third set of rotational matrices based on the third set of rotational matrices; and precoding fourth data for transmission based on the third set of rotational matrices to generate third precoded data. 14. The method of claim 13, wherein modifying the channel order comprises reversing the channel order. 15. 
A transmitter, comprising: a plurality of channels including a first channel and a second channel; a data precode module configured to: generate a first set of rotational matrices for the first channel; precode first data for transmission based on the first set of rotational matrices to generate first precoded data; transmit the precoded data via the first channel; generate a second set of rotational matrices based on the first set of rotational matrices; precode second data for transmission based on the second set of rotational matrices to generate second precoded data; and transmit the second precoded data via the second channel. 16. The transmitter of claim 15, wherein the data precode module is to generate the second set of rotational matrices by using a first subset of the first set of rotational matrices for a corresponding subset of the second set of rotational matrices. 17. The transmitter of claim 16, wherein generating second rotational matrix comprises using a plurality of matrices of the first set of rotational matrices for corresponding matrices of the second set of rotational matrices. 18. The transmitter of claim 15, wherein the data precode module is configured to: generate a third set of rotational matrices based on the first set of rotational matrices; precode third data for transmission based on the third set of rotational matrices to generate third precoded data; and transmit the third precoded data via a third channel. 19. 
The transmitter of claim 18, wherein: the data precode module is configured to generate the second set of rotational matrices by using a first plurality of matrices of the first set of rotational matrices for corresponding matrices of the second set of rotational matrices; and the data precode module is configured to generate the third set of rotational matrices by using a second plurality of rotational matrices of the first set of rotational matrices for corresponding matrices of the third set of rotational matrices, the second plurality of rotational matrices different from the first plurality of rotational matrices. 20. The transmitter of claim 15, wherein the data precode module is configured to: identify a channel order for a plurality of channels of the transmitter based on a set of network characteristics; and generate the second set of rotational matrices based on the first set of rotational matrices in response to identifying a proximity of the first channel to the second channel in the channel order.
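The reuse relationship in claims 15-17, where a later channel's set of rotational matrices is derived partly by reusing a subset of an earlier channel's set, can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the 2x2 Givens rotation form, the helper names, and the angle inputs are all assumptions.

```python
import math

def rotation_matrix(theta):
    # 2x2 Givens rotation for angle theta (radians); a hypothetical
    # stand-in for whatever matrix form the transmitter actually uses
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def generate_first_set(angles):
    # one rotational matrix per angle; in a real transmitter the angles
    # would come from channel estimation (assumed here)
    return [rotation_matrix(a) for a in angles]

def derive_second_set(first_set, reuse_count, extra_angles):
    # reuse a subset of the first channel's matrices (claim 16) and
    # compute only the remainder fresh
    reused = first_set[:reuse_count]
    fresh = [rotation_matrix(a) for a in extra_angles]
    return reused + fresh

def precode(matrices, symbol_pairs):
    # apply each 2x2 matrix to the corresponding pair of data symbols
    out = []
    for m, (x, y) in zip(matrices, symbol_pairs):
        out.append((m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y))
    return out
```

The point of the reuse step is that the shared matrices need not be recomputed for the second channel when the channels are close in the channel order.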
2,600
9,933
9,933
15,709,082
2,642
A method for providing information to a third party about a driver of a vehicle having a telematics system and a VIN, the telematics system comprising a positioning module, a telematics unit, a mobile device of the driver having a unique ID, and an integrated communication device of the vehicle, includes the steps of identifying the mobile device of the driver with at least one of the integrated communication device and the telematics unit, generating data from the positioning module as the driver operates the vehicle, transmitting the generated data, the VIN, and the ID of the mobile device of the driver outside the vehicle, generating a driving behavior report from the transmitted data, VIN, and ID, and utilizing the driving behavior report to determine an insurance premium to charge the driver or the owner of the vehicle.
1. A method for providing information to a third party about a driver of a vehicle having a telematics system and a vehicle identification number (VIN), the telematics system comprising a positioning module, a telematics unit, a mobile device of the driver having a unique ID, and an integrated communication device of the vehicle, which comprises: identifying the mobile device of the driver with at least one of the integrated communication device and the telematics unit; generating data from the positioning module as the driver operates the vehicle; transmitting the generated data, the VIN, and the ID of the mobile device of the driver outside the vehicle; generating a driving behavior report from the transmitted data, VIN, and ID; and utilizing the driving behavior report to determine an insurance premium to charge at least one of the driver and an owner of the vehicle. 2. The method according to claim 1, wherein the telematics unit is embedded in the vehicle. 3. The method according to claim 1, wherein the telematics unit is separate from and retrofitted to the vehicle. 4. The method according to claim 1, wherein: the positioning module comprises at least one of a global positioning system (GPS) device and an accelerometer; and the positioning module is part of at least one of the telematics unit and the mobile device. 5. The method according to claim 4, wherein the GPS generates the data that establishes at least one of a position of the vehicle, a time when the data was generated, a velocity of the vehicle, an acceleration of the vehicle, a braking of the vehicle, and turning of the vehicle. 6. 
The method according to claim 4, wherein: an off-site entity utilizes the data generated by the GPS to establish at least one of a position of the vehicle, a time when the data was generated, a velocity of the vehicle, an acceleration of the vehicle, a braking of the vehicle, and turning of the vehicle; and the off-site entity is at least one of a telematics provider, an insurer, and a fleet operator. 7. The method according to claim 1, wherein: the integrated communication device of the vehicle comprises a Bluetooth device; and the driver pairs the mobile device of the driver with the Bluetooth device. 8. The method according to claim 1, wherein: the integrated communication device of the vehicle comprises a Bluetooth device having a highest priority phone; and the mobile device of the driver is the highest priority phone and automatically pairs with the Bluetooth device when in communications range with the Bluetooth device. 9. The method according to claim 8, wherein the driver accesses the driving behavior report to confirm an identity of the driver or to correct the identity to another person. 10. The method according to claim 1, wherein the generated data, the VIN, and the ID of the mobile device of the driver are transmitted to a receiving device outside the vehicle. 11. The method according to claim 10, wherein the receiving device is a server connected to the Internet. 12. The method according to claim 1, wherein the generated data, the VIN, and the ID of the mobile device of the driver are transmitted to a third party. 13. The method according to claim 1, wherein the third party is a telematics provider. 14. The method according to claim 1, wherein the third party is an insurer. 15. The method according to claim 1, wherein the third party is a fleet operator. 16. 
The method according to claim 1, wherein the driving behavior report indicates at least one of: individual driver performance; individual fleet driver performance; and how much taxation to charge to at least one of the driver and the owner of the vehicle.
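A toy sketch of the report-generation step (claims 1 and 16): derive acceleration and braking events from timestamped speed samples and tie them to the VIN and mobile-device ID. The thresholds, field names, and identifier strings are illustrative assumptions, not from the source.

```python
def driving_behavior_report(vin, device_id, samples):
    # samples: list of (timestamp_s, speed_mps) from the positioning module
    # thresholds are hypothetical, chosen only for illustration
    HARD_BRAKE = -3.0   # m/s^2
    HARD_ACCEL = 3.0    # m/s^2
    hard_brakes = hard_accels = 0
    top_speed = 0.0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip out-of-order or duplicate timestamps
        a = (v1 - v0) / dt
        if a <= HARD_BRAKE:
            hard_brakes += 1
        elif a >= HARD_ACCEL:
            hard_accels += 1
        top_speed = max(top_speed, v1)
    # the report keys the behavior data to the VIN and device ID, as the
    # claimed transmission step does
    return {"vin": vin, "device_id": device_id,
            "hard_brakes": hard_brakes, "hard_accels": hard_accels,
            "top_speed_mps": top_speed}
```

An insurer or fleet operator could then map counts like `hard_brakes` onto a premium schedule; that mapping is outside this sketch.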
2,600
9,934
9,934
15,529,426
2,612
Methods, software, and apparatus for application-transparent, highly available GPU computing with VM checkpointing. Guest accesses of certain GPU resources, such as MMIO resources, are trapped to keep a copy of guest context per semantics, and/or to emulate the guest access of the resources prior to submission to the GPU, while other commands relating to certain graphics memory address regions are trapped before being passed through to the GPU. The trapped commands are scanned before submission to predict: a) potential to-be-dirtied graphics memory pages, and b) the execution time of intercepted commands, so the next checkpointing can be aligned to a predicted execution time. The GPU internal states are drained by flushing the internal context/TLB/cache at the completion of submitted commands, and then a snapshot of the vGPU state is taken, based on tracked GPU state, GPU context (through GPU-specific commands), detected dirty graphics memory pages, and predicted to-be-dirtied graphics memory pages.
1-25. (canceled) 26. A method comprising: implementing at least one virtual machine (VM) on a compute platform including a central processing unit (CPU), a graphics processing unit (GPU), and graphics memory, each of the at least one virtual machine hosted by a hypervisor executed via the CPU; for each of at least one VM, trapping GPU commands submitted from the VM; emulating, using a virtual GPU associated with the VM, changes to state information for the GPU that are predicted to result when the trapped GPU commands are executed by the GPU; predicting graphics memory pages that might be dirtied via execution of trapped GPU commands by the GPU; and periodically performing a VM checkpointing operation for the VM, wherein a snapshot of changes to the state information for the GPU and a copy of the graphics memory pages that are predicted to-be-dirtied are stored as a VM checkpoint. 27. The method of claim 26, further comprising: predicting execution times of a command or batch of commands that have been trapped; and determining, based on the predicted execution times, which trapped commands to submit to the GPU prior to performing a next checkpointing operation. 28. The method of claim 26, further comprising: scanning trapped GPU commands submitted by a given VM through a command parser; and emulating accesses to at least one of GPU Input/Output (I/O) registers and one or more GPU page tables using the virtual GPU associated with the given VM. 29. The method of claim 26, further comprising: passing through certain accesses to the graphics memory and marking graphics memory pages that are predicted to-be-dirtied as a result of the certain accesses to graphics memory; and including a copy of the graphic memory pages that are predicted to-be-dirtied as a result of the certain accesses to the graphics memory in the VM checkpoint. 30. 
The method of claim 29, wherein the certain accesses to the graphics memory that are passed through include accesses to a display buffer and a command buffer in the graphics memory. 31. The method of claim 29, wherein the certain accesses to the graphics memory that are passed through include accesses to the graphics memory by the CPU. 32. The method of claim 26, wherein the GPU includes an internal context, translation look-aside buffer, and cache, the method further comprising draining GPU internal state data by flushing the internal GPU context, translation look-aside buffer and cache in connection with taking a snapshot of the changes to the state information for the GPU. 33. The method of claim 26, further comprising buffering commands submitted from a VM during a VM checkpointing operation. 34. The method of claim 26, wherein the hypervisor is a Type-1 hypervisor. 35. A tangible non-transient machine readable medium having instructions comprising a plurality of software modules stored thereon, configured to be implemented on a compute platform having a central processing unit (CPU), a graphics processing unit (GPU), and graphics memory, the compute platform further configured to execute a hypervisor on the CPU that hosts at least one virtual machine (VM) hosting a guest operating system including a graphics driver, wherein upon execution the instructions enable the compute platform to: for each of at least one VM hosting a guest operating system including a graphics driver, trap commands issued from the graphics driver of the VM to be executed by the GPU; trap accesses to predetermined GPU resources made by the graphics driver; emulate execution of the commands and accesses to the GPU resources using a virtual GPU associated with the VM, the virtual GPU including state information; track state information for the virtual GPU, pass through certain accesses from the graphics driver to the graphics memory while marking graphics memory pages modified by the 
certain accesses as dirtied; periodically perform a VM checkpointing operation for the VM, wherein a snapshot of current tracked state information for the virtual GPU and a copy of the graphics memory pages that are dirtied are stored as a VM checkpoint; and submit the trapped commands and accesses to the GPU. 36. The tangible non-transient machine readable medium of claim 35, wherein execution of the instructions further enables the compute platform to: scan the commands that are trapped via a command parser; predict graphics memory pages that may be potentially dirtied via execution of a trapped command or batch of trapped commands by the GPU; and include the content of the graphics memory pages that are predicted to be potentially dirtied as part of the VM checkpoint. 37. The tangible non-transient machine readable medium of claim 35, wherein execution of the instructions further enables the compute platform to: predict execution times of a command or batch of commands; and determine, based on the predicted execution times, which commands to submit to the GPU prior to performing a next checkpointing operation. 38. The tangible non-transient machine readable medium of claim 35, wherein execution of the instructions further enables the compute platform to: scan trapped commands submitted by a graphics driver for a given VM through a command parser; and emulate accesses to at least one of GPU Input/Output (I/O) registers and one or more GPU page tables using the virtual GPU associated with the given VM. 39. The tangible non-transient machine readable medium of claim 35, wherein execution of the instructions further enables the compute platform to: detect GPU accesses to a display buffer or command buffer; and track dirty graphics memory pages resulting from the GPU accesses to the display buffer and command buffer. 40. 
The tangible non-transient machine readable medium of claim 35, wherein execution of the instructions further enables the compute platform to buffer commands submitted to the GPU during a VM checkpointing operation. 41. The tangible non-transient machine readable medium of claim 35, wherein execution of the instructions further enables the compute platform to flush a translation look-aside buffer and cache for the GPU during each VM checkpoint. 42. The tangible non-transient machine readable medium of claim 35, wherein the plurality of modules include: a mediator module configured to be implemented in a hypervisor. 43. The tangible non-transient machine readable medium of claim 35, wherein the plurality of modules include: a mediator module configured to be hosted by a VM; and a mediator helper module associated with the mediator module configured to be implemented in the hypervisor. 44. The tangible non-transient machine readable medium of claim 43, wherein the mediator helper module includes logic for trapping commands and forwarding trapped commands to the mediator. 45. The tangible non-transient machine readable medium of claim 35, wherein the hypervisor is implemented in a Type-1 hybrid hypervisor architecture, and the mediator is implemented in a VM operating as a control domain under the Type-1 hybrid hypervisor architecture. 46. 
A system comprising: a main board on which a plurality of components are mounted and interconnected, including: a central processing unit (CPU); a graphics processing unit (GPU); memory, operatively coupled to each of the CPU and GPU; and a storage device, operatively coupled to the CPU; wherein instructions reside in at least one of the memory and storage device comprising a plurality of software modules configured to be executed by the CPU and GPU, the software modules including a hypervisor that is configured to be executed by the CPU and host at least one virtual machine (VM) hosting a guest operating system including a graphics driver, wherein upon execution the instructions enable the system to: for each of at least one VM hosting a guest operating system including a graphics driver, trap commands issued from the graphics driver of the VM to be executed by the GPU; trap accesses to predetermined GPU resources made by the graphics driver; emulate execution of the commands and accesses to the GPU resources using a virtual GPU associated with the VM, the virtual GPU including state information; track state information for the virtual GPU; pass through certain accesses from the graphics driver to the graphics memory while marking graphics memory pages modified by the certain accesses as dirtied; periodically perform a VM checkpointing operation for the VM, wherein a snapshot of current tracked state information for the virtual GPU and a copy of the graphics memory pages that are dirtied are stored as a VM checkpoint; and submit the trapped commands and accesses to the GPU. 47. 
The system of claim 46, wherein execution of the instructions further enables the system to: scan the commands that are trapped via a command parser; predict graphics memory pages that may be potentially dirtied via execution of a trapped command or batch of trapped commands by the GPU; and include the content of the graphics memory pages that are predicted to be potentially dirtied as part of the VM checkpoint. 48. The system of claim 46, wherein execution of the instructions further enables the system to: predict execution times of a command or batch of commands; and determine, based on the predicted execution times, which commands to submit to the GPU prior to performing a next checkpointing operation. 49. The system of claim 46, wherein execution of the instructions further enables the system to: scan trapped commands submitted by a graphics driver for a given VM through a command parser; and emulate accesses to at least one of GPU Input/Output (I/O) registers and one or more GPU page tables using the virtual GPU associated with the given VM. 50. The system of claim 46, wherein execution of the instructions further enables the system to: detect GPU accesses to a display buffer or command buffer; and track dirty graphics memory pages resulting from the GPU accesses to the display buffer and command buffer.
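The checkpointing flow in claims 26-29 — predict to-be-dirtied pages from trapped commands, then snapshot vGPU state together with detected and predicted dirty pages — might be sketched like this. The command dictionaries, page map, and function names are hypothetical stand-ins for a real GPU command parser and graphics memory; this is not the patented implementation.

```python
def predict_dirty_pages(command):
    # a real command parser would decode GPU commands to find target
    # addresses; here each toy command simply names the pages it may write
    return set(command.get("writes", []))

def checkpoint(vgpu_state, graphics_memory, predicted_pages, detected_dirty):
    # snapshot the tracked vGPU state plus both the detected-dirty and
    # predicted to-be-dirtied graphics memory pages (claims 26 and 29)
    pages = detected_dirty | predicted_pages
    return {
        "state": dict(vgpu_state),
        "pages": {p: graphics_memory[p] for p in pages if p in graphics_memory},
    }

def run_with_checkpoints(commands, vgpu_state, graphics_memory, detected_dirty):
    # trap each submitted command, accumulate its predicted dirty pages,
    # then take one checkpoint covering the batch
    predicted = set()
    for cmd in commands:
        predicted |= predict_dirty_pages(cmd)
    return checkpoint(vgpu_state, graphics_memory, predicted, detected_dirty)
```

Aligning the checkpoint to the predicted execution time of the batch, as the abstract describes, would add a scheduling step on top of this loop.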
2,600
9,935
9,935
14,579,780
2,685
In the specification and drawings, an apparatus for conducting hot work is described and shown with an enclosure; a hot work apparatus operable within the enclosure; and a detector located exterior of the enclosure, the detector being in detecting communication with the interior of the enclosure, such that the detector detects the presence of a condition within the enclosure. A method of conducting hot work is also described and shown.
1. A method of detecting a combustible gas entering an enclosure by detecting a pressure drop in the atmosphere within the enclosure comprising: a. transferring air from an exterior of the enclosure to an interior of the enclosure; b. detecting a level of combustible gas in or near to a source of the air transferred from the exterior of the enclosure to the interior of the enclosure; c. stopping said transferring of air from the exterior of the enclosure to the interior of the enclosure in response to said detecting a level of combustible gas; and d. detecting a pressure drop in the atmosphere within the enclosure so as to detect a combustible gas entering the enclosure. 2. The method of claim 1 further comprising conducting hot work in the enclosure. 3. The method of claim 2 further comprising terminating the hot work in the enclosure in response to said detecting a pressure drop in the atmosphere within the enclosure. 4. The method of claim 3 further comprising transmitting a signal to a controller in response to said detecting a pressure drop in the atmosphere within the enclosure. 5. The method of claim 4 further comprising transmitting a signal from a controller to a power source of the hot work. 6. The method of claim 4 wherein said terminating the hot work further comprises terminating the hot work in response to a signal from the controller.
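The control logic of claims 1-6 (purge air into the enclosure, stop the blower when gas is detected at the intake, then treat an in-enclosure pressure drop as gas ingress and cut power to the hot work) can be sketched as a small state machine. The class name and both threshold values are illustrative assumptions, not figures from the patent.

```python
# Hypothetical sketch of the claimed shutdown logic; thresholds are examples.
GAS_ALARM_PCT_LEL = 10.0   # stop air transfer above this intake gas reading
PRESSURE_DROP_PA = 25.0    # in-enclosure drop (Pa) that terminates hot work

class HotWorkController:
    def __init__(self, baseline_pa):
        self.baseline_pa = baseline_pa  # positive-pressure setpoint
        self.blower_on = True           # step (a): transferring exterior air
        self.hot_work_on = True

    def step(self, intake_gas_pct_lel, enclosure_pa):
        # Steps (b)/(c): gas at the air intake stops the transfer of air.
        if intake_gas_pct_lel >= GAS_ALARM_PCT_LEL:
            self.blower_on = False
        # Step (d): with the blower stopped, a pressure drop implies gas may
        # be entering the enclosure, so the controller terminates hot work.
        if not self.blower_on and self.baseline_pa - enclosure_pa >= PRESSURE_DROP_PA:
            self.hot_work_on = False
        return self.blower_on, self.hot_work_on

ctl = HotWorkController(baseline_pa=150.0)
ctl.step(2.0, 150.0)            # normal operation: blower and hot work on
ctl.step(12.0, 148.0)           # gas at intake: blower stops, hot work continues
state = ctl.step(12.0, 120.0)   # pressure drop detected: hot work terminated
```

The ordering matters: the pressure-drop test only becomes meaningful once the blower has stopped, since the blower itself maintains the positive pressure.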
In the specification and drawings, an apparatus for conducting hot work is described and shown with an enclosure; a hot work apparatus operable within the enclosure; and a detector located exterior of the enclosure, the detector being in detecting communication with the interior of the enclosure, such that the detector detects the presence of a condition within the enclosure. A method of conducting hot work is also described and shown.1. A method of detecting a combustible gas entering an enclosure by detecting a pressure drop in the atmosphere within the enclosure comprising: a. transferring air from an exterior of the enclosure to an interior of the enclosure; b. detecting a level of combustible gas in or near to a source of the air transferred from the exterior of the enclosure to the interior of the enclosure; c. stopping said transferring of air from the exterior of the enclosure to the interior of the enclosure in response to said detecting a level of combustible gas; and d. detecting a pressure drop in the atmosphere within the enclosure so as to detect a combustible gas entering the enclosure. 2. The method of claim 1 further comprising conducting hot work in the enclosure. 3. The method of claim 2 further comprising terminating the hot work in the enclosure in response to said detecting a pressure drop in the atmosphere within the enclosure. 4. The method of claim 3 further comprising transmitting a signal to a controller in response to said detecting a pressure drop in the atmosphere within the enclosure. 5. The method of claim 4 further comprising transmitting a signal from a controller to a power source of the hot work. 6. The method of claim 4 wherein said terminating the hot work further comprises terminating the hot work in response to a signal from the controller.
2,600
9,936
9,936
14,580,323
2,685
In the specification and drawings, an apparatus for conducting hot work is described and shown with an enclosure; a hot work apparatus operable within the enclosure; and a detector located exterior of the enclosure, the detector being in detecting communication with the interior of the enclosure, such that the detector detects the presence of a condition within the enclosure. A method of conducting hot work is also described and shown.
1. An apparatus for conducting hot work comprising: a. an enclosure; b. a hot work apparatus operable within said enclosure; and c. a detector located exterior of said enclosure, said detector being in detecting communication with an interior of said enclosure, such that said detector detects the presence of a condition within said enclosure. 2. The apparatus of claim 1 wherein said hot work apparatus is shut down in response to said detector detecting the presence of a predetermined condition within said enclosure. 3. The apparatus of claim 1 further comprising a housing located adjacent to said enclosure, the interior of said housing being fluidly connected to the interior of said enclosure, said detector being fluidly connected to the interior of said housing. 4. The apparatus of claim 3 further comprising a gap between said housing and said enclosure. 5. The apparatus of claim 3 wherein said housing is not in contact with said enclosure. 6. The apparatus of claim 3 wherein said housing is portable. 7. The apparatus of claim 3 further comprising a stand attached to said housing. 8. The apparatus of claim 3 further comprising a damper attached to said housing. 9. The apparatus of claim 3 wherein said detector comprises a first combustible gas detector. 10. The apparatus of claim 9 further comprising: a. an oxygen detector fluidly connected to the interior of said housing; and b. a pressure detector fluidly connected to the interior of said housing. 11. The apparatus of claim 10 wherein at least one of said first combustible gas detector, said oxygen detector and said pressure detector is located exterior of said housing. 12. The apparatus of claim 11 further comprising: a. a blower assembly in fluid communication with the interior of said enclosure; b. a second combustible gas detector located so as to detect the presence of a combustible gas in or near to an air intake of said blower assembly; c. a manual shutdown switch located within said enclosure; and d. 
a controller in communication with said first combustible gas detector, said second combustible gas detector, said oxygen detector, said pressure detector and said manual shutdown switch, said controller being in communication with said hot work apparatus and capable of controlling the operation of said hot work apparatus in response to a signal received from at least one of said first combustible gas detector, said second combustible gas detector, said oxygen detector, said pressure detector and said manual shutdown switch. 13. The apparatus of claim 1 wherein said hot work apparatus comprises a welding apparatus. 14. The apparatus of claim 1 further comprising an aperture extending from the interior of said enclosure to an exterior of said enclosure, said detector being fluidly connected to the interior of said enclosure through said aperture. 15. The apparatus of claim 14 further comprising a conduit between said aperture and said detector. 16. The apparatus of claim 1 further comprising a blower assembly in fluid communication with the interior of said enclosure. 17. The apparatus of claim 1 further comprising a positive pressure atmosphere within said enclosure. 18. The apparatus of claim 1 further comprising a second detector for detecting a condition exterior said enclosure. 19. The apparatus of claim 1 wherein said detector comprises a combustible gas detector. 20. The apparatus of claim 19 wherein said combustible gas detector is fluidly connected to the interior of said enclosure. 21. The apparatus of claim 19 wherein said combustible gas detector is in light communication with the interior of said enclosure. 22. The apparatus of claim 19 wherein said combustible gas detector is in infrared light communication with the interior of said enclosure. 23. The apparatus of claim 19 wherein said combustible gas detector is in light communication with air transferred from the interior of said enclosure to an exterior of said enclosure. 24. 
The apparatus of claim 19 wherein said combustible gas detector is in infrared light communication with air transferred from the interior of said enclosure to an exterior of said enclosure. 25. An apparatus for conducting hot work comprising: a. an enclosure; b. a blower assembly in fluid communication with an interior of said enclosure; and c. a means for detecting the presence of combustible gas that is within said enclosure by sampling air that is not within said enclosure. 26. The apparatus of claim 25 wherein said air that is not within said enclosure has been transferred from the interior of said enclosure. 27. The apparatus of claim 26 further comprising a welding apparatus operable within said enclosure. 28. The apparatus of claim 27 further comprising a positive pressure atmosphere within said enclosure. 29. The apparatus of claim 27 wherein said means for detecting the presence of combustible gas that is within said enclosure by sampling air that is not within said enclosure comprises a combustible gas detector located exterior of said enclosure. 30. An apparatus for conducting hot work comprising: a. an enclosure; b. a first combustible gas detector for detecting combustible gas within said enclosure; c. an oxygen detector for detecting oxygen within said enclosure; and d. a pressure detector for detecting pressure within said enclosure, at least one of said first combustible gas detector, said oxygen detector and said pressure detector being located exterior of said enclosure. 31. The apparatus of claim 30 further comprising: a. a blower assembly in fluid communication with an interior of said enclosure; b. a second combustible gas detector located so as to detect the presence of a combustible gas in or near to an air intake of said blower assembly; c. 
a welding apparatus operable within said enclosure, the operation of said welding apparatus being controllable in response to a signal generated by at least one of said first combustible gas detector, said second combustible gas detector, said oxygen detector and said pressure detector. 32. An apparatus for conducting hot work comprising: a. an enclosure; b. a welding apparatus operable at least partially within said enclosure; c. a blower assembly in fluid communication with an interior of said enclosure; d. a manual shutdown switch in communication with said welding apparatus; e. an oxygen detector fluidly connected to the interior of said enclosure and in communication with said welding apparatus; f. a pressure detector fluidly connected to the interior of said enclosure and in communication with said welding apparatus; and g. a combustible gas detector located exterior of said enclosure, said combustible gas detector being fluidly connected to the interior of said enclosure such that said combustible gas detector detects the presence of a combustible gas within said enclosure, said combustible gas detector being in communication with said welding apparatus. 33. The apparatus of claim 32 wherein said welding apparatus is shut down in response to a signal generated by at least one of said oxygen detector, said pressure detector and said combustible gas detector. 34. A method of terminating hot work within an enclosure comprising: a. conducting hot work within an enclosure; b. transferring air from an interior of the enclosure to an exterior of the enclosure; c. sampling the air transferred from the interior of the enclosure for the presence of combustible gas; and d. terminating the hot work within the enclosure in response to detection of a level of combustible gas in the air transferred from the interior of the enclosure to the exterior of the enclosure. 35. The method of claim 34 further comprising producing a positive pressure atmosphere within the enclosure. 
36. The method of claim 35 wherein said producing a positive pressure atmosphere within the enclosure further comprises transferring air from the exterior of the enclosure to the interior of the enclosure. 37. The method of claim 36 wherein said terminating hot work within the enclosure in response to detection of a level of combustible gas in the air transferred from the interior of the enclosure further comprises terminating hot work within the enclosure in response to detection of a level of combustible gas above a predefined level. 38. The method of claim 37 wherein the predefined level is at most 25% of the lower explosive limit of the combustible gas. 39. The method of claim 34 further comprising forming the enclosure at a location where hot work is to be conducted on a production platform that drills for flammable materials. 40. The method of claim 39 further comprising: a. conducting drilling operations for flammable materials on the production platform; and b. terminating drilling operations in response to detection of a level of combustible gas in the air transferred from the interior of the enclosure. 41. The method of claim 34 further comprising sampling the air transferred from the interior of the enclosure for the presence of a level of oxygen. 42. The method of claim 34 wherein said conducting hot work within an enclosure further comprises conducting welding within the enclosure.
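Claim 38's criterion (a predefined level of at most 25% of the lower explosive limit) reduces to a simple threshold comparison per gas. The helper below is a hypothetical illustration: the LEL figures are standard published values for common gases, and the shutdown fraction shown is the claim's upper bound, not a recommended operating setpoint.

```python
# Illustrative %-of-LEL shutdown check for claim 38 (hypothetical helper).
# Lower explosive limits in percent by volume (standard published figures).
LEL_PCT_BY_VOLUME = {"methane": 5.0, "propane": 2.1, "hydrogen": 4.0}

def must_terminate(gas, measured_pct_by_volume, shutdown_fraction=0.25):
    """True when sampled exhaust air meets or exceeds the predefined level."""
    return measured_pct_by_volume >= shutdown_fraction * LEL_PCT_BY_VOLUME[gas]

must_terminate("methane", 1.3)   # 1.3% vol vs 25% of 5.0% = 1.25% -> terminate
must_terminate("propane", 0.4)   # 0.4% vol vs 0.525% threshold -> continue
```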
In the specification and drawings, an apparatus for conducting hot work is described and shown with an enclosure; a hot work apparatus operable within the enclosure; and a detector located exterior of the enclosure, the detector being in detecting communication with the interior of the enclosure, such that the detector detects the presence of a condition within the enclosure. A method of conducting hot work is also described and shown.1. An apparatus for conducting hot work comprising: a. an enclosure; b. a hot work apparatus operable within said enclosure; and c. a detector located exterior of said enclosure, said detector being in detecting communication with an interior of said enclosure, such that said detector detects the presence of a condition within said enclosure. 2. The apparatus of claim 1 wherein said hot work apparatus is shut down in response to said detector detecting the presence of a predetermined condition within said enclosure. 3. The apparatus of claim 1 further comprising a housing located adjacent to said enclosure, the interior of said housing being fluidly connected to the interior of said enclosure, said detector being fluidly connected to the interior of said housing. 4. The apparatus of claim 3 further comprising a gap between said housing and said enclosure. 5. The apparatus of claim 3 wherein said housing is not in contact with said enclosure. 6. The apparatus of claim 3 wherein said housing is portable. 7. The apparatus of claim 3 further comprising a stand attached to said housing. 8. The apparatus of claim 3 further comprising a damper attached to said housing. 9. The apparatus of claim 3 wherein said detector comprises a first combustible gas detector. 10. The apparatus of claim 9 further comprising: a. an oxygen detector fluidly connected to the interior of said housing; and b. a pressure detector fluidly connected to the interior of said housing. 11. 
The apparatus of claim 10 wherein at least one of said first combustible gas detector, said oxygen detector and said pressure detector is located exterior of said housing. 12. The apparatus of claim 11 further comprising: a. a blower assembly in fluid communication with the interior of said enclosure; b. a second combustible gas detector located so as to detect the presence of a combustible gas in or near to an air intake of said blower assembly; c. a manual shutdown switch located within said enclosure; and d. a controller in communication with said first combustible gas detector, said second combustible gas detector, said oxygen detector, said pressure detector and said manual shutdown switch, said controller being in communication with said hot work apparatus and capable of controlling the operation of said hot work apparatus in response to a signal received from at least one of said first combustible gas detector, said second combustible gas detector, said oxygen detector, said pressure detector and said manual shutdown switch. 13. The apparatus of claim 1 wherein said hot work apparatus comprises a welding apparatus. 14. The apparatus of claim 1 further comprising an aperture extending from the interior of said enclosure to an exterior of said enclosure, said detector being fluidly connected to the interior of said enclosure through said aperture. 15. The apparatus of claim 14 further comprising a conduit between said aperture and said detector. 16. The apparatus of claim 1 further comprising a blower assembly in fluid communication with the interior of said enclosure. 17. The apparatus of claim 1 further comprising a positive pressure atmosphere within said enclosure. 18. The apparatus of claim 1 further comprising a second detector for detecting a condition exterior said enclosure. 19. The apparatus of claim 1 wherein said detector comprises a combustible gas detector. 20. 
The apparatus of claim 19 wherein said combustible gas detector is fluidly connected to the interior of said enclosure. 21. The apparatus of claim 19 wherein said combustible gas detector is in light communication with the interior of said enclosure. 22. The apparatus of claim 19 wherein said combustible gas detector is in infrared light communication with the interior of said enclosure. 23. The apparatus of claim 19 wherein said combustible gas detector is in light communication with air transferred from the interior of said enclosure to an exterior of said enclosure. 24. The apparatus of claim 19 wherein said combustible gas detector is in infrared light communication with air transferred from the interior of said enclosure to an exterior of said enclosure. 25. An apparatus for conducting hot work comprising: a. an enclosure; b. a blower assembly in fluid communication with an interior of said enclosure; and c. a means for detecting the presence of combustible gas that is within said enclosure by sampling air that is not within said enclosure. 26. The apparatus of claim 25 wherein said air that is not within said enclosure has been transferred from the interior of said enclosure. 27. The apparatus of claim 26 further comprising a welding apparatus operable within said enclosure. 28. The apparatus of claim 27 further comprising a positive pressure atmosphere within said enclosure. 29. The apparatus of claim 27 wherein said means for detecting the presence of combustible gas that is within said enclosure by sampling air that is not within said enclosure comprises a combustible gas detector located exterior of said enclosure. 30. An apparatus for conducting hot work comprising: a. an enclosure; b. a first combustible gas detector for detecting combustible gas within said enclosure; c. an oxygen detector for detecting oxygen within said enclosure; and d. 
a pressure detector for detecting pressure within said enclosure, at least one of said first combustible gas detector, said oxygen detector and said pressure detector being located exterior of said enclosure. 31. The apparatus of claim 30 further comprising: a. a blower assembly in fluid communication with an interior of said enclosure; b. a second combustible gas detector located so as to detect the presence of a combustible gas in or near to an air intake of said blower assembly; c. a welding apparatus operable within said enclosure, the operation of said welding apparatus being controllable in response to a signal generated by at least one of said first combustible gas detector, said second combustible gas detector, said oxygen detector and said pressure detector. 32. An apparatus for conducting hot work comprising: a. an enclosure; b. a welding apparatus operable at least partially within said enclosure; c. a blower assembly in fluid communication with an interior of said enclosure; d. a manual shutdown switch in communication with said welding apparatus; e. an oxygen detector fluidly connected to the interior of said enclosure and in communication with said welding apparatus; f. a pressure detector fluidly connected to the interior of said enclosure and in communication with said welding apparatus; and g. a combustible gas detector located exterior of said enclosure, said combustible gas detector being fluidly connected to the interior of said enclosure such that said combustible gas detector detects the presence of a combustible gas within said enclosure, said combustible gas detector being in communication with said welding apparatus. 33. The apparatus of claim 32 wherein said welding apparatus is shut down in response to a signal generated by at least one of said oxygen detector, said pressure detector and said combustible gas detector. 34. A method of terminating hot work within an enclosure comprising: a. conducting hot work within an enclosure; b. 
transferring air from an interior of the enclosure to an exterior of the enclosure; c. sampling the air transferred from the interior of the enclosure for the presence of combustible gas; and d. terminating the hot work within the enclosure in response to detection of a level of combustible gas in the air transferred from the interior of the enclosure to the exterior of the enclosure. 35. The method of claim 34 further comprising producing a positive pressure atmosphere within the enclosure. 36. The method of claim 35 wherein said producing a positive pressure atmosphere within the enclosure further comprises transferring air from the exterior of the enclosure to the interior of the enclosure. 37. The method of claim 36 wherein said terminating hot work within the enclosure in response to detection of a level of combustible gas in the air transferred from the interior of the enclosure further comprises terminating hot work within the enclosure in response to detection of a level of combustible gas above a predefined level. 38. The method of claim 37 wherein the predefined level is at most 25% of the lower explosive limit of the combustible gas. 39. The method of claim 34 further comprising forming the enclosure at a location where hot work is to be conducted on a production platform that drills for flammable materials. 40. The method of claim 39 further comprising: a. conducting drilling operations for flammable materials on the production platform; and b. terminating drilling operations in response to detection of a level of combustible gas in the air transferred from the interior of the enclosure. 41. The method of claim 34 further comprising sampling the air transferred from the interior of the enclosure for the presence of a level of oxygen. 42. The method of claim 34 wherein said conducting hot work within an enclosure further comprises conducting welding within the enclosure.
2,600
9,937
9,937
14,449,326
2,631
A nonlinear distorter is configured to mitigate nonlinearity from a nonlinear component of a nonlinear system. The nonlinear distorter operates to model the nonlinearity as a function of a piecewise polynomial approximation applied to segments of a nonlinear function of the nonlinearity. The nonlinear distorter generates a model output that decreases the nonlinearity of the nonlinear component.
1. A nonlinear system for mitigating nonlinearity from a nonlinear behavior having memory or exhibiting a memory effect comprising: a memory storing executable components; and a processor, coupled to the memory, configured to execute or facilitate execution of the executable components, comprising: a nonlinear component configured to process an input and provide an output that comprises a nonlinearity; and a distortion component configured to generate a model of the nonlinearity of the nonlinear component based on a segmentwise piecewise polynomial approximation and provide a model output that decreases the nonlinearity. 2. The nonlinear system of claim 1, further comprising: a distortion core component configured to generate an approximation of the nonlinearity or an inverse approximation based on the segmentwise piecewise polynomial approximation applied to a number of N segments of a function of the nonlinearity. 3. The nonlinear system of claim 2, wherein the number of N segments comprise a P order of complexity, wherein N and P comprise an integer of at least two. 4. The nonlinear system of claim 2, further comprising: an error component configured to control an approximation error based on the number of segments and a segmentation of the number of segments, wherein the approximation error is based on a nonlinearity function of the nonlinearity and an approximation of the nonlinearity function through the piecewise polynomial function with the number of N segments. 5. The nonlinear system of claim 1, further comprising: a coefficient component configured to receive the input and the output and estimate a set of coefficients based on the input signal, the output and the model output that is generated by the distortion component to mitigate the nonlinearity from the processing operation. 6. 
The nonlinear system of claim 5, wherein the distorter component generates the model output without changing a degree of complexity of the nonlinear component and is configured to model the nonlinearity of the nonlinear component as a function of the set of coefficients. 7. The nonlinear system of claim 6, wherein the distortion component is further configured to generate the model output based on the model comprising the input and a pre-inverse function or a post inverse function of the nonlinearity of the nonlinear component that mitigates the nonlinearity from the processing operation. 8. The system of claim 1, wherein the nonlinear component comprises at least one of a power amplifier, an analog component or a digital component of a communication transceiver, or a hybrid analog and digital component configured to separately transmit and receive signals. 9. The nonlinear system of claim 1, wherein the distortion component is further configured to generate the model of the nonlinearity by generating a segmentwise piecewise polynomial approximation or an inverse approximation of a nonlinear function that corresponds to the nonlinearity of the nonlinear component in real time via a number of N segments, wherein N comprises an integer greater than one, and the N segments comprise a P order of complexity, wherein P comprises an integer of at least two. 10. The nonlinear system of claim 1, further comprising: a distortion core component configured to execute runtime operations of the input with a set of coefficients for a memory slice of the nonlinear behavior and the output that comprises the nonlinearity; and a lookup table generator configured to receive a set of coefficients from a coefficient component and generate a lookup table to provide the set of coefficients to the distortion core component corresponding to the memory slice. 11. 
The nonlinear system of claim 1, further comprising: a coefficient component configured to estimate the set of coefficients that correspond to the nonlinearity of the nonlinear component for the memory slice, process the input and an error based on a number of segments selected of a nonlinear function of the nonlinearity, and determine a segmentation of the number of segments. 12. The nonlinear system of claim 11, further comprising: an adaptive segmentation component configured to determine the segmentation of the number of segments based on the error and an order of complexity of the number of segments and to select the number of segments of the nonlinear function upon which the segmentwise piecewise polynomial approximation operates. 13. A mobile device that mitigates nonlinearity from a nonlinear behavior of a nonlinear component, comprising: a memory storing executable instructions; and a processor, coupled to the memory, that executes or facilitates execution of the executable instructions to at least: facilitate a nonlinearity in an output with a nonlinearity function via the nonlinear component; generate an evaluation of the nonlinearity function of the output based on a piecewise polynomial approximation that is applied to segments of the nonlinearity function; and provide a model output that decreases a nonlinearity that is generated by the nonlinearity function based on the evaluation. 14. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: determine, as a part of the evaluation, a set of coefficients that is a function of the input signal, the output signal and the modified output. 15. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: control an approximation error based on the number of segments and a segmentation of the number of segments. 16. 
The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: generate a lookup table corresponding to a set of coefficients of the nonlinear function related to a memory slice of the nonlinear component. 17. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: determine a segmentation of the number of segments based on an approximation error of the number of segments. 18. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: identify a set of coefficients of the nonlinear function related to a memory slice with one or more least squares operations as part of the piecewise polynomial approximation; and generate, via one or more multipliers, the evaluation by indexing a look up table for a set of coefficients corresponding to a memory slice in Cartesian coordinates. 19. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: identify a set of coefficients of the nonlinear function related to a memory slice with one or more least squares operations as part of the piecewise polynomial approximation; and generate, via one or more CORDIC components and independent of a multiplier, the evaluation by indexing a look up table for a set of coefficients corresponding to a memory slice in Polar coordinates. 20. 
A method for mitigating nonlinearity in a nonlinear component comprising: approximating, via a processing device coupled to a memory, a nonlinearity function of the nonlinearity with a set of piecewise polynomial approximations to different segments of the nonlinearity; and providing a model output that decreases the nonlinearity generated by the nonlinear component that comprises a post inverse of the nonlinearity function or a pre-inverse of the nonlinearity function that operates to decrease the nonlinearity in an output of the nonlinear component as a function of the set of piecewise polynomial approximations. 21. The method of claim 20, further comprising: determining an approximation error as a function of a nonlinearity function and at least one of the set of piecewise polynomial functions. 22. The method of claim 20, further comprising: selecting the different segments of the nonlinearity function to a memory slice based on at least one of an approximation error or an order of complexity of the different segments. 23. The method of claim 20, further comprising: adaptively determining a set of coefficients corresponding to the nonlinearity function of a memory slice with a least square operation applied to the different segments of the nonlinearity function of the memory slice as a function of at least one of the different segments selected, the number of the different segments selected, an approximation error, an order of complexity of the different segments selected, or a previous set of coefficients stored in a look up table corresponding to a previous memory slice; and iteratively updating at least one look up table with the set of coefficients. 24. 
The method of claim 20, further comprising: generating the model output as a function of a set of coefficients from at least one look up table and at least one of a preceding output result of a preceding memory slice, a set of multiplications computing a magnitude of an input to the nonlinearity component in a Cartesian coordinate system, or a set of CORDIC computations without one or more multipliers computing the magnitude of the input to the nonlinearity component in a polar coordinate system. 25. The method of claim 20, further comprising: controlling an approximation error of the model output based on a number of the different segments, a segmentation of the number of segments, and a polynomial order per segment.
A nonlinear distorter is configured to mitigate nonlinearity from a nonlinear component of a nonlinear system. The nonlinear distorter operates to model the nonlinearity as a function of a piecewise polynomial approximation applied to segments of a nonlinear function of the nonlinearity. The nonlinear distorter generates a model output that decreases the nonlinearity of the nonlinear component. 1. A nonlinear system for mitigating nonlinearity from a nonlinear behavior having memory or exhibiting a memory effect comprising: a memory storing executable components; and a processor, coupled to the memory, configured to execute or facilitate execution of the executable components, comprising: a nonlinear component configured to process an input and provide an output that comprises a nonlinearity; and a distortion component configured to generate a model of the nonlinearity of the nonlinear component based on a segmentwise piecewise polynomial approximation and provide a model output that decreases the nonlinearity. 2. The nonlinear system of claim 1, further comprising: a distortion core component configured to generate an approximation of the nonlinearity or an inverse approximation based on the segmentwise piecewise polynomial approximation applied to a number of N segments of a function of the nonlinearity. 3. The nonlinear system of claim 2, wherein the number of N segments comprises a P order of complexity, wherein N and P comprise an integer of at least two. 4. The nonlinear system of claim 2, further comprising: an error component configured to control an approximation error based on the number of segments and a segmentation of the number of segments, wherein the approximation error is based on a nonlinearity function of the nonlinearity and an approximation of the nonlinearity function through the piecewise polynomial function with the number of N segments. 5. 
The nonlinear system of claim 1, further comprising: a coefficient component configured to receive the input and the output and estimate a set of coefficients based on the input signal, the output and the model output that is generated by the distortion component to mitigate the nonlinearity from the processing operation. 6. The nonlinear system of claim 5, wherein the distorter component generates the model output without changing a degree of complexity of the nonlinear component and is configured to model the nonlinearity of the nonlinear component as a function of the set of coefficients. 7. The nonlinear system of claim 6, wherein the distortion component is further configured to generate the model output based on the model comprising the input and a pre-inverse function or a post inverse function of the nonlinearity of the nonlinear component that mitigates the nonlinearity from the processing operation. 8. The system of claim 1, wherein the nonlinear component comprises at least one of a power amplifier, an analog component or a digital component of a communication transceiver, or a hybrid analog and digital component configured to separately transmit and receive signals. 9. The nonlinear system of claim 1, wherein the distortion component is further configured to generate the model of the nonlinearity by generating a segmentwise piecewise polynomial approximation or an inverse approximation of a nonlinear function that corresponds to the nonlinearity of the nonlinear component in real time via a number of N segments, wherein N comprises an integer greater than one, and the N segments comprise a P order of complexity, wherein P comprises an integer of at least two. 10. 
The nonlinear system of claim 1, further comprising: a distortion core component configured to execute runtime operations of the input with a set of coefficients for a memory slice of the nonlinear behavior and the output that comprises the nonlinearity; and a lookup table generator configured to receive a set of coefficients from a coefficient component and generate a lookup table to provide the set of coefficients to the distortion core component corresponding to the memory slice. 11. The nonlinear system of claim 1, further comprising: a coefficient component configured to estimate the set of coefficients that correspond to the nonlinearity of the nonlinear component for the memory slice, process the input and an error based on a number of segments selected of a nonlinear function of the nonlinearity, and determine a segmentation of the number of segments. 12. The nonlinear system of claim 11, further comprising: an adaptive segmentation component configured to determine the segmentation of the number of segments based on the error and an order of complexity of the number of segments and to select the number of segments of the nonlinear function upon which the segmentwise piecewise polynomial approximation operates. 13. A mobile device that mitigates nonlinearity from a nonlinear behavior of a nonlinear component, comprising: a memory storing executable instructions; and a processor, coupled to the memory, that executes or facilitates execution of the executable instructions to at least: facilitate a nonlinearity in an output with a nonlinearity function via the nonlinear component; generate an evaluation of the nonlinearity function of the output based on a piecewise polynomial approximation that is applied to segments of the nonlinearity function; and provide a model output that decreases a nonlinearity that is generated by the nonlinearity function based on the evaluation. 14. 
The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: determine, as a part of the evaluation, a set of coefficients that is a function of the input signal, the output signal and the modified output. 15. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: control an approximation error based on the number of segments and a segmentation of the number of segments. 16. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: generate a lookup table corresponding to a set of coefficients of the nonlinear function related to a memory slice of the nonlinear component. 17. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: determine a segmentation of the number of segments based on an approximation error of the number of segments. 18. The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: identify a set of coefficients of the nonlinear function related to a memory slice with one or more least squares operations as part of the piecewise polynomial approximation; and generate, via one or more multipliers, the evaluation by indexing a look up table for a set of coefficients corresponding to a memory slice in Cartesian coordinates. 19. 
The mobile device of claim 13, wherein the processor further executes or facilitates the execution of the executable instructions to: identify a set of coefficients of the nonlinear function related to a memory slice with one or more least squares operations as part of the piecewise polynomial approximation; and generate, via one or more CORDIC components and independent of a multiplier, the evaluation by indexing a look up table for a set of coefficients corresponding to a memory slice in Polar coordinates. 20. A method for mitigating nonlinearity in a nonlinear component comprising: approximating, via a processing device coupled to a memory, a nonlinearity function of the nonlinearity with a set of piecewise polynomial approximations to different segments of the nonlinearity; and providing a model output that decreases the nonlinearity generated by the nonlinear component that comprises a post inverse of the nonlinearity function or a pre-inverse of the nonlinearity function that operates to decrease the nonlinearity in an output of the nonlinear component as a function of the set of piecewise polynomial approximations. 21. The method of claim 20, further comprising: determining an approximation error as a function of a nonlinearity function and at least one of the set of piecewise polynomial functions. 22. The method of claim 20, further comprising: selecting the different segments of the nonlinearity function to a memory slice based on at least one of an approximation error or an order of complexity of the different segments. 23. 
The method of claim 20, further comprising: adaptively determining a set of coefficients corresponding to the nonlinearity function of a memory slice with a least square operation applied to the different segments of the nonlinearity function of the memory slice as a function of at least one of the different segments selected, the number of the different segments selected, an approximation error, an order of complexity of the different segments selected, or a previous set of coefficients stored in a look up table corresponding to a previous memory slice; and iteratively updating at least one look up table with the set of coefficients. 24. The method of claim 20, further comprising: generating the model output as a function of a set of coefficients from at least one look up table and at least one of a preceding output result of a preceding memory slice, a set of multiplications computing a magnitude of an input to the nonlinearity component in a Cartesian coordinate system, or a set of CORDIC computations without one or more multipliers computing the magnitude of the input to the nonlinearity component in a polar coordinate system. 25. The method of claim 20, further comprising: controlling an approximation error of the model output based on a number of the different segments, a segmentation of the number of segments, and a polynomial order per segment.
2,600
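The claims of this record describe approximating a nonlinearity segmentwise with least-squares-fitted polynomials whose coefficients are indexed from a look up table, with the approximation error controlled by the number of segments N and the polynomial order P per segment. A minimal NumPy sketch of that idea follows; the tanh transfer curve, segment count, and order are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

# Illustrative stand-in for the nonlinear component's transfer curve
# (e.g. power-amplifier compression); an assumption for this sketch.
def nonlinearity(x):
    return np.tanh(2.0 * x)

N_SEGMENTS = 4  # N segments, N >= 2
ORDER = 2       # polynomial order P per segment, P >= 2

x = np.linspace(0.0, 1.0, 401)
edges = np.linspace(0.0, 1.0, N_SEGMENTS + 1)

# Least-squares fit of one polynomial per segment; the per-segment
# coefficient sets act as the look-up-table entries.
lut = [
    np.polyfit(x[(x >= lo) & (x <= hi)],
               nonlinearity(x[(x >= lo) & (x <= hi)]), ORDER)
    for lo, hi in zip(edges[:-1], edges[1:])
]

def evaluate(xq):
    """Index the LUT by segment and evaluate the local polynomial."""
    seg = min(int(xq * N_SEGMENTS), N_SEGMENTS - 1)
    return np.polyval(lut[seg], xq)

# Approximation error: gap between the true nonlinearity and the
# piecewise polynomial model over the input range.
approx_error = max(abs(evaluate(v) - nonlinearity(v)) for v in x)
```

Raising `N_SEGMENTS` or `ORDER` drives `approx_error` down, mirroring the claims' control of the approximation error by segment count, segmentation, and polynomial order per segment.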
9,938
9,938
15,298,363
2,674
An information processing device is provided with a terminal screen processing portion. When the terminal screen processing portion receives from a terminal a command to register and delete a functional item of each function that the information processing device executes, the terminal screen processing portion registers and deletes a selected functional item with respect to a shortcut tab of a list page and also guides a screen of the list page of a result of registration and deletion to a display portion of the terminal. Furthermore, the terminal screen processing portion, when receiving from a terminal a command to delete a functional item to be the last with respect to the list page, creates a screen in which the list page is not displayed, and guides the screen to the display portion of the terminal.
1. An information processing device capable of executing a plurality of functions relating to information processing, the information processing device comprising: a screen creation portion that is connected to a terminal including a display portion and an operating portion through a network and displays an individual tab on the terminal, the individual tab including a plurality of functional items previously arranged for each screen and having a list page in which a functional item among the plurality of functional items for each screen is able to be arbitrarily registered and deleted; and a setting processing portion that receives a command to register and delete the functional item from the terminal and then performs the registration and deletion of a selected functional item with respect to the list page, and also guides a screen of the list page of a result of the registration and deletion to the display portion of the terminal, wherein the screen creation portion, when receiving a command to delete a functional item to be last with respect to the list page from the terminal, creates a screen in which the list page is not displayed and guides the screen to the display portion of the terminal. 2. The information processing device according to claim 1, wherein the plurality of functional items for each screen are previously divided into a plurality of standard tabs. 3. The information processing device according to claim 1, wherein the screen creation portion, at registration of a first functional item, creates a screen in which the list page is displayed and the individual tab is also displayed and guides the screen to the display portion of the terminal. 4. The information processing device according to claim 1, wherein the screen creation portion creates a screen in which the list page is not displayed and the individual tab is not either displayed and guides the screen to the display portion of the terminal. 5. 
The information processing device according to claim 1, wherein, at the registration of the functional item, the selected functional item is set to the list page of the individual tab by a shortcut. 6. An image forming apparatus comprising the information processing device according to claim 1, wherein the information processing device includes at least two or more image information processing modes among image information processing modes of printing, copying, scanning, and facsimile.
2,600
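Claim 1 of this record reduces to a small state rule: functional items can be registered to and deleted from the list page of the individual tab, and once the last item is deleted the list page is no longer displayed. A toy sketch of that behavior; the class and method names are hypothetical, chosen only for illustration:

```python
class ShortcutListPage:
    """Toy model of the individual tab's list page: functional items
    can be registered and deleted, and the page is displayed only
    while at least one item remains (hypothetical sketch)."""

    def __init__(self):
        self._items = []

    def register(self, item):
        # Registering the first item makes the list page appear.
        if item not in self._items:
            self._items.append(item)

    def delete(self, item):
        # Deleting the last remaining item hides the list page.
        if item in self._items:
            self._items.remove(item)

    @property
    def displayed(self):
        return bool(self._items)
```

A terminal screen built from this model would show the individual tab only while `displayed` is true, matching claims 1, 3, and 4.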
9,939
9,939
14,090,258
2,625
A touch input method and a mobile terminal are provided. The mobile terminal includes a touch screen having a transparent display panel, a front touch panel configured to detect a touch input corresponding to the front of the transparent display panel, and a rear touch panel configured to detect a touch input corresponding to a rear of the transparent display panel. The touch input method includes detecting a touch input from one of the front touch panel and the rear touch panel, determining whether a user's intent is a data input via the rear touch panel in response to the touch input, and displaying a keypad on a top of the touch screen if the user's intent is determined as data input via the rear touch panel.
1. A touch input method in a mobile terminal having a touch screen that includes a transparent display panel, a front touch panel configured to detect a touch input corresponding to a front of the transparent display panel, and a rear touch panel configured to detect a touch input corresponding to a rear of the transparent display panel, the method comprising: detecting a touch input from one of the front touch panel and the rear touch panel; determining whether a user's intent is a data input via the rear touch panel in response to the touch input; and displaying a keypad on a top of the touch screen when the user's intent is determined as data input via the rear touch panel. 2. The method of claim 1, wherein the determining of whether the user's intent is a data input comprises: determining that the user's intent is a data input via the rear touch panel when the touch input is generated from a data input box of an image and is detected through the rear touch panel; determining that the user's intent is a data input via the front touch panel when the touch input is generated from the data input box of the image and is detected through the front touch panel. 3. The method of claim 2, further comprising: displaying the keypad on a bottom of the touch screen when the user's intent is data input via the front touch panel. 4. The method of claim 3, further comprising: detecting a first touch movement from the bottom of the touch screen to the top of the touch screen from one of the front touch panel and the rear touch panel while displaying the keypad on the bottom of the touch screen; and displaying the keypad on the top of the touch screen in response to the first touch movement. 5. 
The method of claim 1, further comprising: detecting a second touch movement from the top of the touch screen to the bottom of the touch screen from one of the front touch panel and the rear touch panel while displaying the keypad on the top of the touch screen; and displaying the keypad on the bottom of the touch screen in response to the second touch movement. 6. The method of claim 1, wherein the determining of whether the user's intent is data input comprises: determining that the user's intent is data input via the rear touch panel with a finger of one hand holding the mobile terminal when the touch input is generated from a data input box of an image and is detected through the rear touch panel. 7. The method of claim 1, wherein the keypad is displayed on the top of the touch screen when a display mode of the mobile terminal is a portrait mode. 8. The method of claim 1, further comprising: turning a power of the front touch panel off or not responding to a touch input corresponding to the front touch panel when the user's intent is data input via the rear touch panel; and turning a power of the rear touch panel off or not responding to a touch input corresponding to the rear touch panel when the user's intent is data input via the front touch panel. 9. 
A mobile terminal, the mobile terminal comprising: a touch screen including a transparent display panel, a front touch panel configured to detect a touch input corresponding to a front of the transparent display panel, and a rear touch panel configured to detect a touch input corresponding to a rear of the transparent display panel; and a controller configured to control the touch screen, wherein the controller controls the touch screen to detect a touch input from one of the front touch panel and the rear touch panel, determines whether a user's intent is a data input via the rear touch panel in response to the touch input, and controls the touch screen to display a keypad on a top of the touch screen when the user's intent is determined as data input via the rear touch panel. 10. The mobile terminal of claim 9, wherein the controller determines that the user's intent is data input via the rear touch panel when the touch input is generated from a data input box of an image and is detected through the rear touch panel, and determines that the user's intent is data input via the front touch panel when the touch input is generated from the data input box of the image and is detected through the front touch panel. 11. The mobile terminal of claim 10, wherein the controller controls the transparent display panel to display the keypad on a bottom of the touch screen when the user's intent is data input via the front touch panel. 12. The mobile terminal of claim 11, wherein the controller controls the touch screen to detect a first touch movement from the bottom of the touch screen to the top of the touch screen from one of the front touch panel and the rear touch panel while displaying the keypad on the bottom of the touch screen, and to display the keypad on the top of the touch screen in response to the touch movement. 13. 
The mobile terminal of claim 9, wherein the controller controls the touch screen to detect a second touch movement from the top of the touch screen to the bottom of the touch screen from one of the front touch panel and the rear touch panel while displaying the keypad on the top of the touch screen, and to display the keypad on the bottom of the touch screen in response to the second touch movement. 14. The mobile terminal of claim 9, wherein the controller determines a display mode as one of a landscape mode and a portrait mode, and controls the transparent display panel to display the keypad on the top of the touch screen when the display mode is determined as the portrait mode.
2,600
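The keypad-placement logic in this record's claims comes down to two decisions: rear-panel data input in portrait mode places the keypad at the top of the touch screen, front-panel input places it at the bottom (claims 1-3, 7), and a touch movement toward the opposite edge moves the keypad there (claims 4-5). A hedged sketch; the string identifiers are illustrative, not the terminal's actual API:

```python
def keypad_position(touch_panel, display_mode="portrait"):
    """Return where to display the keypad. `touch_panel` is "front"
    or "rear" (hypothetical identifiers for this sketch)."""
    if touch_panel == "rear" and display_mode == "portrait":
        return "top"      # rear-panel data input: keypad at the top
    return "bottom"       # front-panel data input: keypad at the bottom

def after_swipe(current_position, swipe_direction):
    """Claims 4-5: a touch movement toward the opposite edge of the
    screen relocates the keypad to that edge."""
    if swipe_direction == "up":
        return "top"
    if swipe_direction == "down":
        return "bottom"
    return current_position
```

The rear-panel rule reflects the claimed one-handed use case: a finger of the hand holding the terminal reaches the top of the screen from behind more easily than the bottom.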
9,940
9,940
14,209,244
2,625
A display apparatus has an image display unit having a plurality of arrayed pixel circuits, and an image signal compensation circuit that compensates an image signal and outputs the compensated signal to the image display unit. Each of the pixel circuits has a compensating capacitor which compensates the threshold voltage of the driving transistor. The image signal compensation circuit has a compensation memory storing compensation data for compensating the current variation of the driving transistors, a first comparison circuit which compares the image signal with a first threshold value, and an arithmetic circuit compensating the image signal. When the image signal has a luminance larger than the threshold value, the compensation is performed.
1. A display apparatus comprising an image display unit having a plurality of arrayed pixel circuits, each of the pixel circuits having a current light emitting device and a driving transistor supplying current to the current light emitting device; and an image signal compensation circuit compensating an image signal and outputs the compensated signal to the image display unit; wherein each of the pixel circuits having a compensating capacitor which compensates a threshold voltage of a corresponding driving transistor; and the image signal compensation circuit having a compensation memory storing compensation data for compensating current dispersion between the driving transistors, a comparison circuit which compares the image signal with a threshold value of luminance, and an arithmetic circuit which outputs the compensated image signal being compensated based on the compensation data, and the image signal compensation circuit outputs the compensated image signal when the image signal has a luminance larger than the threshold value, and outputs the image signal when the image signal has a luminance smaller than the threshold value. 2. 
The apparatus of claim 1, wherein the image signal compensation circuit comprises a compensation memory storing the compensation data for compensating the variation of current in the driving transistor; a first comparison circuit comparing the image signal and a first threshold of luminance; a lighting ratio calculation circuit calculating a lighting ratio of the plurality of arrayed pixel circuits for every frame of the image signal; a second comparison circuit comparing the lighting ratio calculated by the lighting ratio calculation circuit and the second threshold which is different from the first threshold, and an arithmetic circuit which performs the compensation, wherein the image signal compensation circuit compensates the image signal when the image signal is larger than the first threshold and the lighting ratio is larger than the second threshold. 3. The apparatus of claim 1, wherein each of the pixel circuits further comprises a first capacitor having a first terminal connected to a gate of the driving transistor; a second capacitor connected between a second terminal of the first capacitor and a source of the driving transistor; a first switch applying a reference voltage to a node to which the first and the second capacitors are connected; a second switch supplying an image signal voltage to the gate of the driving transistor; a third switch supplying an initialization voltage to a drain of the driving transistor, and a fourth switch supplying current to the drain of the driving transistor for emitting light from the current light emitting device, wherein the second capacitor is the compensating capacitor.
A display apparatus has an image display unit having a plurality of arrayed pixel circuits, and an image signal compensation circuit compensating an image signal and outputs the compensated signal to the image display unit. Each of the pixel circuits has a compensating capacitor which compensates the threshold voltage of the driving transistor. The image signal compensation circuit has a compensation memory storing a compensation data for compensating the current variation of the driving transistors, a first comparison circuit which compares the image signal and first threshold value, and an arithmetic circuit compensating the image signal. When the image signal has a luminance larger than the threshold value, the compensation is performed.1. A display apparatus comprising an image display unit having a plurality of arrayed pixel circuits, each of the pixel circuits having a current light emitting device and a driving transistor supplying current to the current light emitting device; and an image signal compensation circuit compensating an image signal and outputs the compensated signal to the image display unit; wherein each of the pixel circuits having a compensating capacitor which compensates a threshold voltage of a corresponding driving transistor; and the image signal compensation circuit having a compensation memory storing compensation data for compensating current dispersion between the driving transistors, a comparison circuit which compares the image signal with a threshold value of luminance, and an arithmetic circuit which outputs the compensated image signal being compensated based on the compensation data, and the image signal compensation circuit outputs the compensated image signal when the image signal has a luminance larger than the threshold value, and outputs the image signal when the image signal has a luminance smaller than the threshold value. 2. 
The apparatus of claim 1, wherein the image signal compensation circuit comprises a compensation memory storing the compensation data for compensating the variation of current in the driving transistor; a first comparison circuit comparing the image signal and a first threshold of luminance; a lighting ratio calculation circuit calculating a lighting ratio of the plurality of arrayed pixel circuits for every frame of the image signal; a second comparison circuit comparing the lighting ratio calculated by the lighting ratio calculation circuit and the second threshold which is different from the first threshold, and an arithmetic circuit which performs the compensation, wherein the image signal compensation circuit compensates the image signal when the image signal is larger than the first threshold and the lighting ratio is larger than the second threshold. 3. The apparatus of claim 1, wherein each of the pixel circuits further comprises a first capacitor having a first terminal connected to a gate of the driving transistor; a second capacitor connected between a second terminal of the first capacitor and a source of the driving transistor; a first switch applying a reference voltage to a node to which the first and the second capacitors are connected; a second switch supplying an image signal voltage to the gate of the driving transistor; a third switch supplying an initialization voltage to drain of the driving transistor, and a fourth switch supplying current to the drain of the drive transistor for emitting light from the current light emitting device, wherein the second capacitor is the compensating capacitor.
2,600
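The threshold-gated compensation recited in claims 1 and 2 above can be sketched in a few lines of Python. This is an illustrative model, not the patented circuit: the additive correction, the helper names, and the nested-list frame representation are all assumptions, since the claims specify the comparisons and the stored compensation data but not the arithmetic itself.

```python
def lighting_ratio(frame, threshold):
    """Fraction of pixels lit above the luminance threshold (claim 2's
    lighting ratio calculation circuit, computed once per frame)."""
    pixels = [p for row in frame for p in row]
    return sum(p > threshold for p in pixels) / len(pixels)

def compensate_frame(frame, compensation, first_threshold, second_threshold):
    """Model of the image signal compensation circuit: a pixel's signal is
    corrected with its stored compensation datum only when its own luminance
    exceeds the first threshold AND the frame's lighting ratio exceeds the
    second threshold (claim 2); otherwise the signal passes through."""
    if lighting_ratio(frame, first_threshold) <= second_threshold:
        return [row[:] for row in frame]  # no compensation for this frame
    return [
        [sig + comp if sig > first_threshold else sig  # assumed additive fix
         for sig, comp in zip(sig_row, comp_row)]
        for sig_row, comp_row in zip(frame, compensation)
    ]
```

Gating the correction on both a per-pixel and a per-frame condition is what lets the circuit skip compensation entirely on mostly-dark frames, where driving-transistor dispersion is least visible.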
9,941
9,941
15,423,479
2,631
A method to transmit data over a single-wire bus wherein a first communication channel is defined by pulses of different durations according to the state of the transmitted bit and depending on a reference duration, and a second communication channel is defined by the reference duration.
1. A method to transmit data on a single-wire bus, comprising: transmitting first channel data on the single-wire bus by transmitting data pulses having data pulse durations based on first channel data values and on reference pulse durations; and transmitting second channel data on the single-wire bus by transmitting reference pulses having the reference pulse durations, wherein the reference pulse durations are based on second channel data values. 2. A method to transmit data as defined in claim 1, wherein a first value of the first channel data has a pulse duration less than a corresponding reference pulse duration and a second value of the first channel data has a pulse duration greater than the corresponding reference pulse duration. 3. A method to transmit data as defined in claim 2, wherein a first value of the second channel data corresponds to a first reference pulse duration and a second value of the second channel data corresponds to a second reference pulse duration. 4. A method to transmit data as defined in claim 1, wherein each of the reference pulse durations corresponds to a value of the second channel data. 5. A method to transmit data as defined in claim 1, wherein a reference pulse duration is fixed for a word of the first channel data. 6. A method to transmit data as defined in claim 5, wherein each word of the first channel data is preceded by a corresponding reference pulse that sets a reference pulse duration for a word of the first channel data following the reference pulse. 7. A method to transmit data as defined in claim 1, wherein the single wire bus is at a first voltage level when idle and wherein the data pulses and the reference pulses are separated from one another by periods of fixed duration at a second voltage level. 8. A method to transmit data as defined in claim 7, wherein the data pulse durations and the reference pulse durations are multiples of the periods of fixed duration. 9. 
A system to communicate data, comprising: a single-wire bus; a transmitting device, the transmitting device arranged to: transmit first channel data on the single-wire bus by transmitting data pulses having data pulse durations based on first channel data values and on reference pulse durations; transmit second channel data on the single-wire bus by transmitting reference pulses having the reference pulse durations, wherein the reference pulse durations are based on second channel data values; and a receiving device coupled to the transmitting device via the single-wire bus, the receiving device arranged to: receive the transmitted data pulses on the single-wire bus; determine the first channel data values based on the data pulse durations and on the reference pulse durations; and determine the second channel data values based on the reference pulse durations. 10. A system to communicate data as defined in claim 9, wherein the transmitting device is arranged in one of a printer and an ink cartridge, and wherein the receiving device is arranged in a different one of the printer and the ink cartridge. 11. A system to communicate data as defined in claim 9, wherein the transmitting device is arranged in one of a mobile telephone and a battery, and wherein the receiving device is arranged in a different one of the mobile telephone and the battery. 12. A system to communicate data as defined in claim 9, wherein the transmitting device is a master device and the receiving device is a slave device. 13. A method to receive data on a single-wire bus, comprising: receiving data pulses having data pulse durations and reference pulses having reference pulse durations on the single-wire bus; determining first channel data values based on the data pulse durations and on the reference pulse durations; and determining second channel data values based on the reference pulse durations. 14. 
A method to receive data as defined in claim 13, wherein a first value of the first channel data has a data pulse duration less than a corresponding reference pulse duration and a second value of the first channel data has a data pulse duration greater than the corresponding reference pulse duration. 15. A method to receive data as defined in claim 14, wherein a first value of the second channel data corresponds to a first reference pulse duration and a second value of the second channel data corresponds to a second reference pulse duration. 16. A method to receive data as defined in claim 13, wherein each of the reference pulse durations corresponds to a value of the second channel data. 17. A method to receive data as defined in claim 13, wherein a reference pulse duration is fixed for a word of the first channel data. 18. A method to receive data as defined in claim 17, wherein each word of the first channel data is preceded by a corresponding reference pulse that sets a reference pulse duration for a word of the first channel data following the reference pulse. 19. A method to receive data as defined in claim 13, wherein the single-wire bus is at a first voltage level when idle and wherein the data pulses and the reference pulses are separated from one another by periods of fixed duration at a second voltage level. 20. A method to receive data as defined in claim 19, wherein the data pulse durations and the reference pulse durations correspond to multiples of the periods of fixed duration.
A method to transmit data over a single-wire bus wherein a first communication channel is defined by pulses of different durations according to the state of the transmitted bit and depending on a reference duration, and a second communication channel is defined by the reference duration.1. A method to transmit data on a single-wire bus, comprising: transmitting first channel data on the single-wire bus by transmitting data pulses having data pulse durations based on first channel data values and on reference pulse durations; and transmitting second channel data on the single-wire bus by transmitting reference pulses having the reference pulse durations, wherein the reference pulse durations are based on second channel data values. 2. A method to transmit data as defined in claim 1, wherein a first value of the first channel data has a pulse duration less than a corresponding reference pulse duration and a second value of the first channel data has a pulse duration greater than the corresponding reference pulse duration. 3. A method to transmit data as defined in claim 2, wherein a first value of the second channel data corresponds to a first reference pulse duration and a second value of the second channel data corresponds to a second reference pulse duration. 4. A method to transmit data as defined in claim 1, wherein each of the reference pulse durations corresponds to a value of the second channel data. 5. A method to transmit data as defined in claim 1, wherein a reference pulse duration is fixed for a word of the first channel data. 6. A method to transmit data as defined in claim 5, wherein each word of the first channel data is preceded by a corresponding reference pulse that sets a reference pulse duration for a word of the first channel data following the reference pulse. 7. 
A method to transmit data as defined in claim 1, wherein the single wire bus is at a first voltage level when idle and wherein the data pulses and the reference pulses are separated from one another by periods of fixed duration at a second voltage level. 8. A method to transmit data as defined in claim 7, wherein the data pulse durations and the reference pulse durations are multiples of the periods of fixed duration. 9. A system to communicate data, comprising: a single-wire bus; a transmitting device, the transmitting device arranged to: transmit first channel data on the single-wire bus by transmitting data pulses having data pulse durations based on first channel data values and on reference pulse durations; transmit second channel data on the single-wire bus by transmitting reference pulses having the reference pulse durations, wherein the reference pulse durations are based on second channel data values; and a receiving device coupled to the transmitting device via the single-wire bus, the receiving device arranged to: receive the transmitted data pulses on the single-wire bus; determine the first channel data values based on the data pulse durations and on the reference pulse durations; and determine the second channel data values based on the reference pulse durations. 10. A system to communicate data as defined in claim 9, wherein the transmitting device is arranged in one of a printer and an ink cartridge, and wherein the receiving device is arranged in a different one of the printer and the ink cartridge. 11. A system to communicate data as defined in claim 9, wherein the transmitting device is arranged in one of a mobile telephone and a battery, and wherein the receiving device is arranged in a different one of the mobile telephone and the battery. 12. A system to communicate data as defined in claim 9, wherein the transmitting device is a master device and the receiving device is a slave device. 13. 
A method to receive data on a single-wire bus, comprising: receiving data pulses having data pulse durations and reference pulses having reference pulse durations on the single-wire bus; determining first channel data values based on the data pulse durations and on the reference pulse durations; and determining second channel data values based on the reference pulse durations. 14. A method to receive data as defined in claim 13, wherein a first value of the first channel data has a data pulse duration less than a corresponding reference pulse duration and a second value of the first channel data has a data pulse duration greater than the corresponding reference pulse duration. 15. A method to receive data as defined in claim 14, wherein a first value of the second channel data corresponds to a first reference pulse duration and a second value of the second channel data corresponds to a second reference pulse duration. 16. A method to receive data as defined in claim 13, wherein each of the reference pulse durations corresponds to a value of the second channel data. 17. A method to receive data as defined in claim 13, wherein a reference pulse duration is fixed for a word of the first channel data. 18. A method to receive data as defined in claim 17, wherein each word of the first channel data is preceded by a corresponding reference pulse that sets a reference pulse duration for a word of the first channel data following the reference pulse. 19. A method to receive data as defined in claim 13, wherein the single-wire bus is at a first voltage level when idle and wherein the data pulses and the reference pulses are separated from one another by periods of fixed duration at a second voltage level. 20. A method to receive data as defined in claim 19, wherein the data pulse durations and the reference pulse durations correspond to multiples of the periods of fixed duration.
2,600
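The two-channel pulse-duration scheme claimed above lends itself to a short sketch. The concrete numbers below (reference pulses of 3 or 5 base periods, data pulses one period shorter or longer than the reference) are illustrative assumptions; the claims fix only the relations — data pulses shorter or longer than the reference (claim 2), the reference duration itself carrying the second channel (claims 1, 3), and one reference pulse per word (claims 5, 6) — not the durations.

```python
REF_DURATIONS = {0: 3, 1: 5}  # assumed reference lengths, in base periods

def encode_word(first_bits, second_bit):
    """One word on the wire: a leading reference pulse whose duration carries
    the second-channel bit, followed by one data pulse per first-channel bit,
    shorter than the reference for 0 and longer for 1 (claims 1, 2, 6)."""
    ref = REF_DURATIONS[second_bit]
    return [ref] + [ref + 1 if b else ref - 1 for b in first_bits]

def decode_word(pulses):
    """Receiver side: recover the second-channel bit from the reference pulse,
    then each first-channel bit by comparing its data pulse against that same
    reference (claims 13-15)."""
    ref, data = pulses[0], pulses[1:]
    second_bit = min(REF_DURATIONS, key=lambda b: abs(REF_DURATIONS[b] - ref))
    first_bits = [1 if d > ref else 0 for d in data]
    return first_bits, second_bit
```

Because every data pulse is judged relative to its own word's reference pulse, the receiver needs no absolute timing calibration — a likely motivation for fixing the reference duration per word (claim 5) rather than per transmission.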
9,942
9,942
14,419,258
2,698
A rest for supporting an object, the rest comprising: a rest fixture ( 5 ) which is in use attached to an object, wherein the rest fixture comprises a stand coupling ( 67 ) which includes a magnetic element ( 69 ) for removable attachment of a rest stand; and a rest stand for resting on a surface, wherein the rest stand comprises a support body ( 11 ), at least one leg ( 15 ) which is supported by the support body and a fixture coupling ( 17 ) which includes a magnetic element ( 51 ) for removable attachment to the stand coupling on the object.
1. A rest for supporting an object, the rest comprising: a rest fixture which is in use attached to an object, wherein the rest fixture comprises a stand coupling which includes a magnetic element for removable attachment of a rest stand; and a rest stand for resting on a surface, wherein the rest stand comprises a support body, at least one leg which is supported by the support body and a fixture coupling which includes a magnetic element for removable attachment to the stand coupling on the object. 2. The rest of claim 1, wherein the object is a rifle. 3. The rest of claim 1, wherein the object is a camera. 4. The rest of claim 1, wherein the object is a scope. 5. The rest of claim 1, wherein the object is a monocular or binoculars. 6. The rest of any of claims 1 to 5, wherein the rest stand comprises two legs. 7. The rest of any of claims 1 to 5, wherein the rest stand comprises three legs. 8. The rest of any of claims 1 to 7, wherein each leg comprises at least one leg element and a foot. 9. The rest of claim 8, wherein each leg comprises a plurality of leg elements, which allows for the legs to have different lengths. 10. The rest of claim 9, wherein the leg elements are connected by a screw or magnetic coupling. 11. The rest of claim 9, wherein the leg elements are telescopic or continuously extendable. 12. The rest of any of claims 8 to 11, wherein each foot is connected to the respective leg element by a screw or magnetic coupling. 13. The rest of any of claims 8 to 12, wherein each foot includes a removable cap. 14. The rest of any of claims 1 to 13, wherein each leg is comprised of composite materials. 15. The rest of any of claims 1 to 14, wherein the legs are each pivotably coupled to the support body about a respective pivot between a first, expanded or in use configuration and a collapsed configuration. 16. The rest of claim 15, wherein the legs each include a pivot connector at one, upper end thereof which is pivotably coupled to the support body. 
17. The rest of claim 16, wherein the support body includes first and second magnetic elements, each adjacent to a respective outer side of one of the pivots, and the pivot connectors each include a magnetic element, whereby the magnetic element of the respective leg is attracted to the respective magnetic element of the support body, thereby maintaining the legs in the expanded, in use configuration during use. 18. The rest of any of claims 1 to 17, wherein the magnetic element of the fixture coupling comprises a magnet. 19. The rest of claim 18, wherein the magnetic element of the fixture coupling is provided as a magnet pair which is located to opposite sides of an axis of rotation of the stand. 20. The rest of any of claims 1 to 19, wherein the fixture coupling and the stand coupling each comprise circular or near-circular sections. 21. The rest of any of claims 1 to 20, wherein the rest fixture further comprises an attachment body which is in use fixed to the object. 22. The rest of claim 21, wherein the attachment body comprises a plate which is attached by a fixing to the object. 23. The rest of claim 22, wherein the attachment body includes an aperture through which the fixing is made to the object. 24. The rest of any of claims 1 to 23, wherein the rest fixture is integrated into the object. 25. The rest of any of claims 1 to 24, wherein the fixture coupling on the rest stand comprises one of a male projection or coupling or a female recess or coupling and the stand coupling on the rest fixture comprises the other of a female recess or coupling or a male projection or coupling. 26. The rest of claim 25, wherein the male projection or coupling or the female recess or coupling presents a part-spherical surface. 27. The rest of claim 25, wherein the male projection or coupling and the female recess or coupling present part-spherical surfaces. 28. 
A rest stand for attachment to an object and resting on a surface, wherein the rest stand comprises a support body, at least one leg which is supported by the support body and a fixture coupling which includes a magnetic element for removable attachment to a stand coupling on the object. 29. The rest stand of claim 28, wherein the object is a rifle. 30. The rest stand of claim 28, wherein the object is a camera. 31. The rest stand of claim 28, wherein the object is a scope. 32. The rest stand of claim 28, wherein the object is a monocular or binoculars. 33. The rest stand of any of claims 28 to 32, comprising two legs. 34. The rest stand of any of claims 28 to 32, comprising three legs. 35. The rest stand of any of claims 28 to 34, wherein each leg comprises at least one leg element and a foot. 36. The rest stand of claim 35, wherein each leg comprises a plurality of leg elements, which allows for the legs to have different lengths. 37. The rest stand of claim 36, wherein the leg elements are connected by a screw or magnetic coupling. 38. The rest stand of claim 36, wherein the leg elements are telescopic or continuously extendable. 39. The rest stand of any of claims 35 to 38, wherein each foot is connected to the respective leg element by a screw or magnetic coupling. 40. The rest stand of any of claims 35 to 39, wherein each foot includes a removable cap. 41. The rest stand of any of claims 28 to 40, wherein each leg is comprised of composite materials. 42. The rest stand of any of claims 28 to 41, wherein the legs are each pivotably coupled to the support body about a respective pivot between a first, expanded or in use configuration and a collapsed configuration. 43. The rest stand of claim 42, wherein the legs each include a pivot connector at one, upper end thereof which is pivotably coupled to the support body. 44. 
The rest stand of claim 43, wherein the support body includes first and second magnetic elements, each adjacent to a respective outer side of one of the pivots, and the pivot connectors each include a magnetic element, whereby the magnetic element of the respective leg is attracted to the respective magnetic element of the support body, thereby maintaining the legs in the expanded, in use configuration during use. 45. The rest stand of any of claims 28 to 44, wherein the magnetic element of the fixture coupling comprises a magnet. 46. The rest stand of claim 45, wherein the magnetic element of the fixture coupling is provided as a magnet pair which is located to opposite sides of an axis of rotation of the stand. 47. The rest stand of any of claims 28 to 46, wherein the fixture coupling comprises a circular or near-circular section, providing for rotation thereabout. 48. The rest stand of any of claims 28 to 47, wherein the fixture coupling comprises one of a male projection or coupling or a female recess or coupling. 49. The rest stand of claim 48, wherein the one of a male projection or coupling or a female recess or coupling presents a part-spherical surface. 50. A rest fixture which is in use attached to an object and provides for removable attachment of a rest stand, wherein the rest fixture comprises a stand coupling which includes a magnetic element for removable attachment of a rest stand. 51. The rest fixture of claim 50, wherein the object is a rifle. 52. The rest fixture of claim 50, wherein the object is a camera. 53. The rest fixture of claim 50, wherein the object is a scope. 54. The rest fixture of claim 50, wherein the object is a monocular or binoculars. 55. The rest fixture of any of claims 50 to 54, further comprising an attachment body which is in use fixed to the object. 56. The rest fixture of claim 55, wherein the attachment body comprises a plate which is attached by a fixing to the object. 57. 
The rest fixture of claim 56, wherein the attachment body includes an aperture through which the fixing is made to the object. 58. The rest fixture of claim 57, wherein the rest fixture is integrated into the object. 59. The rest fixture of any of claims 50 to 58, wherein the stand coupling comprises a circular or near-circular section, providing for rotation thereabout. 60. The rest fixture of any of claims 50 to 59, wherein the stand coupling comprises one of a female recess or coupling or a male projection or coupling. 61. The rest fixture of claim 60, wherein the one of a female recess or coupling or a male projection or coupling presents a part-spherical surface. 62. A rifle comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 63. A camera comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 64. A scope comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 65. A monocular or binoculars comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 66. 
A rest fixture substantially as hereinbefore described with reference to the accompanying drawings. 70. A rifle rest substantially as hereinbefore described with reference to the accompanying drawings.
A rest for supporting an object, the rest comprising: a rest fixture ( 5 ) which is in use attached to an object, wherein the rest fixture comprises a stand coupling ( 67 ) which includes a magnetic element ( 69 ) for removable attachment of a rest stand; and a rest stand for resting on a surface, wherein the rest stand comprises a support body ( 11 ), at least one leg ( 15 ) which is supported by the support body and a fixture coupling ( 17 ) which includes a magnetic element ( 51 ) for removable attachment to the stand coupling on the object.1. A rest for supporting an object, the rest comprising: a rest fixture which is in use attached to an object, wherein the rest fixture comprises a stand coupling which includes a magnetic element for removable attachment of a rest stand; and a rest stand for resting on a surface, wherein the rest stand comprises a support body, at least one leg which is supported by the support body and a fixture coupling which includes a magnetic element for removable attachment to the stand coupling on the object. 2. The rest of claim 1, wherein the object is a rifle. 3. The rest of claim 1, wherein the object is a camera. 4. The rest of claim 1, wherein the object is a scope. 5. The rest of claim 1, wherein the object is a monocular or binoculars. 6. The rest of any of claims 1 to 5, wherein the rest stand comprises two legs. 7. The rest of any of claims 1 to 5, wherein the rest stand comprises three legs. 8. The rest of any of claims 1 to 7, wherein each leg comprises at least one leg element and a foot. 9. The rest of claim 8, wherein each leg comprises a plurality of leg elements, which allows for the legs to have different lengths. 10. The rest of claim 9, wherein the leg elements are connected by a screw or magnetic coupling. 11. The rest of claim 9, wherein the leg elements are telescopic or continuously extendable. 12. 
The rest of any of claims 8 to 11, wherein each foot is connected to the respective leg element by a screw or magnetic coupling. 13. The rest of any of claims 8 to 12, wherein each foot includes a removable cap. 14. The rest of any of claims 1 to 13, wherein each leg is comprised of composite materials. 15. The rest of any of claims 1 to 14, wherein the legs are each pivotably coupled to the support body about a respective pivot between a first, expanded or in use configuration and a collapsed configuration. 16. The rest of claim 15, wherein the legs each include a pivot connector at one, upper end thereof which is pivotably coupled to the support body. 17. The rest of claim 16, wherein the support body includes first and second magnetic elements, each adjacent and to a respective outer side of one the pivots, and the pivot connectors each include a magnetic element, whereby the magnetic element of the respective leg is attracted to the respective magnetic element of the support body, thereby maintaining the legs in the expanded, in use configuration during use. 18. The rest of any of claims 1 to 17, wherein the magnetic element of the fixture coupling comprises a magnet. 19. The rest of claim 18, wherein the magnetic element of the fixture coupling is provided as a magnet pair which is located to opposite sides of an axis of rotation of the stand. 20. The rest of any of claims 1 to 19, wherein the fixture coupling and the stand coupling each comprise circular or near-circular sections. 21. The rest of any of claims 1 to 20, wherein the rest fixture further comprises an attachment body which is in use fixed to the object. 22. The rest of claim 21, wherein the attachment body comprises a plate which is attached by a fixing to the object. 23. The rest of claim 22, wherein the attachment body includes an aperture through which the fixing is made to the object. 24. The rest of any of claims 1 to 23, wherein the rest fixture is integrated into the object. 25. 
The rest of any of claims 1 to 24, wherein the fixture coupling on the rest stand comprises one of a male projection or coupling or a female recess or coupling and the stand coupling on the rest fixture comprises the other of a female recess or coupling or a male projection or coupling. 26. The rest of claim 25, wherein the male projection or coupling or the female recess or coupling presents a part-spherical surface. 27. The rest of claim 25, wherein the male projection or coupling and the female recess or coupling present part-spherical surfaces. 28. A rest stand for attachment to an object and resting on a surface, wherein the rest stand comprises a support body, at least one leg which is supported by the support body and a fixture coupling which includes a magnetic element for removable attachment to a stand coupling on the object. 29. The rest stand of claim 28, wherein the object is a rifle. 30. The rest stand of claim 28, wherein the object is a camera. 31. The rest stand of claim 28, wherein the object is a scope. 32. The rest stand of claim 28, wherein the object is a monocular or binoculars. 33. The rest stand of any of claims 28 to 32, comprising two legs. 34. The rest stand of any of claims 28 to 32, comprising three legs. 35. The rest stand of any of claim 28 or 34, wherein each leg comprises at least one leg element and a foot. 36. The rest stand of claim 35, wherein each leg comprises a plurality of leg elements, which allows for the legs to have different lengths. 37. The rest stand of claim 36, wherein the leg elements are connected by a screw or magnetic coupling. 38. The rest stand of claim 36, wherein the leg elements are telescopic or continuously extendable. 39. The rest stand of any of claims 35 to 38, wherein each foot is connected to the respective leg element by a screw or magnetic coupling. 40. The rest stand of any of claims 35 to 39, wherein each foot includes a removable cap. 41. 
The rest stand of any of claims 28 to 40, wherein each leg is comprised of composite materials. 42. The rest stand of any of claims 28 to 41, wherein the legs are each pivotably coupled to the support body about a respective pivot between a first, expanded or in use configuration and a collapsed configuration. 43. The rest stand of claim 42, wherein the legs each include a pivot connector at one, upper end thereof which is pivotably coupled to the support body. 44. The rest stand of claim 43, wherein the support body includes first and second magnetic elements, each adjacent and to a respective outer side of one the pivots, and the pivot connectors each include a magnetic element, whereby the magnetic element of the respective leg is attracted to the respective magnetic element of the support body, thereby maintaining the legs in the expanded, in use configuration during use. 45. The rest stand of any of claims 28 to 44, wherein the magnetic element of the fixture coupling comprises a magnet. 46. The rest stand of claim 45, wherein the magnetic element of the fixture coupling is provided as a magnet pair which is located to opposite sides of an axis of rotation of the stand. 47. The rest stand of any of claims 28 to 46, wherein the fixture coupling comprises a circular or near-circular section, providing for rotation thereabout. 48. The rest stand of any of claims 28 to 47, wherein the fixture coupling comprises one of a male projection or coupling or a female recess or coupling. 49. The rest stand of claim 48, wherein the one of a male projection or coupling or a female recess or coupling presents a part-spherical surface. 50. A rest fixture which is in use attached to an object and provides for removable attachment of a rest stand, wherein the rest fixture comprises a stand coupling which includes a magnetic element for removable attachment of a rest stand. 51. The rest fixture of claim 50, wherein the object is a rifle. 52. 
The rest fixture of claim 50, wherein the object is a camera. 53. The rest fixture of claim 50, wherein the object is a scope. 54. The rest fixture of claim 50, wherein the object is a monocular or binoculars. 55. The rest fixture of any of claims 50 to 54, further comprising an attachment body which is in use fixed to the object. 56. The rest fixture of claim 55, wherein the attachment body comprises a plate which is attached by a fixing to the object. 57. The rest fixture of claim 56, wherein the attachment body includes an aperture through which the fixing is made to the object. 58. The rest fixture of claim 57, wherein the attachment body is integrated into the object. 59. The rest fixture of any of claims 50 to 58, wherein the stand coupling comprises a circular or near-circular section, providing for rotation thereabout. 60. The rest fixture of any of claims 50 to 59, wherein the stand coupling comprises one of a female recess or coupling or a male projection or coupling. 61. The rest fixture of claim 60, wherein the one of a female recess or coupling or a male projection or coupling presents a part-spherical surface. 62. A rifle comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 63. A camera comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 64. A scope comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 65. A monocular or binoculars comprising the rest of any of claims 1 to 27, the rest stand of any of claims 28 to 49 or the rest fixture of any of claims 50 to 61. 66. 
A rifle rest, comprising: a rifle fixture which is in use attached to a rifle, wherein the rifle fixture comprises a stand coupling which includes a magnetic element for removable attachment of a rest stand; and a rest stand for resting on a surface, wherein the rest stand comprises a support body, first and second legs which are supported by the support body and a rifle coupling which includes a magnetic element for removable attachment to the stand coupling on the rifle. 67. A rest substantially as hereinbefore described with reference to the accompanying drawings. 68. A rest stand substantially as hereinbefore described with reference to the accompanying drawings. 69. A rest fixture substantially as hereinbefore described with reference to the accompanying drawings. 70. A rifle rest substantially as hereinbefore described with reference to the accompanying drawings.
TechCenter: 2,600
Unnamed: 0: 9,943
level_0: 9,943
ApplicationNumber: 13,282,369
ArtUnit: 2,619
An invention is provided for affording a real-time three-dimensional interactive environment using a depth sensing device. The invention includes obtaining depth values indicating distances from one or more physical objects in a physical scene to a depth sensing device. The depth sensing device is configurable to be maintained at a particular depth range defined by a plane so that objects between the particular depth range and the depth sensing device are processed by the depth sensing device, wherein the particular depth range establishes active detection by the depth sensing device, as depth values of objects placed through the particular depth range and toward the depth sensing device are detected and depth values of objects placed beyond the particular depth range are not detected. The objects placed through the particular depth range are rendered and displayed in a virtual scene based on geometric characteristics of the object itself.
1. A computer implemented method having access to memory, the method providing a real-time three-dimensional interactive environment, comprising the operations of: obtaining depth values indicating distances from one or more physical objects in a physical scene to a depth sensing device, the depth sensing device configurable to be maintained at a particular depth range defined by a plane so that objects between the particular depth range and the depth sensing device are processed by the depth sensing device, wherein the particular depth range establishes active detection by the depth sensing device, as depth values of objects placed through the particular depth range and toward the depth sensing device are detected and depth values of objects placed beyond the particular depth range are not detected, and the objects placed through the particular depth range are rendered and displayed in a virtual scene based on geometric characteristics of the object itself. 2. A method as recited in claim 1, further comprising, initiating tracking of the objects when the objects are placed through the particular depth range and toward the depth sensing device, and terminating tracking of the objects when the objects are placed beyond the particular depth range. 3. A method as recited in claim 1, further comprising, inserting at least one virtual object into the virtual scene after obtaining the depth values, the virtual object being computer-generated and configured to be inserted within and beyond the particular depth range. 4. A method as recited in claim 3, further comprising, detecting an interaction between only objects placed through the particular depth range and the virtual object based on coordinates of the virtual object and the obtained depth values of the objects placed through the particular depth range. 5. A method as recited in claim 1, wherein the depth sensing device is a depth camera using controlled infrared lighting. 6. 
A method as recited in claim 1, further comprising, estimating three-dimensional volume information for each physical object within the particular depth range based on the obtained depth values. 7. A computer program embodied on a computer readable medium for providing a real-time three-dimensional interactive environment, comprising: program instructions that obtain depth values indicating distances from one or more physical objects in a physical scene to a depth sensing device, the depth sensing device configurable to be maintained at a particular depth range defined by a plane so that objects between the particular depth range and the depth sensing device are processed by the depth sensing device, wherein the particular depth range establishes active detection by the depth sensing device, as depth values of objects placed through the particular depth range and toward the depth sensing device are detected and depth values of objects placed beyond the particular depth range are not detected, and the objects placed through the particular depth range are rendered and displayed in a virtual scene based on geometric characteristics of the object itself. 8. A computer program as recited in claim 7, further comprising, program instructions that initiate tracking of the objects when the objects are placed through the particular depth range and toward the depth sensing device, and terminate tracking of the objects when the objects are placed beyond the particular depth range. 9. A computer program as recited in claim 7, further comprising, program instructions that insert at least one virtual object into the virtual scene after obtaining the depth values, the virtual object being computer-generated and configured to be inserted within and beyond the particular depth range. 10. 
A computer program as recited in claim 9, further comprising, program instructions that detect an interaction between only objects placed through the particular depth range and the virtual object based on coordinates of the virtual object and the obtained depth values of the objects placed through the particular depth range. 11. A computer program as recited in claim 7, wherein the depth sensing device is a depth camera using controlled infrared lighting. 12. A computer program as recited in claim 7, further comprising, program instructions that estimate three-dimensional volume information for each physical object within the particular depth range based on the obtained depth values. 13. A system for providing a real-time three-dimensional interactive environment, comprising: a depth sensing device capable of obtaining depth values indicating distances from one or more physical objects in a physical scene to a depth sensing device, the depth sensing device configurable to be maintained at a particular depth range defined by a plane so that objects between the particular depth range and the depth sensing device are processed by the depth sensing device, wherein the particular depth range establishes active detection by the depth sensing device, as depth values of objects placed through the particular depth range and toward the depth sensing device are detected and depth values of objects placed beyond the particular depth range are not detected; and a console having logic configured to render and display the objects placed through the particular depth range in a virtual scene based on geometric characteristics of the object itself. 14. A system as recited in claim 13, further comprising, logic that initiates tracking of the objects when the objects are placed through the particular depth range and toward the depth sensing device, and terminates tracking of the objects when the objects are placed beyond the particular depth range. 15. 
A system as recited in claim 13, further comprising, logic that inserts at least one virtual object into the virtual scene after obtaining the depth values, the virtual object being computer-generated and configured to be inserted within and beyond the particular depth range. 16. A system as recited in claim 15, further comprising, logic that detects an interaction between only objects placed through the particular depth range and the virtual object based on coordinates of the virtual object and the obtained depth values of the objects placed through the particular depth range. 17. A system as recited in claim 13, wherein the depth sensing device is a depth camera using controlled infrared lighting. 18. A system as recited in claim 13, further comprising, logic that estimates three-dimensional volume information for each physical object within the particular depth range based on the obtained depth values.
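The claimed depth gating amounts to a simple mask over a depth map: values placed through the particular depth range (i.e., nearer than the plane defining it) are detected and processed, while values beyond it are ignored. A minimal NumPy sketch of that rule, with illustrative function and threshold names that are not from the application itself:

```python
import numpy as np

def gate_depth(depth_map, max_depth):
    """Keep only depth values within the active detection range.

    Pixels at or beyond max_depth (the plane defining the depth
    range) are masked out and would not be tracked or rendered.
    """
    mask = depth_map < max_depth
    gated = np.where(mask, depth_map, np.nan)  # beyond-range values dropped
    return gated, mask

# Example: a 2x3 depth map (metres) with the active range set to 1.5 m
depth = np.array([[0.8, 2.0, 1.2],
                  [3.5, 0.5, 1.6]])
gated, mask = gate_depth(depth, 1.5)
print(int(mask.sum()))  # pixels inside the active range -> 3
```

Tracking initiation and termination (claim 2) would then follow from the mask: an object starts being tracked when its pixels first appear inside the mask and stops when they leave it.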
TechCenter: 2,600
Unnamed: 0: 9,944
level_0: 9,944
ApplicationNumber: 15,002,175
ArtUnit: 2,621
A multi-view display system that permits viewers to individually interact therewith to communicate commands or viewing preferences is disclosed. Methods in accordance with the present teachings enable a multi-view display to deliver a unique content stream to each of plural viewers, based on the viewers' interactions with the multi-view display system, wherein the viewers are not in fixed locations.
1. A method for operating a system including a multi-view display, wherein the method comprises: detecting a presence and determining a location of a plurality of viewers, including a first viewer and a second viewer, in a viewing region of the multi-view display, wherein a location of each viewer in the viewing region at any moment defines, for each viewer, a personal viewing space; detecting a first interaction of the first viewer with the system and a second interaction of the second viewer with the system; generating first content based on the first interaction and second content based on the second interaction; displaying, via the multi-view display, the first content to the first viewer and the second content to the second viewer, wherein the first content is viewable only in the first viewer's personal viewing space and the second content is viewable only in the second viewer's personal viewing space. 2. The method of claim 1 further comprising associating the first interaction with the first viewer and the second interaction with the second viewer. 3. The method of claim 1 further comprising updating the location of the first viewer and the second viewer. 4. The method of claim 1 wherein the location of viewers is determined via a sensing system. 5. The method of claim 4 wherein the sensing system comprises an imaging device. 6. The method of claim 5 wherein the sensing system comprises a passive trackable object. 7. The method of claim 4 wherein the sensing system comprises an active trackable object. 8. The method of claim 2 wherein associating the first interaction with the first viewer further comprises associating the first interaction with a viewer-provided communications device in the possession of the first viewer. 9. 
The method of claim 1 wherein detecting a first interaction further comprises capturing, via a sensing system, gestures performed by the first viewer, wherein the gestures represent at least one of a command and a preference of the first viewer pertaining to content. 10. The method of claim 9 wherein generating first content based on the first interaction further comprises interpreting the gestures to determine the command or preference pertaining to content. 11. The method of claim 9 and further wherein the first gesture comprises the first viewer's presence in a sequence of locations. 12. The method of claim 11 wherein interpretation of the first gesture is based on an order in which locations in the sequence are visited by the first viewer. 13. The method of claim 11 wherein interpretation of the first gesture is based on specific locations in the sequence. 14. The method of claim 9 wherein the sensing system comprises a passive trackable object and the gesture comprises moving the passive trackable object. 15. The method of claim 1 wherein the first content is associated with a first location and the second content is associated with a second location, the method further comprising: displaying, on the multi-view display, the first content for viewing by the first viewer when the first viewer is at the first location; and displaying, on the multi-view display, second content for viewing by the second viewer when the second viewer is at the second location. 16. The method of claim 15 wherein when the first viewer moves from the first location to the second location, the method further comprises displaying, on the multi-view display, third content for viewing by the first viewer, wherein the third content pertains to the first content and the second content. 17. The method of claim 15 wherein the first content pertains to a first product. 18. The method of claim 17 wherein the first product is situated proximal to the first location. 19. 
The method of claim 16 wherein the first content pertains to a first product and the second content pertains to a second product. 20. The method of claim 19 wherein the third content is a comparison of the first product and the second product to one another. 21. The method of claim 1 wherein the first interaction is detected via a sensing system. 22. The method of claim 21 wherein the sensing system comprises a plurality of microphones. 23. The method of claim 1 and further wherein the first interaction is a verbal command pertaining to content. 24. The method of claim 1 and further comprising: receiving, at a first user interface, first information from the first viewer and second information from the second viewer, wherein the first information is different than the second information; generating first content based on the first information and second content based on the second information; and displaying, on the multi-view-display for viewing at a respective personal viewing space of the first viewer and the second viewer, the first content and the second content. 25. The method of claim 1 and further comprising: receiving, for at least some of the viewers of the plurality thereof, respective preference information, wherein the respective preference information is received at plural user interfaces, wherein each one of said some viewers is uniquely associated with one of the plural user interfaces; generating content based on the received preference information; and displaying, on the multi-view-display for viewing at a respective personal viewing space of each of said some viewers, the content based on the respective preference information. 26. The method of claim 24 wherein the user interface comprises a microphone. 27. The method of claim 25 wherein the user interfaces each comprise a microphone. 28. The method of claim 25 wherein the user interface appears in a display screen of a viewer-provided communications device. 29. 
The method of claim 25 and further comprising interacting with the system via the viewer-provided communications device, wherein interactions are selected from the group consisting of navigating content, downloading second content pertaining to content displayed on the multi-view display, and tagging content. 30. A system comprising: a multi-view display; a sensing system that: a) detects a presence of viewers in a viewing region of the multi-view display, and b) captures interactions of the viewers with the system; and one or more processors that collectively: a) associate the interactions with individual ones of the viewers, and b) update, in conjunction with the sensing system, a location of at least some of the viewers, as the viewers move through the viewing region; and c) command the multi-view display to display content related to the interactions to the associated viewer, wherein the displayed content is viewable only by the associated viewer. 31. The system of claim 30 wherein the location of said some viewers is updated only when viewers interact with the system.
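The method's core loop (detect viewers and their locations, associate each interaction with a viewer, and show each viewer only the content generated from that viewer's own interactions) can be caricatured with a mapping from tracked viewers to their current content streams. All class, method, and content names below are illustrative assumptions, not the application's implementation:

```python
class MultiViewRouter:
    """Toy model of the claimed method: each viewer's personal
    viewing space shows only content generated from that viewer's
    own interactions with the system."""

    def __init__(self):
        self.locations = {}   # viewer_id -> (x, y) in the viewing region
        self.content = {}     # viewer_id -> current content string

    def update_location(self, viewer_id, xy):
        # Sensing system detects presence / updates location
        self.locations[viewer_id] = xy

    def interact(self, viewer_id, interaction):
        # Associate the interaction with the viewer, generate content
        self.content[viewer_id] = f"content for {interaction!r}"

    def frame(self):
        # What the multi-view display emits: one stream per
        # personal viewing space, viewable only by that viewer
        return {vid: self.content.get(vid, "idle")
                for vid in self.locations}

router = MultiViewRouter()
router.update_location("viewer1", (0, 1))
router.update_location("viewer2", (3, 1))
router.interact("viewer1", "gesture:zoom")
print(router.frame())
```

A real multi-view display would additionally need the optics to restrict each stream to its viewer's angular position; the sketch only models the association and routing logic of claims 1-3.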
2,600
9,945
9,945
15,870,115
2,651
An input signal is provided with a low microphone noise in a hearing apparatus. The microphone noise in the input signal of the hearing apparatus is reduced, by the input signal being filtered by a Wiener filter, if a noise power determined at the input signal is smaller than a predetermined limit value. The Wiener filter is however deactivated, if the noise power is greater than the limit value or equal to the limit value.
1. A method for reducing inherent microphone noise generated independently of ambient noise in an input signal of a hearing apparatus, which comprises the steps of: filtering the input signal, received by a microphone of the hearing apparatus, via a Wiener filter if a noise power determined for the input signal is smaller than a predetermined limit value for assisting in reducing the inherent microphone noise; and deactivating the Wiener filter if the noise power is greater than the predetermined limit value or equal to the predetermined limit value for assisting in reducing the inherent microphone noise. 2. The method according to claim 1, which further comprises, for noise power-dependent deactivation, weighting an attenuation of the Wiener filter acting on the input signal with a weighting factor, which is a function of the noise power. 3. The method according to claim 2, wherein the function forms a gradual transition between a completely active attenuation and a completely deactivated attenuation. 4. The method according to claim 1, which further comprises limiting the noise power to a predetermined highest value. 5. The method according to claim 1, which further comprises estimating the noise power for at least one signal part of the input signal on a basis of the signal part according to a statistical estimation method. 6. The method according to claim 1, which further comprises determining the noise power for at least one signal part of the input signal on a basis of a characteristic microphone noise curve. 7. The method according to claim 1, which further comprises defining the predetermined limit value on a basis of a characteristic curve of a microphone. 8. The method according to claim 1, which further comprises limiting attenuation of the Wiener filter acting on the input signal to a predetermined maximum attenuation value with an active Wiener filter. 9. 
The method according to claim 1, which further comprises forming the input signal from a plurality of microphone signals by means of a beam-former, in which a directional effect can be set with an aid of a directional parameter, and when determining the noise power, the input signal is initially scaled in dependence on a current value of the directional parameter. 10. The method according to claim 9, which further comprises back-scaling the noise power in dependence on the current value of the directional parameter. 11. The method according to claim 9, which further comprises limiting an attenuation of the Wiener filter acting on the input signal to a highest value in dependence on the current value of the directional parameter. 12. The method according to claim 9, wherein the predetermined limit value is dependent on the current value of the directional parameter. 13. The method according to claim 3, which further comprises forming the gradual transition according to a ramp function or a tangens hyperbolicus function. 14. A hearing apparatus, comprising: at least one microphone; and a facility for reducing inherent microphone noise generated independently of ambient noise and receiving signals from said at least one microphone, said facility for reducing said microphone noise having a Wiener filter and an estimation facility coupled to said Wiener filter for determining an estimated value for a noise power, wherein an input signal can be subjected to an attenuation by means of said Wiener filter for generating a processed input signal and a value of the attenuation can be determined on a basis of the estimated value for the noise power, said facility for reducing said microphone noise is set up to monitor the estimated value for the noise power and to deactivate said Wiener filter, if the estimated value is greater than a predetermined limit value. 15. 
The hearing apparatus according to claim 14, further comprising a beam-former; and wherein said at least one microphone is one of a plurality of microphones sending the signals to said facility for reducing the microphone noise via said beam-former, by means of which the input signal can be generated from microphone signals of said microphone for said facility for reducing the microphone noise and in which a directional effect can herewith be set with an aid of a directional parameter, wherein said facility for reducing the microphone noise is set up to reduce the microphone noise, to determine the estimated value by means of said estimation facility for the noise power, and to scale the input signal in dependence on a value of the directional parameter when determining the noise power.
An input signal is provided with a low microphone noise in a hearing apparatus. The microphone noise in the input signal of the hearing apparatus is reduced, by the input signal being filtered by a Wiener filter, if a noise power determined at the input signal is smaller than a predetermined limit value. The Wiener filter is however deactivated, if the noise power is greater than the limit value or equal to the limit value.1. A method for reducing inherent microphone noise generated independently of ambient noise in an input signal of a hearing apparatus, which comprises the steps of: filtering the input signal, received by a microphone of the hearing apparatus, via a Wiener filter if a noise power determined for the input signal is smaller than a predetermined limit value for assisting in reducing the inherent microphone noise; and deactivating the Wiener filter if the noise power is greater than the predetermined limit value or equal to the predetermined limit value for assisting in reducing the inherent microphone noise. 2. The method according to claim 1, which further comprises, for noise power-dependent deactivation, weighting an attenuation of the Wiener filter acting on the input signal with a weighting factor, which is a function of the noise power. 3. The method according to claim 2, wherein the function forms a gradual transition between a completely active attenuation and a completely deactivated attenuation. 4. The method according to claim 1, which further comprises limiting the noise power to a predetermined highest value. 5. The method according to claim 1, which further comprises estimating the noise power for at least one signal part of the input signal on a basis of the signal part according to a statistical estimation method. 6. The method according to claim 1, which further comprises determining the noise power for at least one signal part of the input signal on a basis of a characteristic microphone noise curve. 7. 
The method according to claim 1, which further comprises defining the predetermined limit value on a basis of a characteristic curve of a microphone. 8. The method according to claim 1, which further comprises limiting attenuation of the Wiener filter acting on the input signal to a predetermined maximum attenuation value with an active Wiener filter. 9. The method according to claim 1, which further comprises forming the input signal from a plurality of microphone signals by means of a beam-former, in which a directional effect can be set with an aid of a directional parameter, and when determining the noise power, the input signal is initially scaled in dependence on a current value of the directional parameter. 10. The method according to claim 9, which further comprises back-scaling the noise power in dependence on the current value of the directional parameter. 11. The method according to claim 9, which further comprises limiting an attenuation of the Wiener filter acting on the input signal to a highest value in dependence on the current value of the directional parameter. 12. The method according to claim 9, wherein the predetermined limit value is dependent on the current value of the directional parameter. 13. The method according to claim 3, which further comprises forming the gradual transition according to a ramp function or a tangens hyperbolicus function. 14. 
A hearing apparatus, comprising: at least one microphone; and a facility for reducing inherent microphone noise generated independently of ambient noise and receiving signals from said at least one microphone, said facility for reducing said microphone noise having a Wiener filter and an estimation facility coupled to said Wiener filter for determining an estimated value for a noise power, wherein an input signal can be subjected to an attenuation by means of said Wiener filter for generating a processed input signal and a value of the attenuation can be determined on a basis of the estimated value for the noise power, said facility for reducing said microphone noise is set up to monitor the estimated value for the noise power and to deactivate said Wiener filter, if the estimated value is greater than a predetermined limit value. 15. The hearing apparatus according to claim 14, further comprising a beam-former; and wherein said at least one microphone is one of a plurality of microphones sending the signals to said facility for reducing the microphone noise via said beam-former, by means of which the input signal can be generated from microphone signals of said microphone for said facility for reducing the microphone noise and in which a directional effect can herewith be set with an aid of a directional parameter, wherein said facility for reducing the microphone noise is set up to reduce the microphone noise, to determine the estimated value by means of said estimation facility for the noise power, and to scale the input signal in dependence on a value of the directional parameter when determining the noise power.
2,600
9,946
9,946
15,882,010
2,691
A handheld electronic device may be provided that contains a conductive housing and other conductive elements. The conductive elements may form an antenna ground plane. One or more antennas for the handheld electronic device may be formed from the ground plane and one or more associated antenna resonating elements. Transceiver circuitry may be connected to the resonating elements by transmission lines such as coaxial cables. Ferrules may be crimped to the coaxial cables. A bracket with extending members may be crimped over the ferrules to ground the coaxial cables to the housing and other conductive elements in the ground plane. The ground plane may contain an antenna slot. A dock connector and flex circuit may overlap the slot in a way that does not affect the resonant frequency of the slot. Electrical components may be isolated from the antenna using isolation elements such as inductors and resistors.
1. A wireless communications device comprising: wireless communications circuitry; a display that includes a glass element, wherein the glass element includes an opening; a button included in the opening; and a plurality of touch screen sensors, wherein at least one of the plurality of touch screen sensors is integrated into the display. 2. The wireless communications device of claim 1, wherein the glass element is a first glass element, and wherein the button includes a second glass element. 3. The wireless communications device of claim 1, wherein the plurality of touch screen sensors includes at least one touch screen sensor separate from the display. 4. The wireless communications device of claim 1, wherein at least two of the plurality of touch screen sensors are positioned on different surfaces of the wireless communications device. 5. The wireless communications device of claim 1, wherein the button is a mechanical button. 6. The wireless communications device of claim 1, wherein a first touch screen sensor of the plurality of touch screen sensors implements a first touch sensing technology, and wherein a second touch screen sensor of the plurality of touch screen sensors implements a second touch sensing technology. 7. A wireless communications device comprising: wireless communications circuitry configured to handle wireless signals; a display that includes a glass element, wherein the glass element includes a hole; and a plurality of touch screen sensors, wherein at least one of the plurality of touch screen sensors is integrated into the display. 8. The wireless communications device of claim 7, wherein the display includes a button. 9. The wireless communications device of claim 8, wherein the button is integrated with the glass element. 10. The wireless communications device of claim 7, wherein the plurality of touch screen sensors includes at least one touch screen sensor separate from the display. 11. 
The wireless communications device of claim 7, further comprising a button positioned within the hole. 12. The wireless communications device of claim 7, wherein at least two of the plurality of touch screen sensors are positioned on different surfaces of the wireless communications device. 13. The wireless communications device of claim 12, wherein a first touch screen sensor of the plurality of touch screen sensors implements a first touch sensing technology, and wherein a second touch screen sensor of the plurality of touch screen sensors implements a second touch sensing technology. 14. The wireless communications device of claim 7, further comprising a speaker positioned adjacent to the hole. 15. A wireless communications device comprising: wireless communications circuitry configured to handle wireless signals; a touch screen display that includes a glass element; and a button integrated into the glass element and configured to accept input commands. 16. The wireless communications device of claim 15, wherein the glass element includes an opening. 17. The wireless communications device of claim 16, wherein the button is a mechanical button that is positioned within the opening. 18. The wireless communications device of claim 15, further comprising a plurality of touch screen sensors. 19. The wireless communications device of claim 18, wherein a first touch screen sensor of the plurality of touch screen sensors implements a first touch sensing technology, and wherein a second touch screen sensor of the plurality of touch screen sensors implements a second touch sensing technology. 20. The wireless communications device of claim 19, wherein the plurality of touch screen sensors includes at least one touch screen sensor integrated into the touch screen display and at least one touch screen sensor separate from the touch screen display.
A handheld electronic device may be provided that contains a conductive housing and other conductive elements. The conductive elements may form an antenna ground plane. One or more antennas for the handheld electronic device may be formed from the ground plane and one or more associated antenna resonating elements. Transceiver circuitry may be connected to the resonating elements by transmission lines such as coaxial cables. Ferrules may be crimped to the coaxial cables. A bracket with extending members may be crimped over the ferrules to ground the coaxial cables to the housing and other conductive elements in the ground plane. The ground plane may contain an antenna slot. A dock connector and flex circuit may overlap the slot in a way that does not affect the resonant frequency of the slot. Electrical components may be isolated from the antenna using isolation elements such as inductors and resistors.1. A wireless communications device comprising: wireless communications circuitry; a display that includes a glass element, wherein the glass element includes an opening; a button included in the opening; and a plurality of touch screen sensors, wherein at least one of the plurality of touch screen sensors is integrated into the display. 2. The wireless communications device of claim 1, wherein the glass element is a first glass element, and wherein the button includes a second glass element. 3. The wireless communications device of claim 1, wherein the plurality of touch screen sensors includes at least one touch screen sensor separate from the display. 4. The wireless communications device of claim 1, wherein at least two of the plurality of touch screen sensors are positioned on different surfaces of the wireless communications device. 5. The wireless communications device of claim 1, wherein the button is a mechanical button. 6. 
The wireless communications device of claim 1, wherein a first touch screen sensor of the plurality of touch screen sensors implements a first touch sensing technology, and wherein a second touch screen sensor of the plurality of touch screen sensors implements a second touch sensing technology. 7. A wireless communications device comprising: wireless communications circuitry configured to handle wireless signals; a display that includes a glass element, wherein the glass element includes a hole; and a plurality of touch screen sensors, wherein at least one of the plurality of touch screen sensors is integrated into the display. 8. The wireless communications device of claim 7, wherein the display includes a button. 9. The wireless communications device of claim 8, wherein the button is integrated with the glass element. 10. The wireless communications device of claim 7, wherein the plurality of touch screen sensors includes at least one touch screen sensor separate from the display. 11. The wireless communications device of claim 7, further comprising a button positioned within the hole. 12. The wireless communications device of claim 7, wherein at least two of the plurality of touch screen sensors are positioned on different surfaces of the wireless communications device. 13. The wireless communications device of claim 12, wherein a first touch screen sensor of the plurality of touch screen sensors implements a first touch sensing technology, and wherein a second touch screen sensor of the plurality of touch screen sensors implements a second touch sensing technology. 14. The wireless communications device of claim 7, further comprising a speaker positioned adjacent to the hole. 15. A wireless communications device comprising: wireless communications circuitry configured to handle wireless signals; a touch screen display that includes a glass element; and a button integrated into the glass element and configured to accept input commands. 16. 
The wireless communications device of claim 15, wherein the glass element includes an opening. 17. The wireless communications device of claim 16, wherein the button is a mechanical button that is positioned within the opening. 18. The wireless communications device of claim 15, further comprising a plurality of touch screen sensors. 19. The wireless communications device of claim 18, wherein a first touch screen sensor of the plurality of touch screen sensors implements a first touch sensing technology, and wherein a second touch screen sensor of the plurality of touch screen sensors implements a second touch sensing technology. 20. The wireless communications device of claim 19, wherein the plurality of touch screen sensors includes at least one touch screen sensor integrated into the touch screen display and at least one touch screen sensor separate from the touch screen display.
2,600
9,947
9,947
15,213,964
2,621
System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. A hand gesture detection can be used as a replacement to a remote control wherein the various hand gestures control the various function of the controlled unit, such as a television set.
1. A device for displaying serially carried digital video data by a video display, for use with a Local Area Network (LAN) cable simultaneously carrying DC power and the serial digital video data over the same wires, the device comprising: a LAN connector for connecting to the LAN cable; a transceiver coupled to the LAN connector receiving the serial digital video data from the LAN cable; a video connector for connecting to the video display, the video connector coupled to the transceiver for displaying the serial digital video data by the video display; software and a processor to execute the software coupled to control the transceiver; and a single enclosure housing the LAN connector, the transceiver, the video connector, and the processor, wherein the transceiver and the processor are coupled to the LAN connector for being powered by the DC power. 2. The device according to claim 1, further comprising the video display, wherein the single enclosure further houses the video display. 3. The device according to claim 2, wherein the video display is coupled to the LAN connector for being powered by the DC power. 4. The device according to claim 2, wherein the video display comprises a silicon-based flat screen. 5. The device according to claim 4, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 6. The device according to claim 1, further operative for displaying High Definition (HD) by the video display, wherein the video connector is an HDMI (High-Definition Multimedia Interface) connector for displaying HD video by the video display. 7. The device according to claim 1, further comprising a power/data splitter having first, second, and third ports for passing the serial digital video data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 8. 
The device according to claim 7, wherein the third port is coupled to the processor for powering the processor from the DC power. 9. The device according to claim 7, wherein the DC power and the serial digital video data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital video data is carried in a frequency band above, and distinct from, the DC power. 10. The device according to claim 9, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 11. The device according to claim 9, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 12. The device according to claim 1, further operative for receiving and displaying television channels by the video display. 13. The device according to claim 12, further comprising, or consisting of, a television set. 14. The device according to claim 1, wherein the transceiver comprises a LAN transceiver. 15. The device according to claim 14, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 16. The device according to claim 15, wherein the LAN cable is according to, compatible with, or based on, EIA/TIA-568 or EIA/TIA-570. 17. The device according to claim 15, wherein the LAN cable is a structured wiring cable according to, compatible with, or based on, Category 3, 4, 5e, 6, 6e, or 7 cable. 18. The device according to claim 15, wherein the cable comprises Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP) wires. 19. The device according to claim 14, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 20. 
The device according to claim 1, wherein the DC power and the serial digital video data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 21. The device according to claim 1, wherein the serial digital video data comprises captured digital video data that is compressed according to a compression scheme. 22. The device according to claim 21, wherein the compression scheme is lossy or lossless type. 23. The device according to claim 21, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 24. The device according to claim 1, further for initiating and receiving telephone calls over a telephone network. 25. The device according to claim 24, wherein the telephone network is a cellular telephone network. 26. The device according to claim 25, further for initiating and receiving telephone calls over a cellular network, the device further comprising in the single enclosure: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 27. The device according to claim 26, further comprising, or consisting of, a cellular telephone device. 28. The device according to claim 26, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 29. 
The device according to claim 26, wherein the cellular modem is coupled to the video display for receiving information from the cellular network and displaying the received information on the video display. 30. A device for use with a Local Area Network (LAN) cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a digital video camera for capturing digital video data; a LAN connector for connecting to the cable; a transceiver coupled between the LAN connector and the digital video camera for transmitting the digital video data to the cable; an image processor coupled between the digital video camera and the transceiver for receiving the digital video data, for detecting an element in the digital video data, and for transmitting a signal to the LAN cable responsive to the element detection; and software and a processor to execute the software coupled to control the image processor, the transceiver, and the digital video camera; and a single enclosure housing the digital video camera, the processor, the LAN connector, the image processor, and the transceiver, wherein the device is powered by the DC power. 31. The device according to claim 30, further comprising a video display coupled to display the digital video data, wherein the single enclosure further houses the video display. 32. The device according to claim 31, wherein the video display is coupled to the LAN connector for being powered by the DC power. 33. The device according to claim 31, wherein the video display comprises a silicon-based flat screen. 34. The device according to claim 33, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 35. The device according to claim 31, further operative for displaying High Definition (HD) by the video display, wherein the video display is an HDMI (High-Definition Multimedia Interface) display for displaying HD video. 36. 
The device according to claim 30, further comprising a power/data splitter having first, second, and third ports for passing the digital video data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 37. The device according to claim 36, wherein the third port is coupled to the processor for powering the processor from the DC power. 38. The device according to claim 36, wherein the DC power and the digital video data are carried using Frequency Division/Domain Multiplexing (FDM), where the digital video data is carried in a frequency band above, and distinct from, the DC power. 39. The device according to claim 38, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 40. The device according to claim 38, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 41. The device according to claim 30, wherein the transceiver comprises a LAN transceiver. 42. The device according to claim 41, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 43. The device according to claim 42, wherein the LAN cable is according to, compatible with, or based on, EIA/TIA-568 or EIA/TIA-570. 44. The device according to claim 42, wherein the LAN cable is a structured wiring cable according to, compatible with, or based on, Category 3, 4, 5e, 6, 6e, or 7 cable. 45. The device according to claim 42, wherein the cable comprises Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP) wires. 46. 
The device according to claim 42, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 47. The device according to claim 30, wherein the DC power and the digital video data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 48. The device according to claim 30, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 49. The device according to claim 48, wherein the image sensor array is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 50. The device according to claim 30, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 51. The device according to claim 50, wherein the compression scheme is lossy or lossless type. 52. The device according to claim 50, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 53. The device according to claim 30, further for initiating and receiving telephone calls over a telephone network. 54. The device according to claim 53, wherein the telephone network is a cellular telephone network. 55. 
The device according to claim 54, further for initiating and receiving telephone calls over a cellular network, the device further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 56. The device according to claim 55, further comprising, or consisting of, a cellular telephone device. 57. The device according to claim 55, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 58. The device according to claim 30, wherein the element is a body part. 59. The device according to claim 58, wherein the body part is a human hand. 60. The device according to claim 59, wherein the element is a hand gesture. 61. The device according to claim 58, wherein the body part is a human face. 62. The device according to claim 30, further responsive to a change in a position of the detected element.
System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. A hand gesture detection can be used as a replacement to a remote control wherein the various hand gestures control the various function of the controlled unit, such as a television set.
1. A device for displaying serially carried digital video data by a video display, for use with a Local Area Network (LAN) cable simultaneously carrying DC power and the serial digital video data over the same wires, the device comprising: a LAN connector for connecting to the LAN cable; a transceiver coupled to the LAN connector receiving the serial digital video data from the LAN cable; a video connector for connecting to the video display, the video connector coupled to the transceiver for displaying the serial digital video data by the video display; software and a processor to execute the software coupled to control the transceiver; and a single enclosure housing the LAN connector, the transceiver, the video connector, and the processor, wherein the transceiver and the processor are coupled to the LAN connector for being powered by the DC power. 2. 
The device according to claim 1, further comprising the video display, wherein the single enclosure further houses the video display. 3. The device according to claim 2, wherein the video display is coupled to the LAN connector for being powered by the DC power. 4. The device according to claim 2, wherein the video display comprises a silicon-based flat screen. 5. The device according to claim 4, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 6. The device according to claim 1, further operative for displaying High Definition (HD) by the video display, wherein the video connector is an HDMI (High-Definition Multimedia Interface) connector for displaying HD video by the video display. 7. The device according to claim 1, further comprising a power/data splitter having first, second, and third ports for passing the serial digital video data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 8. The device according to claim 7, wherein the third port is coupled to the processor for powering the processor from the DC power. 9. The device according to claim 7, wherein the DC power and the serial digital video data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital video data is carried in a frequency band above, and distinct from, the DC power. 10. The device according to claim 9, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 11. The device according to claim 9, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 12. The device according to claim 1, further operative for receiving and displaying television channels by the video display. 
13. The device according to claim 12, further comprising, or consisting of, a television set. 14. The device according to claim 1, wherein the transceiver comprises a LAN transceiver. 15. The device according to claim 14, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 16. The device according to claim 15, wherein the LAN cable is according to, compatible with, or based on, EIA/TIA-568 or EIA/TIA-570. 17. The device according to claim 15, wherein the LAN cable is a structured wiring cable according to, compatible with, or based on, Category 3, 4, 5e, 6, 6e, or 7 cable. 18. The device according to claim 15, wherein the cable comprises Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP) wires. 19. The device according to claim 14, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 20. The device according to claim 1, wherein the DC power and the serial digital video data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 21. The device according to claim 1, wherein the serial digital video data comprises captured digital video data that is compressed according to a compression scheme. 22. The device according to claim 21, wherein the compression scheme is lossy or lossless type. 23. The device according to claim 21, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 24. The device according to claim 1, further for initiating and receiving telephone calls over a telephone network. 25. The device according to claim 24, wherein the telephone network is a cellular telephone network. 26. 
The device according to claim 25, further for initiating and receiving telephone calls over a cellular network, the device further comprising in the single enclosure: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 27. The device according to claim 26, further comprising, or consisting of, a cellular telephone device. 28. The device according to claim 26, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 29. The device according to claim 26, wherein the cellular modem is coupled to the video display for receiving information from the cellular network and displaying the received information on the video display. 30. 
A device for use with a Local Area Network (LAN) cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a digital video camera for capturing digital video data; a LAN connector for connecting to the cable; a transceiver coupled between the LAN connector and the digital video camera for transmitting the digital video data to the cable; an image processor coupled between the digital video camera and the transceiver for receiving the digital video data, for detecting an element in the digital video data, and for transmitting a signal to the LAN cable responsive to the element detection; and software and a processor to execute the software coupled to control the image processor, the transceiver, and the digital video camera; and a single enclosure housing the digital video camera, the processor, the LAN connector, the image processor, and the transceiver, wherein the device is powered by the DC power. 31. The device according to claim 30, further comprising a video display coupled to display the digital video data, wherein the single enclosure further houses the video display. 32. The device according to claim 31, wherein the video display is coupled to the LAN connector for being powered by the DC power. 33. The device according to claim 31, wherein the video display comprises a silicon-based flat screen. 34. The device according to claim 33, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 35. The device according to claim 31, further operative for displaying High Definition (HD) by the video display, wherein the video display is an HDMI (High-Definition Multimedia Interface) display for displaying HD video. 36. 
The device according to claim 30, further comprising a power/data splitter having first, second, and third ports for passing the digital video data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 37. The device according to claim 36, wherein the third port is coupled to the processor for powering the processor from the DC power. 38. The device according to claim 36, wherein the DC power and the digital video data are carried using Frequency Division/Domain Multiplexing (FDM), where the digital video data is carried in a frequency band above, and distinct from, the DC power. 39. The device according to claim 38, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 40. The device according to claim 38, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 41. The device according to claim 30, wherein the transceiver comprises a LAN transceiver. 42. The device according to claim 41, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 43. The device according to claim 42, wherein the LAN cable is according to, compatible with, or based on, EIA/TIA-568 or EIA/TIA-570. 44. The device according to claim 42, wherein the LAN cable is a structured wiring cable according to, compatible with, or based on, Category 3, 4, 5e, 6, 6e, or 7 cable. 45. The device according to claim 42, wherein the cable comprises Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP) wires. 46. 
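Claims 9-11 and 38-40 above describe carrying DC power and serial data over the same wires by Frequency Division/Domain Multiplexing: the splitter's low-pass branch recovers the DC power while its high-pass branch recovers the data band carried above it. A first-order RC model of the two splitter branches (the component value is an illustrative assumption; the patent specifies no values):

```python
import math

def lowpass_gain(f_hz, rc=1e-4):
    """|H| of a first-order low-pass (the power branch): passes DC, rejects the data band."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * f_hz * rc) ** 2)

def highpass_gain(f_hz, rc=1e-4):
    """|H| of a first-order high-pass (the data branch): blocks DC, passes the data band."""
    w = 2 * math.pi * f_hz * rc
    return w / math.sqrt(1.0 + w * w)

# DC power (f = 0) passes only to the low-pass port; a data carrier well
# above the cutoff frequency passes only to the high-pass port.
assert lowpass_gain(0.0) == 1.0 and highpass_gain(0.0) == 0.0
assert highpass_gain(10e6) > 0.99 and lowpass_gain(10e6) < 0.01
```

This is why the claims can require the data to be "in a frequency band above, and distinct from, the DC power": the two signals never overlap in frequency, so simple filters separate them.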
2,600
9,948
9,948
15,213,876
2,621
System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. A hand gesture detection can be used as a replacement to a remote control wherein the various hand gestures control the various function of the controlled unit, such as a television set.
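The abstract above describes averaging the locations of multiple detected faces and driving the display toward them with a linear feedback loop, where the detected face deviation from the optimum is the error to be corrected. A proportional-control sketch under those assumptions (the coordinate convention, gain, and field of view are hypothetical; the patent does not specify them):

```python
def average_location(faces):
    """Mean horizontal center of the detected face bounding boxes (x in [0, 1])."""
    xs = [(x0 + x1) / 2.0 for (x0, x1) in faces]
    return sum(xs) / len(xs)

def rotation_step(faces, current_angle_deg, gain=0.5, fov_deg=60.0):
    """One iteration of the linear feedback loop: the deviation of the
    averaged face location from the image center is the error, and the
    display is rotated by a proportional correction toward it."""
    error = average_location(faces) - 0.5          # deviation from optimum (centered)
    correction = gain * error * fov_deg            # map image-space error to an angle
    return current_angle_deg + correction

# Two faces left of center -> the loop rotates the display toward them.
faces = [(0.1, 0.2), (0.2, 0.4)]                   # (x0, x1) of each bounding box
angle = rotation_step(faces, current_angle_deg=0.0)
```

Iterating `rotation_step` as new frames arrive drives the error toward zero, which is the closed-loop behavior the abstract sketches.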
1. A device for use with a cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a flat screen for visually displaying information; a digital video camera for capturing digital video data, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the flat screen; a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports; a connector coupled to the first port for connecting to the cable; a transceiver coupled between the second port and the digital video camera for transmitting the digital video data to the cable; software and a processor to execute the software coupled to control the flat screen, the transceiver, and the digital video camera; and a single enclosure housing the flat screen, the digital video camera, the processor, the connector, the power/data splitter, and the transceiver. 2. The device according to claim 1, further for receiving and displaying television channels, wherein the flat screen is configured for displaying the television channels. 3. The device according to claim 1, further comprising, or consisting of, a television set. 4. The device according to claim 1, further operative to at least in part be powered from the DC power. 5. The device according to claim 4, wherein the third port is coupled to the flat screen for powering the flat screen from the DC power. 6. The device according to claim 1, wherein the transceiver is coupled to the flat screen for receiving information from the cable and for displaying the information on the flat screen. 7. 
The device according to claim 1, further comprising an image processor coupled to receive the digital video data from the digital video camera for applying an element detection algorithm to detect the element in the digital video data, and wherein the device responds to the element detection. 8. The device according to claim 1, wherein the digital video camera is positioned to capture a scene substantially in front of the flat screen. 9. The device according to claim 1, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 10. The device according to claim 9, wherein the image sensor array is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 11. The device according to claim 1, wherein the cable comprises a Local Area Network (LAN) cable, the connector comprises a LAN connector, and the transceiver comprises a LAN transceiver. 12. The device according to claim 11, wherein the LAN is an Ethernet based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 13. The device according to claim 12, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 14. The device according to claim 11, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 15. 
The device according to claim 1, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the cable. 16. The device according to claim 1, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 17. The device according to claim 16, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 18. The device according to claim 16, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 19. The device according to claim 1, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 20. The device according to claim 19, wherein the compression scheme is lossy or lossless type. 21. The device according to claim 19, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 22. The device according to claim 1, further for initiating and receiving telephone calls over a telephone network. 23. The device according to claim 22, wherein the telephone network is a cellular telephone network. 24. The device according to claim 23, further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 25. The device according to claim 24, further comprising, or consisting of, a cellular telephone device. 26. 
The device according to claim 24, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 27. The device according to claim 24, wherein the cellular modem is coupled to the flat screen for receiving information from the cellular network and displaying the received information on the flat screen. 28. The device according to claim 1, wherein the flat screen is silicon-based. 29. The device according to claim 28, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 30. The device according to claim 1, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video on the flat screen. 31. 
A device for use with a Local Area Network (LAN) cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a flat screen for visually displaying information; a digital video camera for capturing digital video data, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the flat screen; a LAN connector for connecting to the LAN cable; a transceiver coupled between the LAN connector and the digital video camera for transmitting the digital video data to the LAN cable; software and a processor to execute the software coupled to control the flat screen, the transceiver, and the digital video camera; and a single enclosure housing the flat screen, the digital video camera, the processor, the LAN connector, and the transceiver. 32. The device according to claim 31, further operative to at least in part be powered from the DC power. 33. The device according to claim 31, further comprising a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 34. The device according to claim 33, wherein the third port is coupled to the flat screen for powering the flat screen from the DC power. 35. The device according to claim 33, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the LAN cable. 36. The device according to claim 33, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 37. 
The device according to claim 36, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 38. The device according to claim 36, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 39. The device according to claim 31, further operative for receiving and displaying television channels, wherein the flat screen is configured for displaying the television channels. 40. The device according to claim 39, further comprising, or consisting of, a television set. 41. The device according to claim 31, wherein the transceiver is coupled to the flat screen for receiving information from the LAN cable and for displaying the information on the flat screen. 42. The device according to claim 31, further comprising an image processor coupled to receive the digital video data from the digital video camera for applying an element detection algorithm to detect the element in the digital video data, and wherein the device responds to the element detection. 43. The device according to claim 31, wherein the digital video camera is positioned to capture a scene substantially in front of the flat screen. 44. The device according to claim 31, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 45. The device according to claim 44, wherein the image sensor array is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 46. The device according to claim 31, wherein the transceiver comprises a LAN transceiver. 
47. The device according to claim 46, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 48. The device according to claim 31, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 49. The device according to claim 31, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 50. The device according to claim 31, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 51. The device according to claim 50, wherein the compression scheme is lossy or lossless type. 52. The device according to claim 50, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 53. The device according to claim 31, further for initiating and receiving telephone calls over a telephone network. 54. The device according to claim 53, wherein the telephone network is a cellular telephone network. 55. The device according to claim 54, further for initiating and receiving telephone calls over a cellular network, the device further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 56. The device according to claim 55, further comprising, or consisting of, a cellular telephone device. 57. 
The device according to claim 55, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 58. The device according to claim 55, wherein the cellular modem is coupled to the flat screen for receiving information from the cellular network and displaying the received information on the flat screen. 59. The device according to claim 31, wherein the flat screen is silicon-based. 60. The device according to claim 59, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 61. The device according to claim 31, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video on the flat screen.
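The abstract describes hand gesture detection as a replacement for a remote control, with the various hand gestures controlling the various functions of the controlled unit, such as a television set. A minimal dispatch sketch of that mapping (the gesture labels and command names are illustrative assumptions, not from the patent):

```python
# Hypothetical mapping from a detected hand-gesture label to a TV command.
GESTURE_COMMANDS = {
    "open_palm": "power_toggle",
    "swipe_up": "volume_up",
    "swipe_down": "volume_down",
    "swipe_left": "channel_down",
    "swipe_right": "channel_up",
}

def control_signal(detected_gesture):
    """Translate a detected gesture into the control signal sent to the
    controlled unit; unrecognized gestures produce no command."""
    return GESTURE_COMMANDS.get(detected_gesture)

# Example: a detected upward swipe raises the volume, an unknown gesture is ignored.
assert control_signal("swipe_up") == "volume_up"
assert control_signal("fist") is None
```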
System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. A hand gesture detection can be used as a replacement to a remote control wherein the various hand gestures control the various function of the controlled unit, such as a television set.1. 
A device for use with a cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a flat screen for visually displaying information; a digital video camera for capturing digital video data, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the flat screen; a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports; a connector coupled to the first port for connecting to the cable; a transceiver coupled between the second port and the digital video camera for transmitting the digital video data to the cable; software and a processor to execute the software coupled to control the flat screen, the transceiver, and the digital video camera; and a single enclosure housing the flat screen, the digital video camera, the processor, the connector, the power/data splitter, and the transceiver. 2. The device according to claim 1, further for receiving and displaying television channels, wherein the flat screen is configured for displaying the television channels. 3. The device according to claim 1, further comprising, or consisting of, a television set. 4. The device according to claim 1, further operative to at least in part be powered from the DC power. 5. The device according to claim 4, wherein the third port is coupled to the flat screen for powering the flat screen from the DC power. 6. The device according to claim 1, wherein the transceiver is coupled to the flat screen for receiving information from the cable and for displaying the information on the flat screen. 7. 
The device according to claim 1, further comprising an image processor coupled to receive the digital video data from the digital video camera for applying an element detection algorithm to detect the element in the digital video data, and wherein the device responds to the element detection. 8. The device according to claim 1, wherein the digital video camera is positioned to capture a scene substantially in front of the flat screen. 9. The device according to claim 1, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 10. The device according to claim 9, wherein the image sensor array is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 11. The device according to claim 1, wherein the cable comprises a Local Area Network (LAN) cable, the connector comprises a LAN connector, and the transceiver comprises a LAN transceiver. 12. The device according to claim 11, wherein the LAN is an Ethernet based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 13. The device according to claim 12, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 14. The device according to claim 11, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 15. 
The device according to claim 1, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the cable. 16. The device according to claim 1, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 17. The device according to claim 16, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 18. The device according to claim 16, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 19. The device according to claim 1, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 20. The device according to claim 19, wherein the compression scheme is lossy or lossless type. 21. The device according to claim 19, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 22. The device according to claim 1, further for initiating and receiving telephone calls over a telephone network. 23. The device according to claim 22, wherein the telephone network is a cellular telephone network. 24. The device according to claim 23, further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 25. The device according to claim 24, further comprising, or consisting of, a cellular telephone device. 26. 
The device according to claim 24, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 27. The device according to claim 24, wherein the cellular modem is coupled to the flat screen for receiving information from the cellular network and displaying the received information on the flat screen. 28. The device according to claim 1, wherein the flat screen is silicon-based. 29. The device according to claim 28, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 30. The device according to claim 1, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video on the flat screen. 31. 
A device for use with a Local Area Network (LAN) cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a flat screen for visually displaying information; a digital video camera for capturing digital video data, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the flat screen; a LAN connector for connecting to the LAN cable; a transceiver coupled between the LAN connector and the digital video camera for transmitting the digital video data to the LAN cable; software and a processor to execute the software coupled to control the flat screen, the transceiver, and the digital video camera; and a single enclosure housing the flat screen, the digital video camera, the processor, the LAN connector, and the transceiver. 32. The device according to claim 31, further operative to at least in part be powered from the DC power. 33. The device according to claim 31, further comprising a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 34. The device according to claim 33, wherein the third port is coupled to the flat screen for powering the flat screen from the DC power. 35. The device according to claim 33, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the LAN cable. 36. The device according to claim 33, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 37. 
The device according to claim 36, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 38. The device according to claim 36, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 39. The device according to claim 31, further operative for receiving and displaying television channels, wherein the flat screen is configured for displaying the television channels. 40. The device according to claim 39, further comprising, or consisting of, a television set. 41. The device according to claim 31, wherein the transceiver is coupled to the flat screen for receiving information from the LAN cable and for displaying the information on the flat screen. 42. The device according to claim 31, further comprising an image processor coupled to receive the digital video data from the digital video camera for applying an element detection algorithm to detect the element in the digital video data, and wherein the device responds to the element detection. 43. The device according to claim 31, wherein the digital video camera is positioned to capture a scene substantially in front of the flat screen. 44. The device according to claim 31, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 45. The device according to claim 44, wherein the image sensor array is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 46. The device according to claim 31, wherein the transceiver comprises a LAN transceiver. 
47. The device according to claim 46, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 48. The device according to claim 31, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 49. The device according to claim 31, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 50. The device according to claim 31, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 51. The device according to claim 50, wherein the compression scheme is lossy or lossless type. 52. The device according to claim 50, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 53. The device according to claim 31, further for initiating and receiving telephone calls over a telephone network. 54. The device according to claim 53, wherein the telephone network is a cellular telephone network. 55. The device according to claim 54, further for initiating and receiving telephone calls over a cellular network, the device further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 56. The device according to claim 55, further comprising, or consisting of, a cellular telephone device. 57. 
The device according to claim 55, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 58. The device according to claim 55, wherein the cellular modem is coupled to the flat screen for receiving information from the cellular network and displaying the received information on the flat screen. 59. The device according to claim 31, wherein the flat screen is silicon-based. 60. The device according to claim 59, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 61. The device according to claim 31, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video on the flat screen.
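Claims 16-18 describe the power/data splitter as frequency-division multiplexing: the DC power sits at (near) zero frequency and passes the low-pass branch to the third port, while the serial data occupies a band above it and passes the high-pass branch to the second port. A minimal numeric sketch of that separation, assuming an invented signal model (the 48 V level, data tone, and filter constant are illustrative assumptions, not values from the patent):

```python
# Sketch only: separate a DC power level from a high-frequency data component
# carried on the same line, as the claimed low-pass/high-pass splitter would.
import math

def splitter(samples, alpha=0.01):
    """Split a combined signal into a low-pass branch (DC power estimate)
    and a complementary high-pass branch (data band) with a one-pole IIR."""
    low, lp_out, hp_out = samples[0], [], []
    for x in samples:
        low += alpha * (x - low)   # one-pole low-pass tracks the DC level
        lp_out.append(low)
        hp_out.append(x - low)     # complement carries the high-band data
    return lp_out, hp_out

# Hypothetical line signal: 48 V DC plus a small high-frequency data tone
# (0.25 cycles/sample, well above the low-pass cutoff).
combined = [48.0 + 0.5 * math.sin(2 * math.pi * 0.25 * n) for n in range(2000)]
power_branch, data_branch = splitter(combined)
```

The transformer-plus-capacitor variant of claim 18 implements the same split in hardware; the one-pole filter here is just the simplest software analogue of the low-pass/high-pass pair in claim 17.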
2,600
9,949
9,949
16,692,893
2,693
A computer mouse is provided which includes a housing; a lateral vertical wheel attached to the housing and configured to rotate with respect to the housing and in a first plane to control lateral vertical planar rotation of a visual object on a computer screen; a horizontal turn wheel attached to the housing and configured to rotate with respect to the housing and in a second plane which is perpendicular to the first plane to control horizontal planar rotation of the visual object on the computer screen; and a straight vertical turn wheel attached to the housing and configured to rotate with respect to the housing and in a third plane which is perpendicular to the first plane and the second plane to control rotation of the visual object on the computer screen in a direction which appears on the computer screen to be substantially perpendicular to vertical and horizontal directions. The mouse also includes a translational motion control button and resizing motion control button for motion and resizing of the visual object in horizontal, vertical and their perpendicular directions.
1. A computer mouse comprising means for controlling vertical direction movement in a first dimension of a visual object on a two dimensional computer screen; means for controlling horizontal direction movement in a second dimension of a visual object on a two dimensional computer screen; and means for controlling movement of a visual object in a direction on the two dimensional computer screen, which appears to be in a third dimension, substantially perpendicular to the two dimensional computer screen. 2. A computer mouse comprising a housing; a lateral vertical wheel attached to the housing and configured to rotate with respect to the housing and in a first plane to control simulated lateral vertical planar rotation of a visual object on a two dimensional computer screen; a horizontal turn wheel attached to the housing and configured to rotate with respect to the housing and in a second plane which is perpendicular to the first plane to control simulated horizontal planar rotation of the visual object on the two dimensional computer screen; and a straight vertical turn wheel attached to the housing and configured to rotate with respect to the housing and in a third plane which is perpendicular to the first plane and the second plane to control movement of the visual object on the computer screen in a direction which appears on the computer screen to be substantially perpendicular to the two dimensional computer screen. 3. 
The computer mouse of claim 2 further comprising a translational motion control button; and wherein holding the translational motion control button down while rotating the lateral vertical wheel is configured to cause the visual object to move from one location to another on the two dimensional computer screen along a vertical line; wherein holding the translational motion control button down while rotating the horizontal turn wheel is configured to cause the visual object to move from one location to another on the two dimensional computer screen along a horizontal line, which is perpendicular to the vertical line; and wherein holding the translational motion control button down while rotating the straight vertical turn wheel is configured to cause the visual object to appear to move from one location to another in a direction which appears on the computer screen to be along a straight line, which appears to be substantially perpendicular to the vertical line and to the horizontal line, and substantially perpendicular to the two dimensional computer screen. 4. 
The computer mouse of claim 2 further comprising a resizing motion control button; and wherein holding the resizing motion control button down while rotating the lateral vertical turn wheel is configured to cause the visual object to expand or contract in size on the two dimensional computer screen along a vertical line; wherein holding the resizing motion control button down while rotating the horizontal turn wheel is configured to cause the visual object to expand or contract in size along a horizontal line on the two dimensional computer screen, which is perpendicular to the vertical line; and wherein holding the resizing motion control button down while rotating the straight vertical turn wheel is configured to cause the visual object to expand or contract in size on the two dimensional computer screen in a manner which appears to be along a straight line, which appears to be perpendicular to the vertical line and to the horizontal line, and substantially perpendicular to the two dimensional computer screen. 5. A method comprising controlling vertical direction movement of a visual object on a two dimensional computer screen; controlling horizontal direction movement of a visual object on the two dimensional computer screen; and controlling movement of a visual object in a direction on the two dimensional computer screen, which is simulated to appear to be substantially perpendicular to vertical and horizontal directions, and substantially perpendicular to the two dimensional computer screen. 6. 
A method comprising rotating a lateral vertical turn wheel, which is attached to the housing of a computer mouse and configured to rotate with respect to the housing and in a first plane to control simulated lateral vertical planar rotation of a visual object on a two dimensional computer screen; rotating a horizontal turn wheel, which is attached to the housing of the computer mouse and configured to rotate with respect to the housing and in a second plane which is perpendicular to the first plane to control simulated horizontal planar rotation of the visual object on the two dimensional computer screen; and rotating a straight vertical turn wheel, which is attached to the housing of the computer mouse and configured to rotate with respect to the housing and in a third plane which is perpendicular to the first plane and the second plane to control simulated rotation of the visual object on the two dimensional computer screen in a plane which appears on the computer screen to be substantially perpendicular to lateral vertical and horizontal planes. 7. 
The method of claim 6 further comprising holding a translational motion control button down while rotating the lateral vertical turn wheel to cause the visual object to move from one location to another along a vertical line on the two dimensional computer screen; holding the translational motion control button down while rotating the horizontal turn wheel to cause the visual object to move from one location to another along a horizontal line, which is perpendicular to the vertical line, on the two dimensional computer screen; and holding the translational motion control button down while rotating the straight vertical turn wheel to cause the visual object to be simulated to move from one location to another in a direction which appears on the two dimensional computer screen to be along a straight line, which appears to be substantially perpendicular to the vertical line and to the horizontal line, and substantially perpendicular to the two dimensional computer screen. 8. The method of claim 6 further comprising holding a resizing motion control button down while rotating the lateral vertical turn wheel to cause the visual object to expand or contract in size along a vertical line on the two dimensional computer screen; holding the resizing motion control button down while rotating the horizontal turn wheel to cause the visual object to expand or contract in size along a horizontal line, which is perpendicular to the vertical line, on the two dimensional computer screen; and holding the resizing motion control button down while rotating the straight vertical turn wheel to cause the visual object to appear to expand or contract in size on the computer screen in a manner which appears to be along a straight line, which appears to be perpendicular to the vertical line and to the horizontal line, and perpendicular to the two dimensional computer screen.
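Claims 3-4 and 7-8 route each wheel's rotation to one of three behaviors (rotate by default, translate while the translational button is held, resize while the resizing button is held), always along or about that wheel's perpendicular axis. A hypothetical sketch of that dispatch, assuming my own names and an arbitrary scale factor (none of these identifiers come from the patent):

```python
# Sketch only: dispatch a wheel delta to rotation, translation, or resizing
# of a simple object state, per the modifier button that is held.
import math

class ObjectState:
    def __init__(self):
        self.rotation = [0.0, 0.0, 0.0]   # degrees about x, y, z axes
        self.position = [0.0, 0.0, 0.0]   # screen x, y, and simulated depth
        self.scale = [1.0, 1.0, 1.0]

# Each wheel maps to one of the three mutually perpendicular axes.
AXIS = {"lateral_vertical": 0, "horizontal": 1, "straight_vertical": 2}

def on_wheel(state, wheel, delta, translate_held=False, resize_held=False):
    axis = AXIS[wheel]
    if translate_held:
        state.position[axis] += delta                 # move along the axis
    elif resize_held:
        state.scale[axis] *= math.exp(delta * 0.01)   # expand or contract
    else:
        state.rotation[axis] += delta                 # rotate about the axis

s = ObjectState()
on_wheel(s, "horizontal", 15.0)                        # plain rotation
on_wheel(s, "straight_vertical", 2.0, translate_held=True)  # simulated depth move
```

Using a multiplicative (exponential) scale keeps resizing symmetric: equal and opposite wheel deltas return the object to its original size.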
2,600
9,950
9,950
15,372,589
2,613
Foveated rendering for rendering an image uses a ray tracing technique to process graphics data for a region of interest of the image, and a rasterisation technique is used to process graphics data for other regions of the image. A rendered image can be formed using the processed graphics data for the region of interest of the image and the processed graphics data for the other regions of the image. The region of interest may correspond to a foveal region of the image. Ray tracing naturally provides high detail and photo-realistic rendering, which human vision is particularly sensitive to in the foveal region; whereas rasterisation techniques are suited for providing temporal smoothing and anti-aliasing in a simple manner, and are therefore suited for use in the regions of the image that a user will see in the periphery of their vision.
1. A processing system configured to render one or more images, the processing system comprising: rendering logic configured to process graphics data to generate an initial image; region identification logic configured to identify one or more regions of the initial image; ray tracing logic configured to perform ray tracing to determine ray traced data for the identified one or more regions of the initial image; and update logic configured to update the initial image using the determined ray traced data for the identified one or more regions of the initial image, to thereby determine an updated image to be outputted for display. 2. The processing system of claim 1, wherein the rendering logic is configured to process the graphics data using a rasterisation technique to generate the initial image. 3. The processing system of claim 1, wherein the rendering logic is configured to process the graphics data using a ray tracing technique to generate the initial image. 4. The processing system of claim 1, wherein the initial image is a lower detail image than the updated image. 5. The processing system of claim 1, further comprising gaze tracking logic configured to determine one or more gaze positions for the initial image, wherein the region identification logic is configured to receive one or more indications of the one or more determined gaze positions, and to identify the one or more regions of the initial image based on the one or more determined gaze positions. 6. The processing system of claim 5, wherein the gaze tracking logic is configured to implement a predictive model to anticipate movements in gaze. 7. The processing system of claim 5, wherein one of the one or more identified regions of the initial image surrounds one of the one or more determined gaze positions, thereby representing a foveal region. 8. 
The processing system of claim 7, further comprising a camera pipeline which is configured to: receive image data from a camera which is arranged to capture images of a user looking at a display on which a rendered image is to be displayed; and process the received image data to generate a captured image; wherein the gaze tracking logic is configured to analyse the captured image to determine the gaze position for the initial image. 9. The processing system of claim 8, wherein the ray tracing logic and the rasterisation logic are implemented on a graphics processing unit, and wherein the camera pipeline and the graphics processing unit are implemented as part of a system on chip (SOC). 10. The processing system of claim 1, wherein the region identification logic is configured to analyse the initial image to determine one or more regions of high frequency, wherein the one or more determined regions of high frequency are one or more identified regions of the initial image. 11. The processing system of claim 1, wherein the rendering logic and the ray tracing logic are configured to operate asynchronously. 12. The processing system of claim 11, wherein the ray tracing logic, the region identification logic and the update logic are configured to operate at a first rate, and wherein the rendering logic is configured to operate at a second rate, wherein the first rate is faster than the second rate. 13. The processing system of claim 11, further comprising time warping logic configured to apply an image warping process to the updated image before it is sent for display. 14. The processing system of claim 1, further comprising acceleration structure building logic configured to determine an acceleration structure representing the graphics data of geometry in a scene of which an image is to be rendered. 15. 
The processing system of claim 14, wherein the processing system is configured to render a plurality of images representing a sequence of frames, and wherein the acceleration structure building logic is configured to determine the acceleration structure for a current frame by updating the acceleration structure for the preceding frame. 16. The processing system of claim 1, wherein the processing system is arranged to be included in a virtual reality system or an augmented reality system. 17. A method of rendering one or more images at a processing system, the method comprising: processing graphics data to generate an initial image; identifying one or more regions of the initial image; performing ray tracing to determine ray traced data for the identified one or more regions of the initial image; and updating the initial image using the determined ray traced data for the identified one or more regions of the initial image, to thereby determine an updated image to be outputted for display. 18. The method of claim 17, further comprising displaying an image based on the updated image. 19. The method of claim 17, wherein the processing graphics data to generate an initial image is performed at an asynchronous rate to the performing ray tracing to determine ray traced data. 20. 
A non-transitory computer readable storage medium having stored thereon a computer readable description of an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a processing system comprising: rendering logic configured to process graphics data to generate an initial image; region identification logic configured to identify one or more regions of the initial image; ray tracing logic configured to perform ray tracing to determine ray traced data for the identified one or more regions of the initial image; and update logic configured to update the initial image using the determined ray traced data for the identified one or more regions of the initial image, to thereby determine an updated image to be outputted for display. 21-42. (canceled)
2,600
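The foveated-rendering claims above describe updating a rasterised initial image with ray-traced data in an identified region around a gaze position. A minimal sketch of that compositing step is below; the function name, the smooth radial falloff, and all values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def composite_foveal(initial, ray_traced, gaze_xy, radius):
    """Blend a ray-traced image into a rasterised initial image around
    the gaze position (hypothetical helper, for illustration only)."""
    h, w, _ = initial.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Fully ray traced at the gaze point, falling off to rasterised output
    # outside the foveal radius.
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]
    return weight * ray_traced + (1.0 - weight) * initial

# Tiny example: black rasterised frame, white ray-traced frame.
raster = np.zeros((4, 4, 3))
traced = np.ones((4, 4, 3))
out = composite_foveal(raster, traced, gaze_xy=(1, 1), radius=2.0)
```

At the gaze pixel the output is entirely ray traced; pixels beyond the radius keep the rasterised value, matching the claimed split between a foveal region and the periphery.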
9,951
9,951
15,341,777
2,683
Media rendering system including a remote control device and associated docking station. The remote control device interfaces with a remote server to stream media content for local and/or external playback. The remote control device may interface with a docking station to playback rendered media on one or more entertainment appliances. The portable device preferably has standard remote control capability in order to enable advanced features and functions for media playback.
1. A system, comprising: a hand-held, portable device; and a docking station adapted to releasably receive the hand-held, portable device having a machine readable identifier; wherein programming of the hand-held, portable device is adapted to read the identifier when the hand-held, portable device is received within the docking station and to use the identifier as read from the docking station to cause the hand-held, portable device to be automatically configured to remotely control functional operations of a plurality of different appliances wherein the plurality of different appliances were caused to be associated with the identifier at a time prior to the hand-held, portable device being releasably received therein. 2. The system as recited in claim 1, wherein the docking station recharges a battery of the hand-held, portable device when the hand-held, portable device is received within the docking station. 3. The system as recited in claim 1, wherein the programming of the hand-held, portable device further causes the hand-held, portable device to transmit one or more remote control commands to at least one of the plurality of appliances in response to the hand-held, portable device being received into the docking station at a time subsequent to the hand-held, portable device being automatically configured. 4. The system as recited in claim 1, wherein the programming of the hand-held, portable device causes the hand-held, portable device to transmit one or more remote control commands to at least one of the plurality of appliances in response to the hand-held, portable device being removed from the docking station at a time subsequent to the hand-held, portable device being automatically configured. 5. 
The system as recited in claim 1, further comprising an appliance adapted to render media in communication with the docking station wherein the hand-held portable device exchanges communications directly with both the docking station and the appliance adapted to render media and wherein the programming of the hand-held portable device causes the hand-held, portable device to automatically route media that is currently being rendered by the hand-held portable device to the appliance adapted to render media, via the docking station, for rendering by the appliance adapted to render media in lieu of the hand-held, portable device in response to the hand-held, portable device being received into the docking station. 6. The system as recited in claim 1, wherein the identifier comprises an address assigned to the docking station. 7. The system as recited in claim 1, wherein the plurality of different appliances were caused to be associated with the identifier by causing data representative of the identifier to be mapped to a command code set for each of the plurality of different appliances.
2,600
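Claim 7 of the docking-station record maps the station's identifier to a command code set for each associated appliance. A minimal sketch of that mapping follows; every identifier, appliance name, and code value is hypothetical.

```python
# Hypothetical association of docking-station identifiers with
# per-appliance command code sets, in the spirit of claim 7.
CODE_SETS = {
    "dock-livingroom": {
        "tv": {"power": 0x10, "volume_up": 0x11},
        "receiver": {"power": 0x20},
    },
    "dock-bedroom": {"tv": {"power": 0x30}},
}

def configure_remote(dock_id):
    """Return the command code sets a hand-held device would load after
    reading the docking station's identifier (illustrative only)."""
    return CODE_SETS.get(dock_id, {})

config = configure_remote("dock-livingroom")
```

Reading the identifier on docking thus configures the device for all appliances previously associated with that station, with an empty configuration for an unknown identifier.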
9,952
9,952
14,948,471
2,656
An electrosurgical system is provided. The electrosurgical system includes an electrosurgical generator including a computer having one or more microprocessors in operable communication with memory for storing information pertaining to the electrosurgical generator. An audio output module is in operable communication with the computer and configured to generate an audio output having the information pertaining to the electrosurgical generator embedded therein. A speaker is in operable communication with the audio output module for outputting the audio output. A recording device is configured to record the audio output. An audio collector is configured to receive the audio output from the recording device and decipher the embedded audio so that the information pertaining to the electrosurgical generator may be utilized for future use.
1-19. (canceled) 20. An electrosurgical generator comprising: a memory configured to store data pertaining to the electrosurgical generator; a processor in operable communication with the memory; an audio output module coupled to the processor and configured to generate an audio signal encoding the data; a speaker coupled to the audio output module and configured to output the audio signal; and an audio collector configured to receive the audio signal and decode the data encoded in the audio signal. 21. The electrosurgical generator according to claim 20, wherein the audio output module is further configured to encrypt the audio signal. 22. The electrosurgical generator according to claim 21, wherein the audio collector is configured to decrypt the encrypted audio signal. 23. The electrosurgical generator according to claim 20, wherein the data is selected from the group consisting of date and time of an electrosurgical procedure, activation time of the electrosurgical generator, type of an electrosurgical instrument connected to the electrosurgical generator, serial number of the electrosurgical generator, amount of electrosurgical energy delivered to an electrosurgical instrument, and shut off condition. 24. An electrosurgical system comprising: an electrosurgical instrument; an electrosurgical generator configured to supply electrosurgical energy to the electrosurgical instrument, the electrosurgical generator including: a memory configured to store data pertaining to the electrosurgical generator; a processor in operable communication with the memory; an audio output module coupled to the processor and configured to generate an audio signal encoding the data; a speaker coupled to the audio output module and configured to output the audio signal; and an audio collector configured to receive the audio signal and decode the data encoded in the audio signal. 25. 
The electrosurgical system according to claim 24, wherein the audio output module is further configured to encrypt the audio signal. 26. The electrosurgical system according to claim 25, wherein the audio collector is configured to decrypt the encrypted audio signal. 27. The electrosurgical system according to claim 24, wherein the data is selected from the group consisting of date and time of an electrosurgical procedure, activation time of one of the electrosurgical generator or the electrosurgical instrument, type of the electrosurgical instrument, serial number of at least one of the electrosurgical generator or the electrosurgical instrument, amount of electrosurgical energy delivered to the electrosurgical instrument, and shut off condition. 28. A method for transferring information pertaining to an electrosurgical generator, comprising: encoding data in an audio signal at an audio output module, the data pertaining to an electrosurgical generator; outputting the audio signal through a speaker coupled to the audio output module; receiving the audio signal at an audio collector; and decoding the data encoded in the audio signal at the audio collector. 29. The method according to claim 28, further comprising: encrypting the audio signal prior to outputting the audio signal. 30. The method according to claim 29, further comprising: decrypting the audio signal at the audio collector.
2,600
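The electrosurgical claims describe encoding generator data in an audio signal, emitting it through a speaker, and decoding it at an audio collector; the patent does not specify a modulation scheme. One simple way such an acoustic channel could work is binary FSK, sketched below with illustrative sample rate and tone frequencies.

```python
import math

RATE = 8000          # samples per second (illustrative)
BIT_SAMPLES = 80     # samples per bit, an integer number of tone periods
F0, F1 = 1000, 2000  # tone frequencies for bits 0 and 1 (illustrative)

def encode_bits(bits):
    """Encode a bit string as an FSK tone sequence, one tone per bit."""
    samples = []
    for bit in bits:
        f = F1 if bit == "1" else F0
        samples += [math.sin(2 * math.pi * f * n / RATE)
                    for n in range(BIT_SAMPLES)]
    return samples

def decode_bits(samples):
    """Recover bits by correlating each chunk against both tones."""
    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        chunk = samples[i:i + BIT_SAMPLES]
        score = {}
        for bit, f in (("0", F0), ("1", F1)):
            score[bit] = abs(sum(s * math.sin(2 * math.pi * f * n / RATE)
                                 for n, s in enumerate(chunk)))
        bits.append(max(score, key=score.get))
    return "".join(bits)

roundtrip = decode_bits(encode_bits("1011"))
```

Because each bit window holds a whole number of periods of both tones, the two templates are orthogonal over the window and the round trip is exact on a clean channel; a real system would add framing, error checking, and, per claims 21 and 25, encryption.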
9,953
9,953
15,353,752
2,672
On a touch-panel display of an image forming apparatus, information is displayed in five areas, that is, a system area, a function selecting area, a preview area, an action panel area and a task trigger area, whose arrangement is kept unchanged even when operational modes are switched. With such an arrangement, the same or similar pieces of information are displayed in an area arranged at the same position even in different operational modes.
1. (canceled) 2. An operation console provided on an image processing apparatus operated in an operational mode selected by a user from a plurality of operational modes, comprising: a display; and a display controller dividing said display into a plurality of areas including at least an area for displaying a task trigger key operated for operating said image processing apparatus, and displaying information; wherein said display controller displays said area for displaying said task trigger key at the same position of said display even when said selected operational mode is changed, and displays a key for instructing a start of operation of said image processing apparatus as said task trigger key. 3. The operation console according to claim 2, wherein said area for displaying said task trigger key is displayed on a lower right area on said display. 4. The operation console according to claim 2, wherein said plurality of areas include an area in which a selection menu for setting functions of said image processing apparatus is displayed and/or an area in which an image to be processed by said image processing apparatus is displayed as a preview image. 5. The operation console according to claim 4, wherein said area for displaying said task trigger key is displayed next to at least one area between said area in which a selection menu for setting functions of said image processing apparatus is displayed and said area in which an image to be processed by said image processing apparatus is displayed as a preview image. 6. An image processing apparatus provided with the operation console according to claim 2. 7. 
A control method of controlling an operation console provided on an image processing apparatus operated in an operational mode selected by a user from a plurality of operational modes and including a display, comprising the steps of: dividing said display into a plurality of areas including at least an area for displaying a task trigger key operated for operating said image processing apparatus, and displaying information; displaying said area for displaying said task trigger key at the same position of said display even when said selected operational mode is changed; and displaying a key for instructing a start of operation of said image processing apparatus as said task trigger key. 8. The control method according to claim 7, wherein said area for displaying said task trigger key is displayed on a lower right area on said display. 9. The control method according to claim 7, wherein said plurality of areas include an area in which a selection menu for setting functions of said image processing apparatus is displayed and/or an area in which an image to be processed by said image processing apparatus is displayed as a preview image. 10. The control method according to claim 9, wherein said area for displaying said task trigger key is displayed next to at least one area between said area in which a selection menu for setting functions of said image processing apparatus is displayed and said area in which an image to be processed by said image processing apparatus is displayed as a preview image.
2,600
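The operation-console claims keep each display area, including the task trigger key area, at a fixed position while only the content changes per operational mode. That separation of fixed geometry from mode-specific content can be sketched as below; all area names, coordinates, and mode labels are hypothetical.

```python
# Fixed area geometry (x, y, w, h): never varies with the operational mode.
LAYOUT = {
    "system":       (0, 0, 800, 40),
    "function":     (0, 40, 200, 400),
    "preview":      (200, 40, 400, 400),
    "action_panel": (600, 40, 200, 400),
    "task_trigger": (600, 440, 200, 40),  # lower right, as in claim 3
}

def render(mode):
    """Return (position, content) for each fixed area in the given mode;
    only content depends on the mode (illustrative sketch)."""
    content = {
        "copy": {"task_trigger": "Start Copy"},
        "scan": {"task_trigger": "Start Scan"},
    }
    return {area: (LAYOUT[area], content.get(mode, {}).get(area, ""))
            for area in LAYOUT}

copy_ui = render("copy")
scan_ui = render("scan")
```

Switching from copy to scan mode relabels the task trigger key but leaves every area, and in particular the task trigger area, at the same screen position.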
9,954
9,954
14,261,112
2,621
A portable electronic device displays icons (e.g., graphical objects) in one or more regions of a user interface of a touch-sensitive display, and detects user input specifying an exchange of positions of icons in the user interface. In some aspects, the respective positions of two icons in a user interface can be selected to exchange positions in the one or more regions of the user interface, and one or both icons can change their visual appearance to indicate their selection status.
1. A portable electronic device comprising: a touch-sensitive display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a first icon in a first position of the touch-sensitive display; detecting a touch input on the touch-sensitive display for more than a predefined time period, wherein the touch input causes the first icon to be selected; in response to detecting the touch input on the touch-sensitive display for more than the predefined time period, modifying a visual appearance of the first icon; detecting movement of the touch input from the first position to a second position on the touch-sensitive display, wherein the detected movement of the touch input causes the first icon to be moved from the first position to the second position on the touch-sensitive display; in response to detecting movement of the touch input from the first position to the second position on the touch-sensitive display, modifying a visual appearance of the second position. 2. The device of claim 1, wherein the instructions further comprise: detecting a lift off of the touch input at the second position on the touch-sensitive display; in response to detecting the lift off of the touch input at the second position on the touch-sensitive display, displaying the first icon in the second position on the touch-sensitive display; and providing feedback to a user in response to displaying the first icon in the second position. 3. The device of claim 2, wherein the feedback comprises at least one of audio feedback and tactile feedback. 4. The device of claim 1, wherein modifying the visual appearance of the second position comprises displaying a marker. 5. 
The device of claim 1, wherein modifying the visual appearance of the second position comprises displaying a line indicating at least partially a boundary of the second position. 6. The device of claim 1, wherein the second position is occupied by a second icon. 7. The device of claim 1, wherein the instructions further comprise: displaying a second icon on the touch-sensitive display; detecting movement of the touch input that causes the first icon to be moved from the first position to a position within a proximity of the second icon on the touch-sensitive display; in response to detecting movement of the touch input that causes the first icon to be moved from the first position to the position within a proximity of the second icon on the touch-sensitive display, modifying a visual appearance of the second icon. 8. The device of claim 7, wherein modifying the visual appearance of the second icon comprises at least one of: applying a glowing effect to the second icon, and exchanging positions of the first and second icons. 9. The device of claim 1, wherein the touch-sensitive display is a multi-touch-sensitive display that is responsive to finger gestures. 10. The device of claim 1, wherein modifying the visual appearance of the first icon comprises scaling the first icon to a different size. 11. The device of claim 1, wherein detecting movement of the touch input that causes the first icon to be moved from the first position to the second position comprises: determining whether the detected movement of the touch input has touched or crossed a boundary line at least partially surrounding the second position on the touch-sensitive display. 12. The device of claim 1, wherein the touch input on the touch-sensitive display that causes the first icon to be selected is a stationary touch contact with the touch-sensitive display on the first icon that lasts for more than the predefined time period. 13. 
The device of claim 1, wherein the predefined time period is between 0.5 and 2.0 seconds. 14. A non-transitory computer readable storage medium having one or more programs stored thereon, which, when executed by a portable electronic device with a touch-sensitive display, cause the device to perform operations comprising: displaying a first icon in a first position of the touch-sensitive display; detecting a touch input on the touch-sensitive display for more than a predefined time period, wherein the touch input causes the first icon to be selected; in response to detecting the touch input on the touch-sensitive display for more than the predefined time period, modifying a visual appearance of the first icon; detecting movement of the touch input from the first position to a second position on the touch-sensitive display, wherein the detected movement of the touch input causes the first icon to be moved from the first position to the second position on the touch-sensitive display; in response to detecting movement of the touch input from the first position to the second position on the touch-sensitive display, modifying a visual appearance of the second position. 15. The storage medium of claim 14, wherein the operations further comprise: detecting a lift off of the touch input at the second position on the touch-sensitive display; in response to detecting the lift off of the touch input at the second position on the touch-sensitive display, displaying the first icon in the second position on the touch-sensitive display; and providing feedback to a user in response to displaying the first icon in the second position. 16. The storage medium of claim 15, wherein the feedback comprises at least one of audio feedback and tactile feedback. 17. The storage medium of claim 14, wherein modifying the visual appearance of the second position comprises displaying a marker. 18. 
The storage medium of claim 14, wherein modifying the visual appearance of the second position comprises displaying a line indicating at least partially a boundary of the second position. 19. The storage medium of claim 14, wherein the second position is occupied by a second icon. 20. The storage medium of claim 14, wherein the operations further comprise: displaying a second icon on the touch-sensitive display; detecting movement of the touch input that causes the first icon to be moved from the first position to a position within a proximity of the second icon on the touch-sensitive display; in response to detecting movement of the touch input that causes the first icon to be moved from the first position to the position within a proximity of the second icon on the touch-sensitive display, modifying a visual appearance of the second icon. 21. The storage medium of claim 20, wherein modifying the visual appearance of the second icon comprises at least one of: applying a glowing effect to the second icon, and exchanging positions of the first and second icons. 22. The storage medium of claim 14, wherein the touch-sensitive display is a multi-touch-sensitive display that is responsive to finger gestures. 23. The storage medium of claim 14, wherein modifying the visual appearance of the first icon comprises scaling the first icon to a different size. 24. The storage medium of claim 14, wherein detecting movement of the touch input that causes the first icon to be moved from the first position to the second position comprises: determining whether the detected movement of the touch input has touched or crossed a boundary line at least partially surrounding the second position on the touch-sensitive display. 25. 
The storage medium of claim 14, wherein the touch input on the touch-sensitive display that causes the first icon to be selected is a stationary touch contact with the touch-sensitive display on the first icon that lasts for more than the predefined time period. 26. The storage medium of claim 14, wherein the predefined time period is between 0.5 and 2.0 seconds. 27. A method comprising: at a portable electronic device with a touch-sensitive display: displaying a first icon in a first position of the touch-sensitive display; detecting a touch input on the touch-sensitive display for more than a predefined time period, wherein the touch input causes the first icon to be selected; in response to detecting the touch input on the touch-sensitive display for more than the predefined time period, modifying a visual appearance of the first icon; detecting movement of the touch input from the first position to a second position on the touch-sensitive display, wherein the detected movement of the touch input causes the first icon to be moved from the first position to the second position on the touch-sensitive display; in response to detecting movement of the touch input from the first position to the second position on the touch-sensitive display, modifying a visual appearance of the second position. 28. The method of claim 27, wherein the second position is occupied by a second icon. 29. The method of claim 27, wherein modifying the visual appearance of the second position comprises at least one of: displaying a marker, and displaying a line indicating at least partially a boundary of the second position. 30. The method of claim 27, wherein the touch input on the touch-sensitive display that causes the first icon to be selected is a stationary touch contact with the touch-sensitive display on the first icon that lasts for more than the predefined time period.
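The long-press selection and drag-to-reposition interaction recited in the claims can be sketched as a small state machine. This is a minimal illustrative sketch, not the patented implementation: the class name `IconGrid`, its attributes, and the `HOLD_THRESHOLD` constant are all assumptions chosen for the example (the claims recite only a "predefined time period" of 0.5 to 2.0 seconds).

```python
# Illustrative sketch of the long-press-then-drag icon interaction
# described in the claims. All names here are hypothetical.

HOLD_THRESHOLD = 0.5  # seconds; the claims recite a 0.5-2.0 s range


class IconGrid:
    def __init__(self, icons):
        # icons: dict mapping position index -> icon name
        self.icons = dict(icons)
        self.selected = None      # position of the currently selected icon
        self.highlighted = None   # drop position whose appearance is modified

    def touch_down(self, position, duration):
        """A stationary touch longer than the threshold selects the icon
        (the device would also modify its visual appearance, e.g. scaling)."""
        if duration > HOLD_THRESHOLD and position in self.icons:
            self.selected = position

    def touch_moved(self, new_position):
        """Moving the touch marks the prospective drop position
        (e.g. by displaying a marker or a boundary line)."""
        if self.selected is not None:
            self.highlighted = new_position

    def touch_lifted(self):
        """Lift-off drops the icon, exchanging positions if occupied."""
        if self.selected is None or self.highlighted is None:
            return
        src, dst = self.selected, self.highlighted
        moved = self.icons.pop(src)
        if dst in self.icons:
            self.icons[src] = self.icons[dst]  # exchange with occupant
        self.icons[dst] = moved
        self.selected = self.highlighted = None
```

A drag from position 0 to an occupied position 1 then exchanges the two icons, mirroring the "exchange of positions" behavior the abstract describes; a touch shorter than the threshold never selects, so subsequent movement is ignored.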
TechCenter: 2,600
Unnamed: 0: 9,955
level_0: 9,955
ApplicationNumber: 14,617,791
ArtUnit: 2,613
Animation coordination systems and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications, and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
1-20. (canceled) 21. A computing device for providing animated graphical transitions between applications, the computing device comprising: a memory and a processor, wherein the memory and the processor are respectively configured to store and execute computer executable instructions, including instructions for performing operations including: receiving a request to transition a context of the computing device from a first context that is associated with an initiator application to a second context that is associated with a destination application other than the initiator application; coordinating an animated graphical transition from the initiator application to the destination application based at least in part on information from the initiator application and on information from the destination application, wherein the coordinated animated visual transition provides an appearance of originating from a representation of an object displayed by the initiator application and ending in another representation of the object displayed by the destination application, and the coordinated animated visual transition depicting the opening of the object with the destination application; and displaying the coordinated animated graphical transition from the initiator application to the destination application on a display device associated with the computing device. 22. The computing device of claim 21, wherein: coordinating the animated graphical transition includes: receiving, by an animation coordination module, the information from the initiator application; receiving, by the animation coordination module, the information from the destination application; and rendering the coordinated animated graphical transition from the initiator application to the destination application based at least in part on information from the initiator application and on information from the destination application. 23. 
The computing device of claim 21, wherein the operations further include: switching the context of the computing device from the first context to the second context in conjunction with displaying the coordinated animated graphical transition. 24. The computing device of claim 21, wherein the operations further include: rendering the coordinated animated graphical transition, including: creating a frame order for graphical representations of a set of graphical representations; rendering multiple graphical representations of the set of graphical representations; and determining, relative to each other, when the individual graphical representations of the multiple graphical representations are to be displayed. 25. The computing device of claim 21, wherein the request to transition the context of the computing device is also a request to open the object with the destination application. 26. The computing device of claim 21, wherein the operations further include: generating an animation identifier that corresponds to the animated graphical transition; issuing the animation identifier to the initiator application and to the destination application; and storing the information from the initiator application and information from the destination application in association with the animation identifier. 27. The computing device of claim 21, wherein displaying the coordinated animated graphical transition includes: displaying the coordinated animated graphical transition at a top-most z-position on the display device. 28. The computing device of claim 21, wherein displaying the coordinated animated graphical transition includes: waiting for a readiness indicator from the destination application before displaying the coordinated animated graphical transition. 29. 
A computing device for providing animated graphical transitions between applications, the computing device comprising: a display component configured to display graphical images; and a memory and a processor, wherein the memory and the processor are respectively configured to store and execute computer executable instructions, including instructions organized into: a first application that causes a representation of an object to be displayed by the display component, and that receives an indication of an actuation of the representation of the object; a second application that is invoked in response to the indication of the activation of the representation of the object; and an animation coordinator that receives information from the first application and from the second application, and that coordinates an animated visual transition from the first application to the second application based at least in part on the information from the first application and from the second application, wherein the coordinated animated visual transition provides an appearance of originating from the representation of the object and ending in a display of the second application, and the coordinated animated visual transition depicting the opening of the object with the second application. 30. The computing device of claim 29, wherein the animation coordinator also renders the coordinated animated visual transition. 31. The computing device of claim 29, wherein the coordinated animated visual transition visually depicts a shift in context of the computing device from the first application to the second application. 32. The computing device of claim 29, wherein the information from the first application includes a graphical representation of the object, a type of the object, and/or an initial position of the object. 33. 
A method for providing animated graphical transitions between applications, the method comprising: receiving a request to open an object represented in a first application with a second application; coordinating an animated graphical transition from the first application to the second application based at least in part on information from the first application and on information from the second application, wherein the coordinated animated graphical transition provides an appearance of starting from the representation of the object in the first application and ending in a display of the second application, and the coordinated animated graphical transition depicting the opening of the object with the second application; and displaying the coordinated animated graphical transition from the first application to the second application on a display device associated with the computing device. 34. The method of claim 33, wherein: the request to open the object depicted in the first application with the second application represents a request to transition the context of the computing device; and coordinating the animated graphical transition includes: receiving, by an animation coordination module, the information from the first application; receiving, by the animation coordination module, the information from the second application; and coordinating the coordinated animated graphical transition from the first application to the second application based at least in part on the information from the first application and on the information from the second application. 35. 
The method of claim 33, wherein coordinating the coordinated animated graphical transition includes: creating a frame order for graphical representations of a set of graphical representations; rendering multiple graphical representations of the set of graphical representations; and determining, relative to each other, when the individual graphical representations of the multiple graphical representations are to be displayed. 36. The method of claim 33, wherein the method further comprises: generating an animation identifier that corresponds to the animated graphical transition; issuing the animation identifier to the first application and to the second application; and storing the information from the first application and information from the second application in association with the animation identifier. 37. The method of claim 33, wherein the method further comprises: opening the object with the second application. 38. The method of claim 33, wherein displaying the coordinated animated graphical transition includes: displaying the coordinated animated graphical transition at an enforced top-most z-position on the display device. 39. The method of claim 33, wherein the information from the first application includes a graphical representation of the object, a type of the object, and/or an initial position of the object. 40. The method of claim 33, wherein the information from the second application includes a final graphical representation for the animated graphical transition.
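The coordinator pattern described in the abstract and in claims 26-28 (collect initial information from the initiator, final information from the destination, key both under an issued animation identifier, and only build the transition once both sides have reported) can be sketched as follows. This is a hedged illustration under assumed names: `AnimationCoordinator`, its method names, and the linear position interpolation are all choices made for the example, not details from the patent.

```python
import itertools


class AnimationCoordinator:
    """Hypothetical sketch of the global coordinator: it collects initial
    info from the initiator application and final info from the destination
    application, keyed by an issued animation identifier, and builds the
    transition only once both sides have reported."""

    _ids = itertools.count(1)

    def __init__(self):
        self.pending = {}  # animation id -> {"initial": ..., "final": ...}

    def begin(self):
        """Generate an identifier to issue to both applications (cf. claim 26)."""
        anim_id = next(self._ids)
        self.pending[anim_id] = {"initial": None, "final": None}
        return anim_id

    def report_initial(self, anim_id, representation, position):
        self.pending[anim_id]["initial"] = (representation, position)

    def report_final(self, anim_id, representation, position):
        self.pending[anim_id]["final"] = (representation, position)

    def build_transition(self, anim_id, frames=4):
        """Interpolate positions between the initial and final reports."""
        info = self.pending[anim_id]
        if info["initial"] is None or info["final"] is None:
            return None  # wait for the destination's readiness (cf. claim 28)
        (_, (x0, y0)), (_, (x1, y1)) = info["initial"], info["final"]
        return [(x0 + (x1 - x0) * t / frames, y0 + (y1 - y0) * t / frames)
                for t in range(frames + 1)]
```

The key design point mirrored here is the ordering constraint: `build_transition` returns nothing until both the initiator's and the destination's information have arrived under the same identifier, corresponding to waiting for the destination's readiness indicator before display.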
Animation coordination system and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc. from destination applications. The global coordination creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.1-20. (canceled) 21. A computing device for providing animated graphical transitions between applications, the computing device comprising: a memory and a processor, wherein the memory and the processor are respectively configured to store and execute computer executable instructions, including instructions for performing operations including: receiving a request to transition a context of the computing device from a first context that is associated with an initiator application to a second context that is associated with a destination application other than the initiator application; coordinating an animated graphical transition from the initiator application to the destination application based at least in part on information from the initiator application and on information from the destination application, wherein the coordinated animated visual transition provides an appearance of originating from a representation of an object displayed by the initiator application and ending in another representation of the object displayed by the destination application, and the coordinated animated visual transition depicting the opening of the object with the destination application; and displaying the coordinated animated graphical transition from the initiator application to the destination application on a display device associated with the computing 
device. 22. The computing device of claim 21, wherein: coordinating the animated graphical transition includes: receiving, by an animation coordination module, the information from the initiator application; receiving, by the animation coordination module, the information from the destination application; and rendering the coordinated animated graphical transition from the initiator application to the destination application based at least in part on information from the initiator application and on information from the destination application. 23. The computing device of claim 21, wherein the operations further include: switching the context of the computing device from the first context to the second context in conjunction with displaying the coordinated animated graphical transition. 24. The computing device of claim 21, wherein the operations further include: rendering the coordinated animated graphical transition, including: creating a frame order for graphical representations of a set of graphical representations; rendering multiple graphical representations of the set of graphical representations; and determining, relative to each other, when the individual graphical representations of the multiple graphical representations are to be displayed. 25. The computing device of claim 21, wherein the request to transition the context of the computing device is also a request to open the object with the destination application. 26. The computing device of claim 21, wherein the operations further include: generating an animation identifier that corresponds to the animated graphical transition; issuing the animation identifier to the initiator application and to the destination application; and storing the information from the initiator application and information from the destination application in association with the animation identifier. 27. 
The computing device of claim 21, wherein displaying the coordinated animated graphical transition includes: displaying the coordinated animated graphical transition at a top-most z-position on the display device. 28. The computing device of claim 21, displaying the coordinated animated graphical transition includes: waiting for a readiness indicator from the destination application before displaying of the coordinated animated graphical transition. 29. A computing device for providing animated graphical transitions between applications, the computing device comprising: a display component configured to display graphical images; and a memory and a processor, wherein the memory and the processor are respectively configured to store and execute computer executable instructions, including instructions organized into: a first application that causes a representation of an object to be displayed by the display component, and that receives an indication of an actuation of the representation of the object; a second application that is invoked in response to the indication of the activation of the representation of the object; and an animation coordinator that receives information from the first application and from the second application, and that coordinates an animated visual transition from the first application to the second application based at least in part on the information from the first application and from the second application, wherein the coordinated animated visual transition provides an appearance of originating from the representation of the object and ending in a display of the second application, and the coordinated animated visual transition depicting the opening of the object with the second application. 30. The computing device of claim 29, wherein the animation coordinator also renders the coordinated animated visual transition. 31. 
The computing device of claim 29, wherein the coordinated animated visual transition visually depicts a shift in context of the computing device from the first application to the second application. 32. The computing device of claim 29, wherein the information from the first application includes a graphical representation of the object, a type of the object, and/or an initial position of the object. 33. A method for providing animated graphical transitions between applications, the method comprising: receiving a request to open an object represented in a first application with a second application; coordinating an animated graphical transition from the first application to the second application based at least in part on information from the first application and on information from the second application, wherein the coordinated animated graphical transition provides an appearance of starting from the representation of the object in the first application and ending in a display of the second application, and the coordinated animated graphical transition depicting the opening of the object with the second application; and displaying the coordinated animated graphical transition from the first application to the second application on a display device associated with the computing device. 34. The method of claim 33, wherein: the request to open the object depicted in the first application with the second application represents a request to transition the context of the computing device; and coordinating the animated graphical transition includes: receiving, by an animation coordination module, the information from the first application; receiving, by the animation coordination module, the information from the second application; and coordinating the coordinated animated graphical transition from the first application to the second application based at least in part on the information from the first application and on the information from the second application. 
35. The method of claim 33, wherein coordinating the coordinated animated graphical transition includes: creating a frame order for graphical representations of a set of graphical representations; rendering multiple graphical representations of the set of graphical representations; and determining, relative to each other, when the individual graphical representations of the multiple graphical representations are to be displayed. 36. The method of claim 33, wherein the method further comprises: generating an animation identifier that corresponds to the animated graphical transition; issuing the animation identifier to the first application and to the second application; and storing the information from the first application and information from the second application in association with the animation identifier. 37. The method of claim 33, wherein the method further comprises: opening the object with the second application. 38. The method of claim 33, wherein displaying the coordinated animated graphical transition includes: displaying the coordinated animated graphical transition at an enforced top-most z-position on the display device. 39. The method of claim 33, wherein the information from the first application includes a graphical representation of the object, a type of the object, and/or an initial position of the object. 40. The method of claim 33, wherein the information from the second application includes a final graphical representation for the animated graphical transition.
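The claims above (notably 26 and 36) describe an animation coordinator that generates an animation identifier, issues it to both applications, and stores each side's information under that identifier before rendering the transition. A minimal Python sketch of that flow is below; the class and method names, the frame count, and the use of simple linear interpolation between the initiator's initial position and the destination's final position are all illustrative assumptions, not details from the patent.

```python
import itertools


class AnimationCoordinator:
    """Illustrative sketch (not the patented implementation) of a coordinator
    that issues animation identifiers and stores per-application info."""

    def __init__(self):
        self._next_id = itertools.count(1)   # source of animation identifiers
        self._animations = {}                # animation_id -> both sides' info

    def begin_transition(self, initiator_info, destination_info):
        # Generate an identifier that corresponds to this animated transition
        # and store the information from both applications in association
        # with it (per claims 26 and 36).
        animation_id = next(self._next_id)
        self._animations[animation_id] = {
            "initiator": initiator_info,
            "destination": destination_info,
        }
        return animation_id

    def render(self, animation_id):
        # Render frame positions based at least in part on information from
        # both applications: here, linear interpolation from the initiator's
        # initial position to the destination's final position (5 frames is
        # an arbitrary choice for the sketch).
        info = self._animations[animation_id]
        x0, y0 = info["initiator"]["initial_position"]
        x1, y1 = info["destination"]["final_position"]
        frames = 5
        return [
            (x0 + (x1 - x0) * t / (frames - 1),
             y0 + (y1 - y0) * t / (frames - 1))
            for t in range(frames)
        ]
```

The frame list produced by `render` corresponds to claim 24's idea of creating an ordered set of graphical representations whose display times are determined relative to each other.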
2,600
9,956
9,956
15,617,506
2,689
A field device for an industrial process includes a current detector for detecting the amount of current consumed by the field device and an interface allowing the field device to communicate with at least one other device. The interface communicates a first alert when the amount of current consumed by the field device exceeds a first threshold and a second alert when the amount of current consumed by the field device is below a second threshold.
1. A field device for an industrial process, the field device comprising: a current detector for detecting the amount of current consumed by the field device; an interface, allowing the field device to communicate with at least one other device, such that the interface communicates a first alert when the amount of current consumed by the field device exceeds a first threshold and a second alert when the amount of current consumed by the field device is below a second threshold. 2. The field device of claim 1 wherein the current detector detects an amount of current flowing through a power regulator in the field device. 3. The field device of claim 2 wherein the current detector comprises an amplifier stage and a comparator stage. 4. The field device of claim 3 wherein the amplifier stage removes communication signals present in the current flowing through the power regulator. 5. The field device of claim 3 wherein the amplifier stage provides an amplified voltage output and the comparator stage comprises two parallel comparators that each receives the amplified voltage as input. 6. The field device of claim 5 wherein a first of the two parallel comparators changes a first binary output when the amplified voltage is above a first threshold and a second of the two parallel comparators changes a second binary output when the amplified voltage is below a second threshold. 7. The field device of claim 6 further comprising a microcontroller that receives the first binary output and the second binary output and controls the interface to control when the first alert and the second alert are communicated based on the first binary output and the second binary output. 8. The field device of claim 1 wherein the interface communicates the first alert before the field device is unable to perform a function designated for the field device. 9. The field device of claim 8 wherein the function designated for the field device is to provide a value for a process variable. 10. 
The field device of claim 1 wherein the field device is part of a segment of field devices and wherein the interface communicates the first alert before the field device impacts the operation of other field devices in the segment of field devices. 11. The field device of claim 1 wherein the field device further comprises circuitry that is triggered to perform diagnostic tests on the field device when the amount of current consumed by the field device exceeds a first threshold. 12. The field device of claim 1 wherein the current detector provides a digital value representative of the amount of current consumed by the field device to a microcontroller and the microcontroller controls the interface to control when the first alert and the second alert are communicated based on the digital value. 13. A field device for an industrial process, the field device comprising: an interface, allowing the field device to communicate with at least one other device, the interface communicating an alert that indicates that the field device is functioning improperly when an amount of current consumed at the field device is below a threshold. 14. The field device of claim 13 wherein the interface communicates a second alert that indicates that the field device is functioning improperly when an amount of current consumed at the field device is above a second threshold. 15. The field device of claim 14 further comprising a quiescent current detector that detects whether the amount of current consumed at the field device is below the threshold by determining if an amount of current passing through a power regulator in the field device is below the threshold and that detects whether the amount of current consumed at the field device is above the second threshold by determining if an amount of current passing through the power regulator is above the second threshold. 16. 
The field device of claim 15 wherein the quiescent current detector comprises an amplifier stage and a comparator stage wherein the amplifier stage amplifies a voltage that is based on the current passing through the power regulator and the comparator stage comprises two parallel comparators that each receives an output of the amplifier stage. 17. The field device of claim 13 wherein the interface communicates an alert that indicates that the field device is functioning improperly before the field device is unable to perform a function expected of the field device. 18. An industrial process field device comprising: device circuitry; a two-wire loop connection; a power regulator that provides regulated power to the device circuitry based on unregulated power provided on the two-wire loop connection; a quiescent current detector that provides a binary indication of whether current passing through the power regulator exceeds a threshold, wherein the binary indication is provided to the device circuitry; interface circuitry connected to the device circuitry and the two-wire loop connection and providing a signal to the two-wire loop connection indicative of an alert that the field device is functioning improperly when the device circuitry receives the binary indication that the current passing through the power regulator exceeds the threshold. 19. The industrial process field device of claim 18 wherein the device circuitry is triggered to perform diagnostic tests on the industrial process field device when the device circuitry receives the binary indication that the current passing through the power regulator exceeds the threshold. 20. 
The industrial process field device of claim 18 wherein the quiescent current detector provides a binary indication of whether current passing through the power regulator is below a second threshold, wherein the binary indication of whether current passing through the power regulator is below the second threshold is provided to the device circuitry; and the interface circuitry provides a signal to the two-wire loop connection indicative of a second alert that the field device is functioning improperly when the device circuitry receives the binary indication that the current passing through the power regulator is below the second threshold. 21. The industrial process field device of claim 18 wherein the quiescent current detector comprises an amplifier stage and a comparator stage. 22. The industrial process field device of claim 21 wherein the amplifier stage comprises an amplifier with an inverting and non-inverting input, wherein the inverting input is coupled to a first side of a resistance in the power regulator and the non-inverting input is coupled to a second side of the resistance in the power regulator. 23. The industrial process field device of claim 22 wherein the inverting input and the non-inverting input are coupled to ground through respective capacitors to remove communication signals present on the two-wire loop connection. 24. The industrial process field device of claim 21 wherein the comparator stage comprises two parallel comparators that each receives a same output from the amplifier stage. 25. The industrial process field device of claim 24 wherein a first of the two parallel comparators compares the output from the amplifier stage to a first voltage and a second of the two parallel comparators compares the output from the amplifier stage to a second voltage. 26. 
The industrial process field device of claim 18 wherein the signal indicative of the alert that the field device is functioning improperly is provided to the two-wire loop connection before the field device provides an erroneous value for a measured process variable. 27. The industrial process field device of claim 18 wherein the signal indicative of the alert that the field device is functioning improperly is provided to the two-wire loop connection before a segment of industrial process field devices connected to the industrial process field device fails.
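Claims 5 through 7 describe two parallel comparators whose binary outputs tell a microcontroller when to communicate the over-current and under-current alerts. The following Python sketch models that decision logic only; the threshold values and function name are illustrative assumptions (the patent specifies no numeric thresholds), and the analog amplifier/comparator stages are reduced to simple comparisons.

```python
def classify_current(consumed_ma, high_threshold_ma=22.0, low_threshold_ma=3.0):
    """Sketch of the two-comparator alert logic from claims 5-7.

    `consumed_ma` stands in for the amplified voltage that tracks the
    current flowing through the power regulator. Threshold values are
    arbitrary placeholders, not values from the patent.
    """
    first_binary = consumed_ma > high_threshold_ma    # first comparator output
    second_binary = consumed_ma < low_threshold_ma    # second comparator output
    if first_binary:
        return "first_alert"    # current above the first threshold
    if second_binary:
        return "second_alert"   # current below the second threshold
    return "normal"             # neither comparator tripped
```

In the claimed device, the microcontroller of claim 7 would receive these two binary outputs and drive the interface so that the corresponding alert is communicated on the two-wire loop before the field device fails to perform its designated function.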
2,600
9,957
9,957
12,592,581
2,646
The present invention provides a method and apparatus for completing a transaction using a wireless mobile communication channel and another communication channel, particularly another communication channel that provides for near field radio channels (NFC), as well as other communication channels, such as Bluetooth or WIFI. The present invention also provides a method of completing a transaction in which a management server assists a transaction server and a point of sale terminal in forwarding transaction information to a hand-held mobile device, with the transaction having originated from the hand-held mobile device. There is also provided a hand-held mobile device that wirelessly communicates between a secure element and a radio element that are associated with the hand-held mobile device.
1. A method for at least one user to complete a transaction using a hand-held mobile device as part of a system that also includes a point-of-sale terminal, a transaction server, and a management server, at least one of the hand-held mobile devices having associated therewith a telephone number, a processor, a visual display coupled to the processor using a display connection, a secure element that has an identification code stored therein, the secure element being adapted to transmit an identification code signal that has the identification code therein over a first communication channel to any of a plurality of point-of-sale terminals, and a radio transceiver adapted to send outgoing voice and data signals and receive incoming voice and data signals over a second communication channel that is different than the first communication channel, wherein the radio transceiver is coupled to the processor using a wired connection, and wherein a management server that receives the identification code and accesses a database indicating correspondence of the identification code and the telephone number of the hand-held mobile device is adapted to initiate transmission over the second communication channel, the method comprising the steps of: initiating the transaction without using the processor, without using the visual display, and without using the radio transceiver, the step of initiating including wirelessly providing the identification code stored on the secure element to any one of the plurality of point-of-sale terminals using the first communication channel, the identification code associated with the user of the hand-held mobile device; receiving certain transaction data associated with the transaction at the radio transceiver over the second communication channel; and displaying at least some of the certain transaction data on the visual display associated with the hand-held mobile device. 2. 
The method according to claim 1 further including a step of transmitting, from the hand-held mobile device directly to another hand-held mobile device, at least a portion of data associated with the certain transaction data. 3. (canceled) 4. (canceled) 5. The method according to claim 6 wherein the tag of the secure element is an adhesive tag secured to an external surface of the body of the hand-held mobile device. 6. The method according to claim 7 wherein the secure element is disposed on a tag secured externally to the body of the hand-held mobile device. 7. The method according to claim 1 wherein the radio processor is internally disposed within a body of the hand-held mobile device, wherein the secure element is external to the body of the hand-held mobile device. 8. The method according to claim 7 wherein the wireless communication channel used in the step of communicating is a near field communication channel. 9. The method according to claim 7 wherein the wireless communication channel used in the step of communicating is one of a Bluetooth and WIFI communication channel. 10. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data comprises information from a receipt from a credit card transaction. 11. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data comprises information from a receipt from one of a debit card and cash card transaction. 12. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data is a ticket. 13. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data is a coupon. 14. (canceled) 15. 
An apparatus that is used in a system for assisting a user to complete a transaction initiated by transmission of an identification code over a first communication channel to a point-of-sale terminal and completion being indicated by receipt of certain transaction data over a second communication channel that is different from the first communication channel that is initiated by a management server that receives the identification code and contains a database indicating correspondence of the identification code and the telephone number of the hand-held mobile device, the apparatus comprising: a hand-held mobile device having the telephone number associated therewith, the hand-held mobile device having: a processor; a secure element that has the identification code stored therein that allows for identification by the management server of the telephone number of the hand-held mobile device, the secure element being adapted to transmit an identification code signal that has the identification code therein over the first communication channel to the point-of-sale terminal without using the processor; a radio transceiver coupled to the processor and adapted to send outgoing voice and data signals and receive incoming voice and data signals over the second communication channel, the incoming and outgoing data signals including the certain transaction data associated with the transaction, wherein the radio transceiver is not used to transmit the identification code signal that has the identification code therein over the first communication channel; and a visual display coupled to the processor and adapted to display some of the certain transaction data upon receipt thereof by the radio transceiver and the processor, wherein the visual display is not used to transmit the identification code signal that has the identification code therein over the first communication channel. 16. (canceled) 17. (canceled) 18. 
The apparatus according to claim 19 wherein the secure element is disposed on an adhesive tag secured externally to the body of the hand-held mobile device. 19. The apparatus according to claim 20 wherein the secure element is disposed on a tag external to the body of the hand-held mobile device. 20. The apparatus according to claim 18, wherein the processor is internally disposed within a body of the hand-held mobile device, and wherein the secure element is external to the body of the hand-held mobile device and wherein the secure element communicates using a wireless communication channel as the first communication channel. 21. The apparatus according to claim 20 wherein the wireless communication channel is one of a Bluetooth and WIFI communication channel. 22. The apparatus according to claim 20 wherein the wireless communication channel is an NFC communication channel. 23. (canceled) 24. (canceled) 25. (canceled) 26. The method according to claim 1 wherein the step of initiating uses an NFC device as the secure element. 27. The method according to claim 6 wherein the step of initiating uses an NFC device as the secure element. 28. The method according to claim 5 wherein the step of initiating uses an NFC device as the secure element. 29. The apparatus according to claim 15 wherein the secure element is an NFC device. 30. The apparatus according to claim 19 wherein the secure element is an NFC device. 31. The apparatus according to claim 18 wherein the secure element is an NFC device. 32. The method according to claim 1 further including the step of receiving other transaction data at the secure element using the first communication channel. 33. 
The method according to claim 1 wherein the processor is implemented as a radio processor coupled to the radio transceiver, and wherein the secure element includes a secure processor, and, after the step of initiating, further including the step of the secure processor communicating transaction signals to the radio processor for usage with a transaction application executed by the radio processor. 34. The method according to claim 7 wherein the processor is implemented as a radio processor coupled to the radio transceiver and is internally disposed within a body of the hand-held mobile device, wherein the secure element includes a secure processor and is disposed within a slot that exists on the body of the hand-held mobile device, and further including the step of the secure processor, after the step of initiating, communicating transaction signals to the radio processor using a wired communication channel for usage with a transaction application executed by the radio processor. 35. The apparatus according to claim 15 wherein the processor is implemented as a radio processor coupled to the radio transceiver, and wherein the secure element includes a secure processor, and, wherein the secure processor communicates transaction signals to the radio processor for usage with a transaction application executed by the radio processor. 36. The apparatus according to claim 19 wherein the processor is implemented as a radio processor coupled to the radio transceiver and is internally disposed within a body of the hand-held mobile device, wherein the secure element includes a secure processor and is disposed within a slot that exists on the body of the hand-held mobile device, and wherein the secure processor communicates transaction signals to the radio processor using a wired communication channel for usage with a transaction application executed by the radio processor.
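Claim 1 hinges on a management server that receives the identification code from the point-of-sale side and, via a database mapping codes to telephone numbers, initiates transmission of the transaction data to the hand-held device over the second channel. A minimal Python sketch of that lookup-and-forward step follows; the class name, method name, and the in-memory dictionary standing in for the database and for the second-channel transmission are all illustrative assumptions.

```python
class ManagementServer:
    """Illustrative sketch (not the patented implementation) of the claimed
    management server that maps identification codes to telephone numbers."""

    def __init__(self, code_to_phone):
        # Database indicating correspondence of the identification code and
        # the telephone number of the hand-held mobile device (claim 1).
        self._code_to_phone = dict(code_to_phone)
        # Record of (phone_number, transaction_data) pairs "sent" over the
        # second communication channel; a real server would transmit them.
        self.sent = []

    def forward_transaction(self, identification_code, transaction_data):
        # Look up the device's telephone number from the identification code
        # received over the first channel, then initiate transmission of the
        # certain transaction data over the second channel.
        phone = self._code_to_phone.get(identification_code)
        if phone is None:
            raise KeyError("unknown identification code")
        self.sent.append((phone, transaction_data))
        return phone
```

In the claimed flow, the transaction data forwarded here is what the hand-held device's radio transceiver receives over the second channel and ultimately displays (a receipt, ticket, or coupon per claims 10 through 13).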
The present invention provides a method and apparatus for completing a transaction using a wireless mobile communication channel and another communication channel, particularly another communication channel that provides for near field radio channels (NFC), as well as other communication channels, such as Bluetooth or WIFI. The present invention also provides a method of completing a transaction in which a management server assists a transaction server and a point of sale terminal in forwarding transaction information to a hand-held mobile device, with the transaction having originated from the hand-held mobile device. There is also provided a hand-held mobile device that wirelessly communicates between a secure element and a radio element that are associated with the hand-held mobile device.1. A method for at least one user to complete a transaction using a hand-held mobile device as part of a system that also includes a point-of-sale terminal, a transaction server, and a management server, at least one of the hand-held mobile devices having associated therewith a telephone number, a processor, a visual display coupled to the processor using a display connection, a secure element that has an identification code stored therein, the secure element being adapted to transmit an identification code signal that has the identification code therein over a first communication channel to any of a plurality of point-of-sale terminals, and a radio transceiver adapted to send outgoing voice and data signals and receive incoming voice and data signals over a second communication channel that is different than the first communication channel, wherein the radio transceiver is coupled to the processor using a wired connection, and wherein a management server that receives the identification code and accesses a database indicating correspondence of the identification code and the telephone number of the hand-held mobile device is adapted to initiate transmission over the second 
communication channel, the method comprising the steps of: initiating the transaction without using the processor, without using the visual display, and without using the radio transceiver, the step of initiating including wirelessly providing the identification code stored on the secure element to any one of the plurality of point-of-sale terminals using the first communication channel, the identification code associated with the user of the hand-held mobile device; receiving certain transaction data associated with the transaction at the second transceiver over the second communication channel; and displaying at least some of the certain transaction data on the visual display associated with the hand-held mobile device. 2. The method according to claim 1 further including a step of transmitting, from the hand-held mobile device directly to another hand-held mobile device, at least a portion of data associated with the certain transaction data. 3. (canceled) 4. (canceled) 5. The method according to claim 6 wherein the tag of the secure element is an adhesive tag secured to an external surface of the body of the hand-held mobile device. 6. The method according to claim 7 wherein the secure element is disposed on a tag secured externally to the body of the hand-held mobile device. 7. The method according to claim 1 wherein the radio processor is internally disposed within a body of the hand-held mobile device, wherein the secure element is external to the body of the hand-held mobile device. 8. The method according to claim 7 wherein the wireless communication channel used in the step of communicating is a near field communication channel. 9. The method according to claim 7 wherein the wireless communication channel used in the step of communicating is one of a Bluetooth and WIFI communication channel. 10. 
The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data comprises information from a receipt from a credit card transaction. 11. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data comprises information from a receipt from one of a debit card and cash card transaction. 12. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data is a ticket. 13. The method according to claim 1 wherein the identification code signal is used in placing an order and the certain transaction data is a coupon. 14. (canceled) 15. An apparatus that is used in a system for assisting a user to complete a transaction initiated by transmission of an identification code over a first communication channel to a point-of-sale terminal and completion being indicated by receipt of certain transaction data over a second communication channel that is different from the first communication channel that is initiated by a management server that receives the identification code and contains a database indicating correspondence of the identification code and the telephone number of the hand-held mobile device, the apparatus comprising: a hand-held mobile device having the telephone number associated therewith, the hand-held mobile device having: a processor; a secure element that has the identification code stored therein that allows for identification by the management server of the telephone number of the hand-held mobile device, the secure element being adapted to transmit an identification code signal that has the identification code therein over the first communication channel to the point-of-sale terminal without using the processor; a radio transceiver coupled to the processor and adapted to send outgoing voice and data signals and receive incoming voice and data signals 
over the second communication channel, the incoming and outgoing data signals including the certain transaction data associated with the transaction, wherein the radio transceiver is not used to transmit the identification code signal that has the identification code therein over the first communication channel; and a visual display coupled to the processor and adapted to display some of the certain transaction data upon receipt thereof by the radio transceiver and the processor, wherein the visual display is not used to transmit the identification code signal that has the identification code therein over the first communication channel. 16. (canceled) 17. (canceled) 18. The apparatus according to claim 19 wherein the secure element is disposed on an adhesive tag secured externally to the body of the hand-held mobile device. 19. The apparatus according to claim 20 wherein the secure element is disposed on a tag external to the body of the hand-held mobile device. 20. The apparatus according to claim 18, wherein the processor is internally disposed within a body of the hand-held mobile device, and wherein the secure element is external to the body of the hand-held mobile device and wherein the secure element communicates using a wireless communication channel as the first communication channel. 21. The apparatus according to claim 20 wherein the wireless communication channel is one of a Bluetooth and WIFI communication channel. 22. The apparatus according to claim 20 wherein the wireless communication channel is an NFC communication channel. 23. (canceled) 24. (canceled) 25. (canceled) 26. The method according to claim 1 wherein the step of initiating uses an NFC device as the secure element. 27. The method according to claim 6 wherein the step of initiating uses an NFC device as the secure element. 28. The method according to claim 5 wherein the step of initiating uses an NFC device as the secure element. 29. 
The apparatus according to claim 15 wherein the secure element is an NFC device. 30. The apparatus according to claim 19 wherein the secure element is an NFC device. 31. The apparatus according to claim 18 wherein the secure element is an NFC device. 32. The method according to claim 1 further including the step of receiving other transaction data at the secure element using the first communication channel. 33. The method according to claim 1 wherein the processor is implemented as a radio processor coupled to the radio transceiver, and wherein the secure element includes a secure processor, and, after the step of initiating, further including the step of the secure processor communicating transaction signals to the radio processor for usage with a transaction application executed by the radio processor. 34. The method according to claim 7 wherein the processor is implemented as a radio processor coupled to the radio transceiver and is internally disposed within a body of the hand-held mobile device, wherein the secure element includes a secure processor and is disposed within a slot that exists on the body of the hand-held mobile device, and further including the step of the secure processor, after the step of initiating, communicating transaction signals to the radio processor using a wired communication channel for usage with a transaction application executed by the radio processor. 35. The apparatus according to claim 15 wherein the processor is implemented as a radio processor coupled to the radio transceiver, and wherein the secure element includes a secure processor, and, wherein the secure processor communicates transaction signals to the radio processor for usage with a transaction application executed by the radio processor. 36. 
The apparatus according to claim 19 wherein the processor is implemented as a radio processor coupled to the radio transceiver and is internally disposed within a body of the hand-held mobile device, wherein the secure element includes a secure processor and is disposed within a slot that exists on the body of the hand-held mobile device, and wherein the secure processor communicates transaction signals to the radio processor using a wired communication channel for usage with a transaction application executed by the radio processor.
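The claims above describe a flow in which a management server receives the secure element's identification code, looks up the corresponding telephone number in a database, and initiates transmission of transaction data to the phone over the second communication channel. The sketch below is a minimal, purely illustrative model of that lookup-and-forward step; the class and method names are hypothetical and are not taken from the patent.

```python
# Illustrative sketch (hypothetical names): a management server that maps
# a secure element's identification code to a telephone number and
# forwards transaction data over the second (cellular) channel.
class ManagementServer:
    def __init__(self, directory):
        # directory: identification code -> telephone number
        self.directory = directory

    def route_transaction(self, id_code, transaction_data, send):
        """Look up the phone number for id_code and, if found, forward
        the transaction data via the supplied send callable."""
        number = self.directory.get(id_code)
        if number is None:
            return False  # unknown secure element: nothing to route
        send(number, transaction_data)
        return True
```

A point-of-sale terminal would supply `id_code` after the tap; `send` stands in for whatever push mechanism delivers data to the handset.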
2,600
9,958
9,958
14,283,111
2,646
In a universal plug module for use in connection with a universal mobile telephone holder comprising a frame for connection to the telephone holder and a plug carrier block, a plug holder insert is provided for installation in the plug carrier block, which plug holder insert has an opening for accommodating, for example, a charger plug, which can be fixed in position, together with the plug holder insert disposed in the plug carrier block, by means of a clamping element.
1. A universal plug module for use in connection with a universal mobile telephone holder, comprising: a frame (1) for connection to the telephone holder, a plug carrier block (2) and a plug holder insert (3) for removable installation in the plug carrier block (2), the plug holder insert (3) having an accommodation opening (31) for the insertion of a charger plug or a data cable plug, and a clamping element (4) for fixing the plug holder insert (3), together with the charger or data cable plug inserted therein, in the plug carrier block (2). 2. The universal plug module according to claim 1, wherein the frame (1) is provided with at least one coupling means (11) for connection to a lower edge area of the mobile telephone holder. 3. The universal plug module according to claim 2, wherein the coupling means (11) is a profile strip which is insertable into a guide groove provided in the lower edge area of the mobile telephone holder for accommodating support legs. 4. The universal plug module according to claim 1, wherein the clamping element (4) is a clamping screw which engages and fixes the plug holder insert (3) together with the plug (5) received therein in the plug carrier block (2). 5. The universal plug module according to claim 1, wherein the plug carrier block (2) is arranged so as to be adjustable by a certain adjustment distance in the forward-backward direction. 6. The universal plug module according to claim 5, wherein the plug carrier block (2) is adjustable in steps in the forward-backward direction by a ratchet mechanism (12) within a certain adjustment distance. 7. 
The universal plug module according to claim 1, wherein the plug carrier block (2) is fixedly arranged on the frame (1) and the plug holder insert (3) has in the forward-backward direction a depth which is sufficient to arrange the accommodation opening (31) for inserting the plug in the forward-backward direction in the proper position corresponding to the respective plug and the associated mobile telephone model. 8. The universal plug module according to claim 1, wherein several plug holder inserts (3) with different accommodation openings (31) are provided which are adapted to accommodate different common plugs.
In a universal plug module for use in connection with a universal mobile telephone holder comprising a frame for connection to the telephone holder and a plug carrier block, a plug holder insert is provided for installation in the plug carrier block, which plug holder insert has an opening for accommodating, for example, a charger plug, which can be fixed in position, together with the plug holder insert disposed in the plug carrier block, by means of a clamping element.1. A universal plug module for use in connection with a universal mobile telephone holder, comprising: a frame (1) for connection to the telephone holder, a plug carrier block (2) and a plug holder insert (3) for removable installation in the plug carrier block (2), the plug holder insert (3) having an accommodation opening (31) for the insertion of a charger plug or a data cable plug, and a clamping element (4) for fixing the plug holder insert (3), together with the charger or data cable plug inserted therein, in the plug carrier block (2). 2. The universal plug module according to claim 1, wherein the frame (1) is provided with at least one coupling means (11) for connection to a lower edge area of the mobile telephone holder. 3. The universal plug module according to claim 2, wherein the coupling means (11) is a profile strip which is insertable into a guide groove provided in the lower edge area of the mobile telephone holder for accommodating support legs. 4. The universal plug module according to claim 1, wherein the clamping element (4) is a clamping screw which engages and fixes the plug holder insert (3) together with the plug (5) received therein in the plug carrier block (2). 5. The universal plug module according to claim 1, wherein the plug carrier block (2) is arranged so as to be adjustable by a certain adjustment distance in the forward-backward direction. 6. 
The universal plug module according to claim 5, wherein the plug carrier block (2) is adjustable in steps in the forward-backward direction by a ratchet mechanism (12) within a certain adjustment distance. 7. The universal plug module according to claim 1, wherein the plug carrier block (2) is fixedly arranged on the frame (1) and the plug holder insert (3) has in the forward-backward direction a depth which is sufficient to arrange the accommodation opening (31) for inserting the plug in the forward-backward direction in the proper position corresponding to the respective plug and the associated mobile telephone model. 8. The universal plug module according to claim 1, wherein several plug holder inserts (3) with different accommodation openings (31) are provided which are adapted to accommodate different common plugs.
2,600
9,959
9,959
14,924,312
2,683
In a competitive athletic event, the disclosed signaling system provides for visual signaling of participants, for example, a lane-specific visible indication to begin a race. A sequence of light colors is used to signal the start of a race, and is believed advantageous over an audible start signal, particularly for athletes who are hearing impaired.
1. An athletic competition signaling apparatus for the hearing impaired, comprising: a translucent housing, said housing including an attachment component coupling the housing to a structure, and at least three independent signaling elements operatively associated with said housing, wherein the three signaling elements produce a visual output viewable from a plurality of positions including both a starting position and a staging position. 2. The signaling apparatus according to claim 1, further including a remote control circuit to independently energize each of the signaling elements. 3. The signaling apparatus according to claim 2, wherein a plurality of similarly configured housings are employed for a plurality of staging and starting positions in a competition. 4. The signaling apparatus according to claim 3, wherein the athletes are swimmers and the signaling elements produce a visual output viewable from a plurality of starting positions, and where the plurality of housings are operatively associated with separate swim starting blocks and are employed to stage and start a swimming competition. 5. The signaling apparatus according to claim 1 wherein each signaling element emits light of a distinct color. 6. The signaling apparatus according to claim 1 wherein at least a portion of said translucent housing is viewable about the entire periphery of the housing to provide 360-degree visibility. 7. The signaling apparatus according to claim 1 wherein each of the signaling elements designates one operation in staging and starting of an athletic competition. 8. 
An athletic competition signaling apparatus, comprising: a translucent housing, said housing including an attachment component coupling the housing to a support structure, and a plurality of signaling elements operatively associated within said housing, wherein the signaling elements each produce a distinguishable visual output viewable from a plurality of positions including both a starting position and a staging position; and control circuitry, associated with the signaling elements, said circuitry controlling the color and on/off state for the signaling elements. 9. The signaling apparatus according to claim 8 further including a source of power. 10. The signaling apparatus according to claim 8 wherein said support structure includes a base with a battery that rests on a generally horizontal surface, and includes circuitry for receiving signals to control the signaling elements. 11. The signaling apparatus according to claim 8 wherein said signaling elements include a ribbon of light-emitting diodes wrapped about a cylindrical core and inserted within a translucent outer tube, said tube including caps applied to the ends thereof to hold the light-emitting diodes inside and to isolate the light-emitting diodes from environmental exposure. 12. The signaling apparatus according to claim 8 further including a wireless transmitter and receiver, the receiver being operatively connected to the signaling elements so as to produce a desired visual signal in response to a user depressing one or more buttons on the wireless transmitter. 13. 
An athletic competition signaling apparatus, comprising: a base resting on a surface adjacent the starting position of an athletic competition, said base including a battery compartment therein for holding a battery, and an L-shaped attachment arm extending from said base; a rod, adjustably attached to said attachment arm; a translucent light housing, said light housing including a linear tape with a plurality of light emitting diodes of at least two different and individually activated colors sequentially spaced along the linear tape, said linear tape spirally wrapped about a core and inserted within a translucent hollow tube, said tube also including at least one end cap on an end thereof, and a second attachment component operatively attached between the light housing and a support structure such as the rod; and control circuitry, operatively connected to said battery and the signaling elements, said circuitry controlling, in response to a plurality of external signals, the on/off state for the signaling elements; wherein the light emitting diodes produce a visual output viewable from a plurality of different radial positions about the perimeter of the light housing. 14. The apparatus according to claim 13, wherein said control circuitry includes a microprocessor for controlling the on/off state of the light-emitting diodes so as to produce a predefined sequence of states.
In a competitive athletic event, the disclosed signaling system provides for visual signaling of participants, for example, a lane-specific visible indication to begin a race. A sequence of light colors is used to signal the start of a race, and is believed advantageous over an audible start signal, particularly for athletes who are hearing impaired.1. An athletic competition signaling apparatus for the hearing impaired, comprising: a translucent housing, said housing including an attachment component coupling the housing to a structure, and at least three independent signaling elements operatively associated with said housing, wherein the three signaling elements produce a visual output viewable from a plurality of positions including both a starting position and a staging position. 2. The signaling apparatus according to claim 1, further including a remote control circuit to independently energize each of the signaling elements. 3. The signaling apparatus according to claim 2, wherein a plurality of similarly configured housings are employed for a plurality of staging and starting positions in a competition. 4. The signaling apparatus according to claim 3, wherein the athletes are swimmers and the signaling elements produce a visual output viewable from a plurality of starting positions, and where the plurality of housings are operatively associated with separate swim starting blocks and are employed to stage and start a swimming competition. 5. The signaling apparatus according to claim 1 wherein each signaling element emits light of a distinct color. 6. The signaling apparatus according to claim 1 wherein at least a portion of said translucent housing is viewable about the entire periphery of the housing to provide 360-degree visibility. 7. The signaling apparatus according to claim 1 wherein each of the signaling elements designates one operation in staging and starting of an athletic competition. 8. 
An athletic competition signaling apparatus, comprising: a translucent housing, said housing including an attachment component coupling the housing to a support structure, and a plurality of signaling elements operatively associated within said housing, wherein the signaling elements each produce a distinguishable visual output viewable from a plurality of positions including both a starting position and a staging position; and control circuitry, associated with the signaling elements, said circuitry controlling the color and on/off state for the signaling elements. 9. The signaling apparatus according to claim 8 further including a source of power. 10. The signaling apparatus according to claim 8 wherein said support structure includes a base with a battery that rests on a generally horizontal surface, and includes circuitry for receiving signals to control the signaling elements. 11. The signaling apparatus according to claim 8 wherein said signaling elements include a ribbon of light-emitting diodes wrapped about a cylindrical core and inserted within a translucent outer tube, said tube including caps applied to the ends thereof to hold the light-emitting diodes inside and to isolate the light-emitting diodes from environmental exposure. 12. The signaling apparatus according to claim 8 further including a wireless transmitter and receiver, the receiver being operatively connected to the signaling elements so as to produce a desired visual signal in response to a user depressing one or more buttons on the wireless transmitter. 13. 
An athletic competition signaling apparatus, comprising: a base resting on a surface adjacent the starting position of an athletic competition, said base including a battery compartment therein for holding a battery, and an L-shaped attachment arm extending from said base; a rod, adjustably attached to said attachment arm; a translucent light housing, said light housing including a linear tape with a plurality of light emitting diodes of at least two different and individually activated colors sequentially spaced along the linear tape, said linear tape spirally wrapped about a core and inserted within a translucent hollow tube, said tube also including at least one end cap on an end thereof, and a second attachment component operatively attached between the light housing and a support structure such as the rod; and control circuitry, operatively connected to said battery and the signaling elements, said circuitry controlling, in response to a plurality of external signals, the on/off state for the signaling elements; wherein the light emitting diodes produce a visual output viewable from a plurality of different radial positions about the perimeter of the light housing. 14. The apparatus according to claim 13, wherein said control circuitry includes a microprocessor for controlling the on/off state of the light-emitting diodes so as to produce a predefined sequence of states.
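Claim 14 of this record describes a microprocessor stepping the light-emitting diodes through "a predefined sequence of states" for staging and starting a race. The sketch below is a minimal, hypothetical illustration of such a color sequence; the specific colors, hold times, and function names are assumptions for the example, not details taken from the patent.

```python
import time

# Hypothetical staging/start sequence: (color, hold seconds).
# The patent only specifies a predefined sequence of light colors;
# these particular colors and timings are illustrative.
START_SEQUENCE = [
    ("red", 2.0),     # staging: take your marks
    ("yellow", 1.0),  # set
    ("green", 0.0),   # start
]

def run_start_sequence(set_color, sequence=START_SEQUENCE, sleep=time.sleep):
    """Drive the signaling elements through each (color, hold) step
    and return the colors shown, in order."""
    shown = []
    for color, hold in sequence:
        set_color(color)  # set_color abstracts the LED driver
        shown.append(color)
        if hold:
            sleep(hold)
    return shown
```

Injecting `sleep` makes the sequence testable without real delays; a hardware build would pass the actual LED-driver callback as `set_color`.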
2,600
9,960
9,960
15,244,918
2,627
An electronic device includes a first housing portion and a second housing portion. A reflective surface defines a major surface of the second housing portion. A signal emitter and a signal receiver are supported by the first housing portion. The signal emitter delivers signals to the reflective surface and the signal receiver receives reflections of the signals to determine one or more of location of an object or whether the object is touching the reflective surface. One or more haptic devices are supported by the second housing portion. One or more processors, operable with the signal receiver and the one or more haptic devices, actuate at least one haptic device when the signal receiver detects the object touching the reflective surface.
1. A device, comprising: a first housing portion and a second housing portion; a reflective surface defining a substantially planar major surface of the second housing portion; a signal emitter and a signal receiver, supported by the first housing portion, the signal emitter delivering non-visible signals to the reflective surface and the signal receiver receiving reflections of the non-visible signals; one or more haptic devices, supported by the second housing portion; and one or more processors, operable with the signal receiver and the one or more haptic devices and actuating at least one haptic device when the signal receiver detects an object touching the reflective surface from the reflections. 2. The device of claim 1, the signal receiver comprising an infrared imager and capturing one or more images of the reflective surface, the one or more processors determining a location of the object along the reflective surface from the one or more images. 3. The device of claim 2, the one or more haptic devices comprising a plurality of haptic devices, the one or more processors selecting a subset of the plurality of haptic devices as a function of the location of the object. 4. The device of claim 3, the subset comprising at least two haptic devices, the one or more processors applying drive signals to the at least two haptic devices as another function of a distance between the location of the object and the at least two haptic devices. 5. The device of claim 4, the one or more processors applying a greater drive signal to a haptic device that is farther from the location than to another haptic device that is closer to the location. 6. The device of claim 1, the non-visible signals comprising infrared signals. 7. The device of claim 2, further comprising a projector supported by the first housing portion, the projector delivering images to the reflective surface. 8. The device of claim 7, the images defining one or more user actuation targets. 9. 
The device of claim 8, the one or more processors identifying the object touching the reflective surface as user input when the location coincides with a user actuation target. 10. The device of claim 8, the one or more processors actuating the at least one haptic device only when the location coincides with a user actuation target. 11. The device of claim 1, further comprising: a hinge coupling the first housing portion to the second housing portion; and a reflector, coupled to the first housing portion and movable relative to the first housing portion between at least a first position and a second position; the reflector, when in the second position, redirecting received signals to the reflective surface when the first housing portion is radially displaced from the second housing portion about the hinge. 12. The device of claim 11, further comprising an adjuster coupled to the reflector, wherein movement of the adjuster moves the reflector. 13. The device of claim 12, the reflector operable with the hinge such that when the first housing portion pivots about the hinge by a radial displacement amount, the reflector pivots relative to the first housing portion in an amount proportional to the radial displacement amount such that a surface of the reflector maintains a line of sight relationship with the reflective surface. 14. The device of claim 7, wherein the signal emitter and the signal receiver are selectively detachable from the first housing portion. 15. The device of claim 1, the substantially planar major surface defining a continuous surface. 16. 
A device, comprising: a first housing portion and a second housing portion; a reflective surface defining a substantially planar major surface of the second housing portion; a signal emitter and a signal receiver, supported by the first housing portion, the signal emitter delivering infrared light to the reflective surface and the signal receiver receiving reflections of the infrared light; a projector supported by the first housing portion and delivering images to the reflective surface defining a user interface; one or more haptic devices, supported by the second housing portion; and one or more processors, operable with the signal receiver and the one or more haptic devices, the one or more processors actuating at least one haptic device when the signal receiver detects, from the reflections, an object interacting with a user actuation target projected upon the reflective surface. 17. The device of claim 16, the signal receiver comprising an infrared imager, and capturing one or more images of the reflective surface, the one or more processors determining a location of the object along the reflective surface from the one or more images. 18. The device of claim 16, further comprising a hinge coupling the first housing portion to the second housing portion and a reflector supported by the first housing portion and movable relative to the first housing portion between at least a first position and a second position to maintain a line of sight relationship with the reflective surface as the first housing portion pivots about the hinge relative to the second housing portion. 19. 
A method, comprising: projecting, with a projector, images defining a user interface along a reflective surface of a device; receiving, with a signal receiver, reflections of non-visible light from the reflective surface; determining, with one or more processors, an object interacting with a user actuation target of the user interface; and actuating, with the one or more processors in response to the object interacting with the user actuation target, at least one haptic device to deliver haptic feedback to the reflective surface. 20. The method of claim 19, further comprising: capturing, when the signal receiver comprises an imager, one or more images of the reflective surface to determine a location of the object along the reflective surface; and selecting, with the one or more processors, one or more haptic devices for actuation as a function of the location.
An electronic device includes a first housing portion and a second housing portion. A reflective surface defines a major surface of the second housing portion. A signal emitter and a signal receiver are supported by the first housing portion. The signal emitter delivers signals to the reflective surface and the signal receiver receives reflections of the signals to determine one or more of location of an object or whether the object is touching the reflective surface. One or more haptic devices are supported by the second housing portion. One or more processors, operable with the signal receiver and the one or more haptic devices, actuate at least one haptic device when the signal receiver detects the object touching the reflective surface.1. A device, comprising: a first housing portion and a second housing portion; a reflective surface defining a substantially planar major surface of the second housing portion; a signal emitter and a signal receiver, supported by the first housing portion, the signal emitter delivering non-visible signals to the reflective surface and the signal receiver receiving reflections of the non-visible signals; one or more haptic devices, supported by the second housing portion; and one or more processors, operable with the signal receiver and the one or more haptic devices and actuating at least one haptic device when the signal receiver detects an object touching the reflective surface from the reflections. 2. The device of claim 1, the signal receiver comprising an infrared imager and capturing one or more images of the reflective surface, the one or more processors determining a location of the object along the reflective surface from the one or more images. 3. The device of claim 2, the one or more haptic devices comprising a plurality of haptic devices, the one or more processors selecting a subset of the plurality of haptic devices as a function of the location of the object. 4. 
The device of claim 3, the subset comprising at least two haptic devices, the one or more processors applying drive signals to the at least two haptic devices as another function of a distance between the location of the object and the at least two haptic devices. 5. The device of claim 4, the one or more processors applying a greater drive signal to a haptic device that is farther from the location than to another haptic device that is closer to the location. 6. The device of claim 1, the non-visible signals comprising infrared signals. 7. The device of claim 2, further comprising a projector supported by the first housing portion, the projector delivering images to the reflective surface. 8. The device of claim 7, the images defining one or more user actuation targets. 9. The device of claim 8, the one or more processors identifying the object touching the reflective surface as user input when the location coincides with a user actuation target. 10. The device of claim 8, the one or more processors actuating the at least one haptic device only when the location coincides with a user actuation target. 11. The device of claim 1, further comprising: a hinge coupling the first housing portion to the second housing portion; and a reflector, coupled to the first housing portion and movable relative to the first housing portion between at least a first position and a second position; the reflector, when in the second position, redirecting received signals to the reflective surface when the first housing portion is radially displaced from the second housing portion about the hinge. 12. The device of claim 11, further comprising an adjuster coupled to the reflector, wherein movement of the adjuster moves the reflector. 13. 
The device of claim 12, the reflector operable with the hinge such that when the first housing portion pivots about the hinge by a radial displacement amount, the reflector pivots relative to the first housing portion in an amount proportional to the radial displacement amount such that a surface of the reflector maintains a line of sight relationship with the reflective surface. 14. The device of claim 7, wherein the signal emitter and the signal receiver are selectively detachable from the first housing portion. 15. The device of claim 1, the substantially planar major surface defining a continuous surface. 16. A device, comprising: a first housing portion and a second housing portion; a reflective surface defining a substantially planar major surface of the second housing portion; a signal emitter and a signal receiver, supported by the first housing portion, the signal emitter delivering infrared light to the reflective surface and the signal receiver receiving reflections of the infrared light; a projector supported by the first housing portion and delivering images to the reflective surface defining a user interface; one or more haptic devices, supported by the second housing portion; and one or more processors, operable with the signal receiver and the one or more haptic devices, the one or more processors actuating at least one haptic device when the signal receiver detects, from the reflections, an object interacting with a user actuation target projected upon the reflective surface. 17. The device of claim 16, the signal receiver comprising an infrared imager, and capturing one or more images of the reflective surface, the one or more processors determining a location of the object along the reflective surface from the one or more images. 18. 
The device of claim 16, further comprising a hinge coupling the first housing portion to the second housing portion and a reflector supported by the first housing portion and movable relative to the first housing portion between at least a first position and a second position to maintain a line of sight relationship with the reflective surface as the first housing portion pivots about the hinge relative to the second housing portion. 19. A method, comprising: projecting, with a projector, images defining a user interface along a reflective surface of a device; receiving, with a signal receiver, reflections of non-visible light from the reflective surface; determining, with one or more processors, an object interacting with a user actuation target of the user interface; and actuating, with the one or more processors in response to the object interacting with the user actuation target, at least one haptic device to deliver haptic feedback to the reflective surface. 20. The method of claim 19, further comprising: capturing, when the signal receiver comprises an imager, one or more images of the reflective surface to determine a location of the object along the reflective surface; and selecting, with the one or more processors, one or more haptic devices for actuation as a function of the location.
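The haptic-device claims above (claims 3-5) describe selecting a subset of haptic devices near the touch location and driving the farther device of the subset harder than the nearer one. The following sketch illustrates that selection-and-scaling logic; all function and parameter names are illustrative assumptions, not taken from the patent.

```python
import math

def select_and_drive(touch, devices, subset_size=2, base_drive=1.0):
    """Pick the haptic devices nearest a touch location and assign drive
    levels that grow with distance, so the farther device in the subset
    receives the greater drive signal (as in claims 4-5).
    `touch` is an (x, y) point; `devices` is a list of (x, y) positions."""
    # Distance from the touch point to each device, keyed by device index.
    dists = sorted(
        (math.dist(touch, pos), idx) for idx, pos in enumerate(devices)
    )
    subset = dists[:subset_size]
    # Scale drive in proportion to distance: farther device, greater drive.
    return {idx: base_drive * (1.0 + d) for d, idx in subset}
```

With two devices at (0, 0) and (10, 0) and a touch at (1, 0), the device at (10, 0) is farther and therefore receives the larger drive value.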
2,600
9,961
9,961
14,911,239
2,653
Active Noise Cancellation (ANC) systems and methods that reduce latency to improve performance. In certain embodiments the systems sample a noise signal using a sample period to create a stream of digital signal data that is representative of the noise signal. A data transport layer carries the digital signal data to a signal processor. The transport layer temporally organizes the digital signal data to place the digital signal data within an initial phase of a sample period. The remaining phase of the sample period is set to a duration that allows the signal processor to process the digital signal data carried in the initial phase and to output the processed data during the same sample period. In this way, the processing of data occurs within one sample period and the latency is reduced and predictable.
1. A noise cancellation system, comprising one or more analog to digital converters capable of sampling a noise signal and generating a stream of digital data at a higher rate than a sample period, a digital transport layer coupled to the one or more of the analog to digital converters for organizing the stream of digital data to place the digital data in an initial phase of the sample period, a signal processing filter capable of receiving the digital data carried in the initial phase of the sample period and generating processed signals during the sample period in which the digital data was received, and a digital to analog converter for converting the processed signals to an analog form. 2. The noise cancellation system of claim 1, wherein the signal processing filter further comprises a sequencing processor capable of arranging the processed signals in an output stream to have the digital transport layer carry selected processed signals to the digital to analog converters ahead of other processed signals. 3. The noise cancellation system of claim 1, wherein the digital transport layer extends between the analog to digital converters and the digital to analog converters. 4. The noise cancellation system of claim 1, wherein the digital transport layer comprises a multi-channel digital audio transportation protocol. 5. The noise cancellation system of claim 4, wherein the multi-channel digital audio transportation protocol is selected from one of I2S, A2B, TDM, parallel data, LVDS, SPDIF, SoundWire, BlueTooth, and byte level transport. 6. The noise cancellation system of claim 1, wherein the analog to digital converter includes a power control process for powering the analog to digital converter for a period of time that is less than the sample rate. 7. The noise cancellation system of claim 1, further comprising a microphone pre-amplifier circuit coupled to the analog to digital converter. 8. 
The noise cancellation system of claim 1, further comprising a speaker amplifier coupled to the digital to analog converter. 9. The noise cancellation system of claim 1, wherein the signal processing filter includes more than one digital filter. 10. The noise cancellation system of claim 1, wherein the signal processing filter start is skewed to lower the million instructions per second (MIPS) required to generate the noise cancellation signals. 11. The noise cancellation system of claim 1, wherein the ADC and DAC share a digital transport layer. 12. The noise cancellation system of claim 1, wherein the digital transport layer is capable of organizing the digital data in a sample period to have one or more contiguous processing phases to provide a processing phase having a duration set to be less than or equal to a time determined for the signal processor to generate the processed signals. 13. The noise cancellation system of claim 12, wherein the one or more contiguous processing phases have a duration selected to support more than one noise cancellation system. 14. The noise cancellation system of claim 13, wherein the more than one contiguous processing phases have a duration selected to allow the generation of noise cancellation signals for noise signals from independent input sources. 15. The noise cancellation system of claim 13, wherein the more than one contiguous processing phases have a duration selected to allow the generation of noise cancellation signals generated by independent output sources for cancelling noise from one input source. 16. 
A method for noise cancellation, comprising oversampling an analog noise signal above a sample period to generate a stream of digital data, providing a digital transport layer for organizing the stream of digital data to place the digital data in an initial phase of the sample period, carrying the digital data to a signal processing filter capable of receiving the digital data carried in the initial phase of the sample period and generating processed signals during the sample period in which the digital data was received, and converting the processed signals to an analog form. 17. The method of claim 16, further comprising sequencing the processed signals in an output stream to have the digital transport layer carry selected processed signals to the digital to analog converters ahead of other processed signals. 18. The method of claim 16, further comprising powering the analog to digital converters for a period of time that is less than the sample period. 19. The method of claim 16, further comprising organizing the digital data in a sample period to have one or more contiguous processing phases to provide a processing phase having a duration set to be less than or equal to a time determined for the signal processor to generate the processed signals. 20. The method of claim 19, wherein the one or more contiguous processing phases have a duration selected to support more than one noise cancellation system.
Active Noise Cancellation (ANC) systems and methods that reduce latency to improve performance. In certain embodiments the systems sample a noise signal using a sample period to create a stream of digital signal data that is representative of the noise signal. A data transport layer carries the digital signal data to a signal processor. The transport layer temporally organizes the digital signal data to place the digital signal data within an initial phase of a sample period. The remaining phase of the sample period is set to a duration that allows the signal processor to process the digital signal data carried in the initial phase and to output the processed data during the same sample period. In this way, the processing of data occurs within one sample period and the latency is reduced and predictable.1. A noise cancellation system, comprising one or more analog to digital converters capable of sampling a noise signal and generating a stream of digital data at a higher rate than a sample period, a digital transport layer coupled to the one or more of the analog to digital converters for organizing the stream of digital data to place the digital data in an initial phase of the sample period, a signal processing filter capable of receiving the digital data carried in the initial phase of the sample period and generating processed signals during the sample period in which the digital data was received, and a digital to analog converter for converting the processed signals to an analog form. 2. The noise cancellation system of claim 1, wherein the signal processing filter further comprises a sequencing processor capable of arranging the processed signals in an output stream to have the digital transport layer carry selected processed signals to the digital to analog converters ahead of other processed signals. 3. 
The noise cancellation system of claim 1, wherein the digital transport layer extends between the analog to digital converters and the digital to analog converters. 4. The noise cancellation system of claim 1, wherein the digital transport layer comprises a multi-channel digital audio transportation protocol. 5. The noise cancellation system of claim 4, wherein the multi-channel digital audio transportation protocol is selected from one of I2S, A2B, TDM, parallel data, LVDS, SPDIF, SoundWire, Bluetooth, and byte level transport. 6. The noise cancellation system of claim 1, wherein the analog to digital converter includes a power control process for powering the analog to digital converter for a period of time that is less than the sample period. 7. The noise cancellation system of claim 1, further comprising a microphone pre-amplifier circuit coupled to the analog to digital converter. 8. The noise cancellation system of claim 1, further comprising a speaker amplifier coupled to the digital to analog converter. 9. The noise cancellation system of claim 1, wherein the signal processing filter includes more than one digital filter. 10. The noise cancellation system of claim 1, wherein the signal processing filter start is skewed to lower the million instructions per second (MIPS) required to generate the noise cancellation signals. 11. The noise cancellation system of claim 1, wherein the ADC and DAC share a digital transport layer. 12. The noise cancellation system of claim 1, wherein the digital transport layer is capable of organizing the digital data in a sample period to have one or more contiguous processing phases to provide a processing phase having a duration set to be less than or equal to a time determined for the signal processor to generate the processed signals. 13. The noise cancellation system of claim 12, wherein the one or more contiguous processing phases have a duration selected to support more than one noise cancellation system. 14. 
The noise cancellation system of claim 13, wherein the more than one contiguous processing phases have a duration selected to allow the generation of noise cancellation signals for noise signals from independent input sources. 15. The noise cancellation system of claim 13, wherein the more than one contiguous processing phases have a duration selected to allow the generation of noise cancellation signals generated by independent output sources for cancelling noise from one input source. 16. A method for noise cancellation, comprising oversampling an analog noise signal above a sample period to generate a stream of digital data, providing a digital transport layer for organizing the stream of digital data to place the digital data in an initial phase of the sample period, carrying the digital data to a signal processing filter capable of receiving the digital data carried in the initial phase of the sample period and generating processed signals during the sample period in which the digital data was received, and converting the processed signals to an analog form. 17. The method of claim 16, further comprising sequencing the processed signals in an output stream to have the digital transport layer carry selected processed signals to the digital to analog converters ahead of other processed signals. 18. The method of claim 16, further comprising powering the analog to digital converters for a period of time that is less than the sample period. 19. The method of claim 16, further comprising organizing the digital data in a sample period to have one or more contiguous processing phases to provide a processing phase having a duration set to be less than or equal to a time determined for the signal processor to generate the processed signals. 20. The method of claim 19, wherein the one or more contiguous processing phases have a duration selected to support more than one noise cancellation system.
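The ANC record above turns on a simple timing budget: the transport layer packs the digital data into an initial phase of the sample period, and the signal processing filter must finish within the remaining phase so the processed output leaves during the same sample period. A minimal sketch of that budget check, with illustrative names and units not taken from the patent:

```python
def phase_budget(sample_rate_hz, initial_phase_s, processing_time_s):
    """Check the single-sample-period latency budget from the abstract:
    the digital data occupies an initial phase of the sample period, and
    processing must fit within the remaining phase of the same period.
    Returns (sample period, remaining phase, whether processing fits)."""
    sample_period_s = 1.0 / sample_rate_hz      # one sample period
    remaining_phase_s = sample_period_s - initial_phase_s
    fits = processing_time_s <= remaining_phase_s
    return sample_period_s, remaining_phase_s, fits
```

For example, at a 48 kHz sample rate the period is about 20.8 microseconds; with a 5-microsecond initial phase, any filter that completes in the remaining ~15.8 microseconds keeps the end-to-end latency at one sample period, which is what makes the latency both reduced and predictable.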
2,600
9,962
9,962
15,371,633
2,647
Apparatus for constructing a digital telephone message including a message defining unit, configured for allowing a sender to define a message for sending to a recipient, and a response defining unit, configured for allowing the sender to predefine a recipient response, and to include the predefined recipient response in the message for activation at the recipient. Apparatus for receiving a digital telephone message, the message including an activatable sender-defined response, the apparatus including a receiving unit for receiving the message, a notification unit for notifying a recipient of the arrival of the message, and a response activation unit for displaying the sender-defined response, and associating the sender-defined response with a user action for providing user input to send the response. Related apparatus and methods are also described.
1-74. (canceled) 75. A method for providing status-based responses to digital messages, comprising: constructing a digital message, the digital message including at least one predetermined activatable recipient response; sending the constructed digital message to a recipient device; determining whether each of the at least one activatable recipient response of the sent digital message has been activated; determining a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and displaying the determined status. 76. The method of claim 75, wherein each status is at least one of: that no response has been received, and that a response has been received. 77. The method of claim 75, further comprising: determining, based on the determined status, whether a response has been received from the recipient device within a predetermined period of time. 78. The method of claim 77, further comprising: producing an alert, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 79. The method of claim 77, further comprising: resending the digital message, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 80. The method of claim 77, further comprising: determining an amount of time that has passed since the digital message was sent, when it is determined that the response has not been received from the recipient device within the predetermined period of time; and displaying the determined amount of time. 81. 
The method of claim 75, further comprising: sending the constructed digital message to at least one other recipient device; determining, for each other recipient device, whether each activatable recipient response of the sent digital message has been activated; determining, for each other recipient device, a status of a response to the sent digital message from the other recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated at the other recipient device; and displaying the determined status of each other recipient device. 82. The method of claim 81, further comprising: maintaining, based on the determined statuses, a list of recipients, wherein the list indicates the status of the response from the recipient device of each recipient. 83. The method of claim 75, further comprising: tracking the sent digital message, wherein the tracking includes receiving the response to the digital message, wherein determining whether each activatable recipient response has been activated is based on the received response. 84. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process for providing status-based responses to digital messages, the process comprising: constructing a digital message, the digital message including at least one predetermined activatable recipient response; sending the constructed digital message to a recipient device; determining whether each of the at least one activatable recipient response of the sent digital message has been activated; determining a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and displaying the determined status. 85. 
A user terminal for providing status-based responses to digital messages, comprising: a display; a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the user terminal to: construct a digital message, the digital message including at least one predetermined activatable recipient response; send the constructed digital message to a recipient device; determine whether each of the at least one activatable recipient response of the sent digital message has been activated; determine a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and display, via the display, the determined status. 86. The user terminal of claim 85, wherein each status is at least one of: that no response has been received, and that a response has been received. 87. The user terminal of claim 86, wherein the user terminal is further configured to: determine, based on the determined status, whether a response has been received from the recipient device within a predetermined period of time. 88. The user terminal of claim 87, wherein the user terminal is further configured to: produce an alert, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 89. The user terminal of claim 87, wherein the user terminal is further configured to: resend the digital message, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 90. 
The user terminal of claim 87, wherein the user terminal is further configured to: determine an amount of time that has passed since the digital message was sent, when it is determined that the response has not been received from the recipient device within the predetermined period of time; and display, via the display, the determined amount of time. 91. The user terminal of claim 85, wherein the user terminal is further configured to: send the constructed digital message to at least one other recipient device; determine, for each other recipient device, whether each activatable recipient response of the sent digital message has been activated; determine, for each other recipient device, a status of a response to the sent digital message from the other recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated at the other recipient device; and display, via the display, the determined status of each other recipient device. 92. The user terminal of claim 91, wherein the user terminal is further configured to: maintain, based on the determined statuses, a list of recipients, wherein the list indicates the status of the response from the recipient device of each recipient. 93. The user terminal of claim 85, wherein the user terminal is further configured to: track the sent digital message, wherein the tracking includes receiving the response to the digital message, wherein determining whether each activatable recipient response has been activated is based on the received response. 94. The method of claim 75, wherein the at least one predetermined activatable recipient response includes at least a reception flag and a non-reception flag, wherein the status is determined further based on a selection of the reception flag or the non-reception flag, wherein the displayed status includes an icon indicating the selected flag. 95. 
The user terminal of claim 85, wherein the at least one predetermined activatable recipient response includes at least a reception flag and a non-reception flag, wherein the status is determined further based on a selection of the reception flag or the non-reception flag, wherein the displayed status includes an icon indicating the selected flag.
Apparatus for constructing a digital telephone message including a message defining unit, configured for allowing a sender to define a message for sending to a recipient, and a response defining unit, configured for allowing the sender to predefine a recipient response, and to include the predefined recipient response in the message for activation at the recipient. Apparatus for receiving a digital telephone message, the message including an activatable sender-defined response, the apparatus including a receiving unit for receiving the message, a notification unit for notifying a recipient of the arrival of the message, and a response activation unit for displaying the sender-defined response, and associating the sender-defined response with a user action for providing user input to send the response. Related apparatus and methods are also described.1-74. (canceled) 75. A method for providing status-based responses to digital messages, comprising: constructing a digital message, the digital message including at least one predetermined activatable recipient response; sending the constructed digital message to a recipient device; determining whether each of the at least one activatable recipient response of the sent digital message has been activated; determining a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and displaying the determined status. 76. The method of claim 75, wherein each status is at least one of: that no response has been received, and that a response has been received. 77. The method of claim 75, further comprising: determining, based on the determined status, whether a response has been received from the recipient device within a predetermined period of time. 78. 
The method of claim 77, further comprising: producing an alert, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 79. The method of claim 77, further comprising: resending the digital message, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 80. The method of claim 77, further comprising: determining an amount of time that has passed since the digital message was sent, when it is determined that the response has not been received from the recipient device within the predetermined period of time; and displaying the determined amount of time. 81. The method of claim 75, further comprising: sending the constructed digital message to at least one other recipient device; determining, for each other recipient device, whether each activatable recipient response of the sent digital message has been activated; determining, for each other recipient device, a status of a response to the sent digital message from the other recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated at the other recipient device; and displaying the determined status of each other recipient device. 82. The method of claim 81, further comprising: maintaining, based on the determined statuses, a list of recipients, wherein the list indicates the status of the response from the recipient device of each recipient. 83. The method of claim 75, further comprising: tracking the sent digital message, wherein the tracking includes receiving the response to the digital message, wherein determining whether each activatable recipient response has been activated is based on the received response. 84. 
A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process for providing status-based responses to digital messages, the process comprising: constructing a digital message, the digital message including at least one predetermined activatable recipient response; sending the constructed digital message to a recipient device; determining whether each of the at least one activatable recipient response of the sent digital message has been activated; determining a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and displaying the determined status. 85. A user terminal for providing status-based responses to digital messages, comprising: a display; a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the user terminal to: construct a digital message, the digital message including at least one predetermined activatable recipient response; send the constructed digital message to a recipient device; determine whether each of the at least one activatable recipient response of the sent digital message has been activated; determine a status of a response to the sent digital message from the recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated; and display, via the display, the determined status. 86. The user terminal of claim 85, wherein each status is at least one of: that no response has been received, and that a response has been received. 87. 
The user terminal of claim 86, wherein the user terminal is further configured to: determine, based on the determined status, whether a response has been received from the recipient device within a predetermined period of time. 88. The user terminal of claim 87, wherein the user terminal is further configured to: produce an alert, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 89. The user terminal of claim 87, wherein the user terminal is further configured to: resend the digital message, when it is determined that the response has not been received from the recipient device within the predetermined period of time. 90. The user terminal of claim 87, wherein the user terminal is further configured to: determine an amount of time that has passed since the digital message was sent, when it is determined that the response has not been received from the recipient device within the predetermined period of time; and display, via the display, the determined amount of time. 91. The user terminal of claim 85, wherein the user terminal is further configured to: send the constructed digital message to at least one other recipient device; determine, for each other recipient device, whether each activatable recipient response of the sent digital message has been activated; determine, for each other recipient device, a status of a response to the sent digital message from the other recipient device, wherein the status of the response is determined based on whether any of the at least one activatable recipient response has been activated at the other recipient device; and display, via the display, the determined status of each other recipient device. 92. The user terminal of claim 91, wherein the user terminal is further configured to: maintain, based on the determined statuses, a list of recipients, wherein the list indicates the status of the response from the recipient device of each recipient. 
93. The user terminal of claim 85, wherein the user terminal is further configured to: track the sent digital message, wherein the tracking includes receiving the response to the digital message, wherein determining whether each activatable recipient response has been activated is based on the received response. 94. The method of claim 75, wherein the at least one predetermined activatable recipient response includes at least a reception flag and a non-reception flag, wherein the status is determined further based on a selection of the reception flag or the non-reception flag, wherein the displayed status includes an icon indicating the selected flag. 95. The user terminal of claim 85, wherein the at least one predetermined activatable recipient response includes at least a reception flag and a non-reception flag, wherein the status is determined further based on a selection of the reception flag or the non-reception flag, wherein the displayed status includes an icon indicating the selected flag.
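The claims above (84-95) describe a sender-side flow: a message carries predefined activatable responses, and the displayed status depends on whether any of them has been activated, with a timeout check for overdue replies. A minimal sketch of that flow follows; the class and method names are illustrative assumptions, not taken from the patent.

```python
import time

class OutgoingMessage:
    """Sketch of the status-based response tracking in claims 84-90."""

    def __init__(self, body, responses):
        self.body = body
        # Predefined activatable recipient responses, each tracked
        # as activated or not (claim 84).
        self.activations = {r: False for r in responses}
        self.sent_at = time.time()

    def activate(self, response):
        # The recipient device reports activation of one predefined response.
        if response not in self.activations:
            raise ValueError(f"unknown response: {response}")
        self.activations[response] = True

    def status(self):
        # The status is determined based on whether any activatable
        # response has been activated (claims 84-86).
        if any(self.activations.values()):
            return "response received"
        return "no response received"

    def overdue(self, timeout_seconds):
        # Claims 87-90: detect that no response arrived within a
        # predetermined period, e.g. to alert or resend.
        return (self.status() == "no response received"
                and time.time() - self.sent_at > timeout_seconds)

msg = OutgoingMessage("Meeting at 3?", ["Yes", "No"])
print(msg.status())   # no response received
msg.activate("Yes")
print(msg.status())   # response received
```

A real terminal would update `activations` from incoming network events rather than a direct method call; the dictionary simply makes the "any activated" status rule explicit.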
2,600
9,963
9,963
15,371,644
2,647
Apparatus for constructing a digital telephone message including a message defining unit, configured for allowing a sender to define a message for sending to a recipient, and a response defining unit, configured for allowing the sender to predefine a recipient response, and to include the predefined recipient response in the message for activation at the recipient. Apparatus for receiving a digital telephone message, the message including an activatable sender-defined response, the apparatus including a receiving unit for receiving the message, a notification unit for notifying a recipient of the arrival of the message, and a response activation unit for displaying the sender-defined response, and associating the sender-defined response with a user action for providing user input to send the response. Related apparatus and methods are also described.
1-74. (canceled) 75. A method for displaying response information for a digital group message, comprising: constructing the digital group message, the digital group message including at least one predetermined recipient response for activation at each of a plurality of recipient devices; sending the constructed digital group message to the plurality of recipient devices; determining, for each of the plurality of recipient devices, at least one response identifier, wherein each response identifier indicates at least a status of a response from the recipient device to the digital group message; and displaying the response identifiers. 76. The method of claim 75, wherein each response identifier further indicates at least one of: content of the response, a type of the response, a time when the response was made, and a time when the response was received. 77. The method of claim 75, further comprising: determining, for each of the plurality of recipient devices, a recipient identifier, wherein each recipient identifier indicates a recipient associated with the recipient device. 78. The method of claim 77, wherein each recipient identifier includes at least one of: a name of the associated recipient, a phone number of the associated recipient, and an email address of the associated recipient. 79. The method of claim 75, wherein each status is at least one of: that no response has been received, and that a response has been received. 80. The method of claim 79, wherein a response is received from a recipient device in response to activation of one of the at least one predetermined recipient response included in the sent digital message. 81. The method of claim 79, wherein each response identifier indicating a status that no response has been received further indicates an amount of time that has passed without response since the digital message was sent. 82. 
The method of claim 79, further comprising: determining, for each response identifier indicating a status that a response has been received, a type of the response; generating a table indicating a number of responses of each type; and displaying the generated table. 83. The method of claim 75, further comprising: determining, based on the determined response identifiers, at least one functional group, wherein each functional group is of a subset of the plurality of recipient devices. 84. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process for displaying response information for a digital group message, the process comprising: constructing a digital group message, the digital group message including at least one predetermined recipient response for activation at each of a plurality of recipient devices; sending the constructed digital group message to the plurality of recipient devices; determining, for each of the plurality of recipient devices, at least one response identifier, wherein each response identifier indicates at least a status of a response from the recipient device to the digital message; and displaying the response identifiers. 85. 
A user terminal for displaying response information for a digital group message, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the user terminal to: construct a digital group message, the digital group message including at least one predetermined recipient response for activation at each of a plurality of recipient devices; send the constructed digital group message to the plurality of recipient devices; determine, for each of the plurality of recipient devices, at least one response identifier, wherein each response identifier indicates at least a status of a response from the recipient device to the digital message; and display the response identifiers. 86. The user terminal of claim 85, wherein each response identifier further indicates at least one of: content of the response, a type of the response, a time when the response was made, and a time when the response was received. 87. The user terminal of claim 85, wherein the user terminal is further configured to: determine, for each of the plurality of recipient devices, a recipient identifier, wherein each recipient identifier indicates a recipient associated with the recipient device. 88. The user terminal of claim 87, wherein each recipient identifier includes at least one of: a name of the associated recipient, a phone number of the associated recipient, and an email address of the associated recipient. 89. The user terminal of claim 85, wherein each status is at least one of: that no response has been received, and that a response has been received. 90. The user terminal of claim 89, wherein a response is received from a recipient device in response to activation of one of the at least one predetermined recipient response included in the sent digital message. 91. 
The user terminal of claim 89, wherein each response identifier indicating a status that no response has been received further indicates an amount of time that has passed without response since the digital message was sent. 92. The user terminal of claim 89, wherein the user terminal is further configured to: determine, for each response identifier indicating a status that a response has been received, a type of the response; generate a table, the table indicating a number of responses of each type; and display the generated table. 93. The user terminal of claim 85, wherein the user terminal is further configured to: determine, based on the determined response identifiers, at least one functional group, wherein each functional group is of a subset of the plurality of recipient devices. 94. The method of claim 75, wherein the digital group message is sent via at least one of: cellular telephony, line telephony, satellite, cable, internet protocol network, Bluetooth, Worldwide Interoperability of Microwave Access (WiMax), Infrared, and a wireless network. 95. The user terminal of claim 85, wherein the user terminal is configured to send the digital group message via at least one of: cellular telephony, line telephony, satellite, cable, internet protocol network, Bluetooth, WiMax, Infrared, and a wireless network.
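Claims 75-83 describe per-recipient bookkeeping for a group message: one response identifier per recipient device indicating at least a status, optionally a response type, plus a table counting responses of each type (claim 82). A hedged sketch, with an assumed record layout chosen for illustration:

```python
from collections import Counter

def response_identifiers(recipients, responses):
    """Build one identifier per recipient device, indicating at least
    the status of its response (claim 75)."""
    identifiers = []
    for recipient in recipients:
        response = responses.get(recipient)
        identifiers.append({
            "recipient": recipient,                        # claim 77
            "status": "received" if response else "none",  # claim 79
            "type": response,                              # claim 76
        })
    return identifiers

def response_type_table(identifiers):
    """Claim 82: a table indicating the number of responses of each type."""
    return Counter(i["type"] for i in identifiers
                   if i["status"] == "received")

recipients = ["alice", "bob", "carol"]
responses = {"alice": "Yes", "carol": "Yes"}  # bob has not responded
ids = response_identifiers(recipients, responses)
print(response_type_table(ids))   # Counter({'Yes': 2})
```

The functional-group determination of claim 83 would be one more pass over the identifiers, grouping recipients by a shared key such as response type.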

2,600
9,964
9,964
13,456,184
2,626
A method and apparatus of detecting an input gesture command are disclosed. According to one example method of operation, a digital image may be obtained from a digital camera of a pre-defined controlled movement area. The method may also include comparing the digital image to a pre-stored background image previously obtained from the digital camera of the same pre-defined controlled movement area. The method may also include identifying one or more pixel differences between the digital image and the pre-stored background image and designating the digital image as having a detected input gesture command.
1. A method of detecting an input gesture command comprising: obtaining at least one digital image from a digital camera of a pre-defined controlled movement area; comparing, via a processor, the at least one digital image to at least one pre-stored background image previously obtained from the digital camera of the same pre-defined controlled movement area; identifying, via the processor, at least one pixel difference between the at least one digital image and the at least one pre-stored background image; and designating, via the processor, the at least one digital image as having a detected input gesture command. 2. The method of claim 1, further comprising: triggering the digital camera to begin obtaining the at least one digital image based on a movement detected by an infrared (IR) sensor coupled to the processor associated with the digital camera. 3. The method of claim 1, further comprising: converting content of the at least one digital image to a linear representation to identify a type of input gesture command. 4. The method of claim 3, wherein the linear representation comprises a plurality of gridpoints used to identify the user's body part used for the input gesture command. 5. The method of claim 4, further comprising: comparing the linear representation to a pre-stored linear representation to identify the type of input gesture command; and identifying the type of input gesture command. 6. The method of claim 5, further comprising: transmitting a command to a remote device based on the identified type of input gesture command. 7. The method of claim 1, wherein the digital camera is a complementary symmetry metal oxide semiconductor (CMOS) camera. 8. 
An apparatus configured to detect an input gesture command comprising: a digital camera; a receiver configured to receive at least one digital image from the digital camera of a pre-defined controlled movement area; and a processor configured to compare the at least one digital image to at least one pre-stored background image previously obtained from the digital camera of the same pre-defined controlled movement area, identify at least one pixel difference between the at least one digital image and the at least one pre-stored background image, and designate the at least one digital image as having a detected input gesture command. 9. The apparatus of claim 8, wherein the processor is further configured to trigger the digital camera to begin obtaining the at least one digital image based on a movement detected by an infrared (IR) sensor coupled to the processor associated with the digital camera. 10. The apparatus of claim 8, wherein the processor is further configured to convert content of the at least one digital image to a linear representation to identify a type of input gesture command. 11. The apparatus of claim 10, wherein the linear representation comprises a plurality of gridpoints used to identify the user's body part used for the input gesture command. 12. The apparatus of claim 11, wherein the processor is further configured to compare the linear representation to a pre-stored linear representation to identify the type of input gesture command and identify the type of input gesture command. 13. The apparatus of claim 12, further comprising: a transmitter configured to transmit a command to a remote device based on the identified type of input gesture command. 14. The apparatus of claim 8, wherein the digital camera is a complementary symmetry metal oxide semiconductor (CMOS) camera. 15. 
A non-transitory computer readable storage medium configured to store instructions that when executed cause a processor to detect an input gesture command, the processor being further configured to perform: obtaining at least one digital image from a digital camera of a pre-defined controlled movement area; comparing, via a processor, the at least one digital image to at least one pre-stored background image previously obtained from the digital camera of the same pre-defined controlled movement area; identifying, via the processor, at least one pixel difference between the at least one digital image and the at least one pre-stored background image; and designating, via the processor, the at least one digital image as having a detected input gesture command. 16. The non-transitory computer readable storage medium of claim 15, wherein the processor is further configured to perform: triggering the digital camera to begin obtaining the at least one digital image based on a movement detected by an infrared (IR) sensor coupled to the processor associated with the digital camera. 17. The non-transitory computer readable storage medium of claim 15, wherein the processor is further configured to perform: converting content of the at least one digital image to a linear representation to identify a type of input gesture command. 18. The non-transitory computer readable storage medium of claim 17, wherein the linear representation comprises a plurality of gridpoints used to identify the user's body part used for the input gesture command. 19. The non-transitory computer readable storage medium of claim 18, wherein the processor is further configured to perform: comparing the linear representation to a pre-stored linear representation to identify the type of input gesture command; and identifying the type of input gesture command. 20. 
The non-transitory computer readable storage medium of claim 15, wherein the processor is further configured to perform: transmitting a command to a remote device based on the identified type of input gesture command.
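The core step in claims 1 and 15 is background subtraction: compare a frame against a pre-stored background image of the same controlled movement area, identify pixel differences, and designate the frame as containing a gesture. A minimal sketch on grayscale intensity grids; the threshold values are assumptions, since the claims do not specify how many differing pixels constitute a gesture.

```python
# Per-pixel intensity difference to count a pixel as changed (assumed).
DIFF_THRESHOLD = 25
# Minimum changed pixels to designate a detected gesture (assumed).
MIN_CHANGED_PIXELS = 3

def changed_pixels(frame, background):
    """Identify coordinates where the frame differs from the
    pre-stored background image (claim 1)."""
    return [(y, x)
            for y, row in enumerate(frame)
            for x, value in enumerate(row)
            if abs(value - background[y][x]) > DIFF_THRESHOLD]

def has_gesture(frame, background):
    """Designate the frame as having a detected input gesture command
    when enough pixel differences are identified."""
    return len(changed_pixels(frame, background)) >= MIN_CHANGED_PIXELS

# A static background and a frame with a bright object entering the area.
background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
frame = [[10, 200, 10],
         [200, 200, 10],
         [10, 10, 10]]
print(has_gesture(frame, background))  # True
```

The linear-representation matching of claims 3-5 (gridpoints compared against pre-stored gesture templates) would run on top of this mask, classifying which gesture the changed region forms rather than merely that something moved.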
2,600
9,965
9,965
13,720,037
2,624
An apparatus includes a multiplexed liquid crystal display (LCD) controller. The LCD controller is adapted to operate in at least first and second phases of operation. The LCD controller is adapted to drive a plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation. The LCD controller is further adapted to couple to a node at least some of the plurality of signal lines between the first and second phases of operation.
1. An apparatus, comprising a multiplexed liquid crystal display (LCD) controller, adapted to operate in at least first and second phases of operation, the LCD controller adapted to drive a plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation, wherein the LCD controller is further adapted to couple to a node at least some of the plurality of signal lines between the first and second phases of operation. 2. The apparatus according to claim 1, wherein the plurality of signal lines comprises a plurality of common lines. 3. The apparatus according to claim 2, wherein the plurality of signal lines further comprises a plurality of segment lines. 4. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the plurality of common lines and the plurality of segment lines to the node between the first and second phases of operation. 5. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the node to a ground potential. 6. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the node to a majority voltage of the plurality of common lines for the first phase of operation. 7. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the node to a majority voltage of the plurality of common lines for the second phase of operation. 8. An apparatus, comprising: a multiplexed liquid crystal display (LCD), having at least first and second phases of operation; and a controller coupled to the LCD, wherein the controller is adapted to perform segment resetting between the first and second phases of operation of the LCD. 9. The apparatus according to claim 8, wherein the controller is adapted to perform segment resetting by coupling a plurality of common lines of the LCD to a plurality of segment lines of the LCD. 10. 
An apparatus includes a multiplexed liquid crystal display (LCD) controller. The LCD controller is adapted to operate in at least first and second phases of operation. The LCD controller is adapted to drive a plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation. The LCD controller is further adapted to couple to a node at least some of the plurality of signal lines between the first and second phases of operation. 1. An apparatus, comprising a multiplexed liquid crystal display (LCD) controller, adapted to operate in at least first and second phases of operation, the LCD controller adapted to drive a plurality of signal lines to a first set of voltages during the first phase of operation and to a second set of voltages during the second phase of operation, wherein the LCD controller is further adapted to couple to a node at least some of the plurality of signal lines between the first and second phases of operation. 2. The apparatus according to claim 1, wherein the plurality of signal lines comprises a plurality of common lines. 3. The apparatus according to claim 2, wherein the plurality of signal lines further comprises a plurality of segment lines. 4. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the plurality of common lines and the plurality of segment lines to the node between the first and second phases of operation. 5. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the node to a ground potential. 6. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the node to a majority voltage of the plurality of common lines for the first phase of operation. 7. The apparatus according to claim 3, wherein the LCD controller is adapted to couple the node to a majority voltage of the plurality of common lines for the second phase of operation. 8. 
An apparatus, comprising: a multiplexed liquid crystal display (LCD), having at least first and second phases of operation; and a controller coupled to the LCD, wherein the controller is adapted to perform segment resetting between the first and second phases of operation of the LCD. 9. The apparatus according to claim 8, wherein the controller is adapted to perform segment resetting by coupling a plurality of common lines of the LCD to a plurality of segment lines of the LCD. 10. The apparatus according to claim 8, wherein the controller is adapted to perform segment resetting by coupling a plurality of common lines of the LCD and a plurality of segment lines of the LCD to a ground potential of the apparatus. 11. The apparatus according to claim 8, wherein the controller is adapted to perform segment resetting by coupling a plurality of common lines of the LCD and a plurality of segment lines of the LCD to a bias voltage. 12. The apparatus according to claim 11, wherein the bias voltage is a majority voltage of the common lines of the LCD. 13. The apparatus according to claim 12, wherein the controller comprises a plurality of switches adapted to selectively couple the plurality of common lines of the LCD and the plurality of segment lines of the LCD to the majority voltage. 14. A method of operating a liquid crystal display (LCD), the method comprising: operating the LCD in a first phase of operation; performing segment resetting after operating the LCD in the first phase of operation; and operating the LCD in a second phase of operation after performing segment resetting. 15. The method according to claim 14, wherein performing segment resetting further comprises coupling a plurality of common lines of the LCD to a plurality of segment lines of the LCD. 16. The method according to claim 14, wherein performing segment resetting further comprises coupling a plurality of common lines of the LCD and a plurality of segment lines of the LCD to a ground potential. 17. 
The method according to claim 14, wherein performing segment resetting further comprises coupling a plurality of common lines of the LCD and a plurality of segment lines of the LCD to a bias voltage. 18. The method according to claim 17, wherein the bias voltage comprises a majority voltage of the common lines of the LCD. 19. The method according to claim 18, wherein the majority voltage varies depending on whether the LCD is operated in the first or second phases of operation. 20. The method according to claim 17, wherein the bias voltage comprises the majority voltage of the common lines of the LCD during the first phase of operation.
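The segment-resetting idea in the claims above can be sketched as a small simulation (all function and variable names are hypothetical, not from the patent): between the first and second phases of operation, every common and segment line is coupled to a single node, held either at ground (claim 10) or at the majority voltage of the common lines (claims 11-13), so all lines equalize before the next phase.

```python
from collections import Counter

def majority_voltage(common_lines):
    """Return the most frequent voltage among the common lines."""
    return Counter(common_lines).most_common(1)[0][0]

def segment_reset(common_lines, segment_lines, node_voltage=None):
    """Couple all common and segment lines to one node voltage.

    If node_voltage is None, the node sits at the majority voltage of
    the common lines (claims 11-13); pass 0.0 to model a ground reset
    (claim 10).
    """
    if node_voltage is None:
        node_voltage = majority_voltage(common_lines)
    # After the reset, every line carries the node voltage.
    return ([node_voltage] * len(common_lines),
            [node_voltage] * len(segment_lines))

# First-phase drive voltages, then a reset before the second phase.
commons = [3.3, 3.3, 0.0, 3.3]
segments = [0.0, 3.3, 0.0]
commons, segments = segment_reset(commons, segments)
# All lines now sit at the majority voltage, 3.3.
```

The majority-voltage choice minimizes the number of lines that must slew during the reset, which is why claims 12-13 single it out; the ground-reset variant trades that for simplicity.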
TechCenter = 2600
Unnamed: 0 = 9966
level_0 = 9966
ApplicationNumber = 14654441
ArtUnit = 2646
Embodiments are disclosed for systems for a vehicle. An example system for a vehicle includes a central unit and an input device, wherein the central unit has a first interface for connecting to a first mobile device and a second interface for connecting to a second mobile device, wherein the central unit is configured to assign the first mobile device to a driver and the second mobile device to a passenger in the vehicle, wherein the central unit is configured to output information on a call directed to the first mobile device, wherein the central unit is configured to detect an input by means of the input device during the outputting of information, and wherein the central unit is configured, based on the detection of the input, to redirect the call to the second mobile device.
1. A system for a vehicle, comprising a central unit and an input device, wherein the central unit has a first interface for connecting to a first mobile device and a second interface for connecting to a second mobile device, wherein the central unit is configured to assign the first mobile device to a driver and the second mobile device to a passenger in the vehicle, wherein the central unit is configured to output information on a call directed to the first mobile device, wherein the central unit is configured to detect an input by means of the input device during the outputting of information, and wherein the central unit is configured, based on the detection of the input, to redirect the call to the second mobile device. 2. The system according to claim 1, having a first near field communication device, having a second near field communication device, wherein the central unit is configured to assign the first mobile device, coupled to the first near field communication device, to the driver, and wherein the central unit is configured to assign the second mobile device, coupled to the second near field communication device, to the passenger. 3. The system according to claim 2, wherein the first near field communication device is positioned toward a first retainer for the first mobile device, and wherein the second near field communication device is positioned toward a second retainer for the second mobile device. 4. The system according to claim 1, wherein the central unit is configured to output the call information on a display as a component of graphical data, and wherein the display is configured for the graphical illustration of the information. 5. The system according to claim 1, wherein the central unit is configured to change the information after the redirection of the call. 6. The system according to claim 1, wherein the input device has a sensor for contactless input, and wherein the sensor is arranged in a dashboard of the vehicle. 7. 
The system according to claim 1, wherein the first interface is configured for a first wireless connection to the first mobile device, and wherein the second interface is configured for a second wireless connection to the second mobile device. 8. The system according to claim 1, wherein the central unit is configured for redirection to send a signal for activating a service of ad hoc call redirection to a service provider. 9. The system according to claim 1, wherein the central unit is configured for call redirection to stream audio signals, associated with the call, between a transceiver and the second mobile device. 10. The system according to claim 1, wherein the central unit is configured to receive first data of a first SIM card via the first interface from the first mobile device, and wherein the central unit is configured to receive second data of a second SIM card via the second interface from the second mobile device. 11. The system according to claim 1, wherein the central unit is configured to additionally redirect the call automatically. 12. The system according to claim 1, wherein the central unit is configured to determine a workload level of the driver based on estimated traffic conditions and/or navigation data, wherein the central unit is configured to suppress the outputting of information and to redirect the call automatically or to reject the call, if it is determined that the workload level exceeds a threshold. 13. The system according to claim 12, wherein the central unit is configured to generate an audio message or a written message and to send the audio message or the written message to the calling party. 14. 
A system for a vehicle, comprising a central unit, wherein the central unit has a first interface for connecting to a first mobile device, wherein the central unit is configured to assign the first mobile device to a driver, wherein the central unit is configured to estimate one element of a set of traffic situations based on navigation data and/or measured data of the vehicle movement, wherein the central unit is configured to output an audio message or a written message assigned to the one element, and wherein the central unit is configured to send the audio message or the written message to a calling party. 15. A communication method for a vehicle, comprising: connecting a central unit to a first mobile device via a first interface, connecting the central unit to a second mobile device via a second interface, assigning the first mobile device to a driver by means of the central unit, assigning the second mobile device to a passenger by means of the central unit, outputting information on a call directed to the first mobile device by the central unit, detecting an input by means of an input device during the outputting of information by the central unit, and redirecting the call to the second mobile device based on the detection of the input by the central unit.
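The claimed call-handling behavior can be sketched as a small decision routine (all names and threshold values are hypothetical, not from the patent): a call for the driver's device is redirected on a passenger input, and when the driver's estimated workload exceeds a threshold, the notification is suppressed and the call is redirected automatically or rejected (claims 1 and 12).

```python
def handle_incoming_call(passenger_input, workload, threshold=0.8,
                         auto_redirect=True):
    """Decide how the central unit routes a call to the driver's device.

    Returns a (route, message) pair, where route is one of "driver",
    "passenger", or "rejected", and message is an optional reply for
    the calling party (modeling claims 1 and 11-13).
    """
    if workload > threshold:
        # High workload: suppress the notification entirely (claim 12).
        if auto_redirect:
            return "passenger", None
        return "rejected", "The driver cannot take your call right now."
    # Normal case: the call is shown; a passenger input redirects it.
    if passenger_input:
        return "passenger", None
    return "driver", None

# Low workload, no input: the driver takes the call.
route, _ = handle_incoming_call(passenger_input=False, workload=0.2)
```

In a real head unit the workload estimate would come from traffic and navigation data, as claim 12 describes; here it is simply a number in [0, 1].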
TechCenter = 2600
Unnamed: 0 = 9967
level_0 = 9967
ApplicationNumber = 12022572
ArtUnit = 2622
The use of multiple stimulation frequencies and phases is disclosed to detect touch events on a touch sensor panel in a low-power state. Simultaneously during every frame, a number of rows of the touch sensor panel can be driven with a positive phase of one or more stimulation signals, and the same number of different rows can be driven with the anti-phase of those same stimulation signals. Because the same number of rows are stimulated with the in-phase and anti-phase components of the one or more stimulation signals, the resulting charges injected into a given column cancel each other out. However, a touch event will create an imbalance, and a non-zero charge will be detected. The detection of the touch event can then trigger the system to wake up, activate a panel processor, and perform a full panel scan, where the location of the touch event can be identified.
1. A method for detecting whether a touch event has occurred on a touch sensor panel, comprising: while maintaining a panel processor in an inactive state, simultaneously driving a first group of rows of the touch sensor panel with one or more stimulation signals, determining whether an output of one or more sense channels coupled to one or more columns of the touch sensor panel exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event. 2. The method of claim 1, further comprising: driving the first group of rows with in-phase components of the one or more stimulation signals; and simultaneously driving a second group of rows with anti-phase components of the one or more stimulation signals in a balanced manner so that in a no-touch condition, a net charge injected into each of one or more columns of the touch sensor panel is about zero. 3. The method of claim 2, the one or more first stimulation signals comprising two or more stimulation signals at different frequencies. 4. The method of claim 2, further comprising performing multiple scans of the touch sensor panel, each scan driving the rows of the touch sensor panel in a different balanced stimulation pattern. 5. The method of claim 2, further comprising balancing the one or more stimulation signals by driving a same number of rows with both the in-phase and anti-phase components of the one or more stimulation signals. 6. The method of claim 2, further comprising balancing the one or more stimulation signals by varying the number of rows driven by the in-phase and anti-phase components of the one or more stimulation signals and varying the amplitudes of the in-phase and anti-phase components of the one or more stimulation signals. 7. 
The method of claim 2, further comprising reducing a size of a feedback capacitor Cfb in the sense channel due to the balancing of the stimulation signals. 8. The method of claim 1, wherein the first group of rows includes all of the rows in the touch sensor panel, and is simultaneously driven with a first stimulation signal of a same frequency and phase. 9. The method of claim 1, wherein one or more of the first stimulation signals are composite multi-frequency signals. 10. The method of claim 1, wherein one or more of the first stimulation signals are single-frequency signals. 11. The method of claim 1, further comprising setting the predetermined threshold to about a midpoint of a range of output values of the sense channels produced between a no-touch and a full-touch condition. 12. The method of claim 1, further comprising entering an advanced power management mode and applying a sequence of stimulation signals to the rows over multiple frames and gathering data needed to determine a location of touch if it is determined that the output of one or more sense channels coupled to one or more columns of the touch sensor panel exceeds a predetermined threshold. 13. 
An apparatus for detecting whether a touch event has occurred on a touch sensor panel, comprising: driver logic configured for providing stimulation signals to drive rows of the touch sensor panel; a plurality of sense channels configured for detecting touch events on the touch sensor panel; and auto-scan logic coupled to the driver logic and the plurality of sense channels, the auto-scan logic configured for during an auto-scan cycle, and while maintaining a panel processor in an inactive state, periodically triggering the driver logic to simultaneously drive a first group of rows of the touch sensor panel with one or more first stimulation signals, determining whether an output of one or more sense channels exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event. 14. The apparatus of claim 13, the auto-scan logic further configured, during the auto-scan cycle, for: periodically triggering the driver logic to simultaneously drive the first group of rows of the touch sensor panel with in-phase components of the one or more first stimulation signals, while simultaneously driving a second group of rows with anti-phase components of the one or more stimulation signals in a balanced manner so that in a no-touch condition, a net charge injected into each of one or more columns of the touch sensor panel is about zero. 15. The apparatus of claim 14, the one or more first stimulation signals comprising two or more stimulation signals at different frequencies. 16. The apparatus of claim 14, the auto-scan logic further configured for performing multiple scans of the touch sensor panel during the auto-scan cycle, each scan driving the rows of the touch sensor panel in a different balanced stimulation pattern. 17. 
The apparatus of claim 14, the auto-scan logic further configured for balancing the one or more first stimulation signals by driving a same number of rows with both the in-phase and anti-phase components of the one or more stimulation signals. 18. The apparatus of claim 14, the auto-scan logic further configured for balancing the one or more stimulation signals by varying the number of rows driven by the in-phase and anti-phase components of the one or more stimulation signals and varying the amplitudes of the in-phase and anti-phase components of the one or more stimulation signals. 19. The apparatus of claim 14, further comprising a feedback capacitor Cfb in each sense channel whose size is reduced as compared to the Cfb needed for unbalanced stimulation signals. 20. The apparatus of claim 13, wherein the first group of rows includes all of the rows in the touch sensor panel, and is simultaneously driven with a first stimulation signal of a same frequency and phase. 21. The apparatus of claim 13, the driver logic further configured for providing composite multi-frequency first stimulation signals. 22. The apparatus of claim 13, the driver logic further configured for providing separate single-frequency first stimulation signals. 23. The apparatus of claim 13, the auto-scan logic further configured for detecting the touch event by determining whether an output value from one of the plurality of sense channels exceeds a threshold set at about a midpoint of a range of the output values of the sense channel produced between a no-touch and a full-touch condition. 24. 
The apparatus of claim 13, the auto-scan logic further configured for entering an advanced power management mode and applying a sequence of stimulation signals to the rows over multiple frames and gathering data needed to determine a location of touch if it is determined that the output of one or more sense channels coupled to one or more columns of the touch sensor panel exceeds a predetermined threshold. 25. The apparatus of claim 24, wherein the advanced power management mode is entered while keeping the panel processor in the inactive state. 26. A computing system comprising the apparatus of claim 13. 27. A mobile telephone including an apparatus for detecting whether a touch event has occurred on a touch sensor panel, the apparatus comprising: driver logic configured for providing stimulation signals to drive rows of the touch sensor panel; a plurality of sense channels configured for detecting touch events on the touch sensor panel; and auto-scan logic coupled to the driver logic and the plurality of sense channels, the auto-scan logic configured for during an auto-scan cycle, and while maintaining a panel processor in an inactive state, periodically triggering the driver logic to simultaneously drive a first group of rows of the touch sensor panel with one or more first stimulation signals, determining whether an output of one or more sense channels exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event. 28. 
A digital audio player including an apparatus for detecting whether a touch event has occurred on a touch sensor panel, the digital audio player comprising: driver logic configured for providing stimulation signals to drive rows of the touch sensor panel; a plurality of sense channels configured for detecting touch events on the touch sensor panel; and auto-scan logic coupled to the driver logic and the plurality of sense channels, the auto-scan logic configured for during an auto-scan cycle, and while maintaining a panel processor in an inactive state, periodically triggering the driver logic to simultaneously drive a first group of rows of the touch sensor panel with one or more first stimulation signals, determining whether an output of one or more sense channels exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event.
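The balanced-stimulation detection described in the abstract can be illustrated with a toy charge model (names and numeric values are hypothetical, not from the patent): equal numbers of rows carry in-phase and anti-phase stimulation, so the charge injected into a column sums to zero with no touch; a touch attenuates one row's coupling, leaving a non-zero residue that trips the wake-up threshold.

```python
def column_charge(phases, couplings):
    """Net charge injected into one column during a frame.

    phases: +1 for in-phase rows, -1 for anti-phase rows.
    couplings: per-row mutual capacitance; a touch reduces one entry.
    """
    return sum(p * c for p, c in zip(phases, couplings))

def touch_detected(phases, couplings, threshold=0.5):
    """Low-power wake-up test: fire when the residue exceeds threshold."""
    return abs(column_charge(phases, couplings)) > threshold

phases = [+1, +1, -1, -1]          # two rows of each polarity: balanced
no_touch = [1.0, 1.0, 1.0, 1.0]    # equal coupling everywhere
touched = [1.0, 1.0, 0.2, 1.0]     # a touch attenuates the third row

touch_detected(phases, no_touch)   # balanced drive cancels exactly
touch_detected(phases, touched)    # residue remains, wake the system
```

This is only the low-power gate; as the abstract notes, a positive detection would then wake the panel processor for a full scan to locate the touch.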
The use of multiple stimulation frequencies and phases is disclosed to detect touch events on a touch sensor panel in a low-power state. Simultaneously during every frame, a number of rows of the touch sensor panel can be driven with a positive phase of one or more stimulation signals, and the same number of different rows can be driven with the anti-phase of those same stimulation signals. Because the same number of rows are stimulated with the in-phase and anti-phase components of the one or more stimulation signals, the resulting charges injected into a given column cancel each other out. However, a touch event will create an imbalance, and a non-zero charge will be detected. The detection of the touch event can then trigger the system to wake up, activate a panel processor, and perform a full panel scan, where the location of the touch event can be identified.1. A method for detecting whether a touch event has occurred on a touch sensor panel, comprising: while maintaining a panel processor in an inactive state, simultaneously driving a first group of rows of the touch sensor panel with one or more stimulation signals, determining whether an output of one or more sense channels coupled to one or more columns of the touch sensor panel exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event. 2. The method of claim 1, further comprising: driving the first group of rows driven with in-phase components of the one or more stimulation signals; and simultaneously driving a second group of rows with anti-phase components of the one or more stimulation signals in a balanced manner so that in a no-touch condition, a net charge injected into each of one or more columns of the touch sensor panel is about zero. 3. 
The method of claim 2, the one or more first stimulation signals comprising two or more stimulation signals at different frequencies. 4. The method of claim 2, further comprising performing multiple scans of the touch sensor panel, each scan driving the rows of the touch sensor panel in a different balanced stimulation pattern. 5. The method of claim 2, further comprising balancing the one or more stimulation signals by driving a same number of rows with both the in-phase and anti-phase components of the one or more stimulation signals. 6. The method of claim 2, further comprising balancing the one or more stimulation signals by varying the number of rows driven by the in-phase and anti-phase components of the one or more stimulation signals and varying the amplitudes of the in-phase and anti-phase components of the one or more stimulation signals. 7. The method of claim 2, further comprising reducing a size of a feedback capacitor Cfb in the sense channel due to the balancing of the stimulation signals. 8. The method of claim 1, wherein the first group of rows include all of the rows in the touch sensor panel, and are simultaneously driven with a first stimulation signal of a same frequency and phase. 9. The method of claim 1, wherein one or more of the first stimulation signals are composite multi-frequency signals. 10. The method of claim 1, wherein one or more of the first stimulation signals are single-frequency signals. 11. The method of claim 1, further comprising setting the predetermined threshold to about a midpoint of a range of output values of the sense channels produced between the no-touch to a full-touch condition. 12. 
The method of claim 1, further comprising entering an advanced power management mode and applying a sequence of stimulation signals to the rows to over multiple frames and gathering data needed to determine a location of touch if it is determined that the output of one or more sense channels coupled to one or more columns of the touch sensor panel exceeds a predetermined threshold. 13. An apparatus for detecting whether a touch event has occurred on a touch sensor panel, comprising: driver logic configured for providing stimulation signals to drive rows of the touch sensor panel; a plurality of sense channels configured for detecting touch events on the touch sensor panel; and auto-scan scan logic coupled to the driver logic and the plurality of sense channels, the auto-scan logic configured for during an auto-scan cycle, and while maintaining a panel processor in an inactive state, periodically triggering the driver logic to simultaneously drive a first group of rows of the touch sensor panel with one or more first stimulation signals, determining whether an output of one or more sense channels exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event. 14. The apparatus of claim 13, the auto-scan logic further configured, during the auto-scan cycle, for: periodically triggering the driver logic to simultaneously drive the first group of rows of the touch sensor panel with in-phase component of the one or more first stimulation signals, while simultaneously driving a second group of rows with anti-phase components of the one or more stimulation signals in a balanced manner so that in a no-touch condition, a net charge injected into each of one or more columns of the touch sensor panel is about zero. 15. 
The apparatus of claim 14, the one or more first stimulation signals comprising two or more stimulation signals at different frequencies. 16. The apparatus of claim 14, the auto-scan logic further configured for performing multiple scans of the touch sensor panel during the auto-scan cycle, each scan driving the rows of the touch sensor panel in a different balanced stimulation pattern. 17. The apparatus of claim 14, the auto-scan logic further configured for balancing the one or more first stimulation signals by driving a same number of rows with both the in-phase and anti-phase components of the one or more stimulation signals. 18. The apparatus of claim 14, the auto-scan logic further configured for balancing the one or more stimulation signals by varying the number of rows driven by the in-phase and anti-phase components of the one or more stimulation signals and varying the amplitudes of the in-phase and anti-phase components of the one or more stimulation signals. 19. The apparatus of claim 14, further comprising a feedback capacitor Cfb in each sense channel whose size is reduced as compared to the Cfb needed for unbalanced stimulation signals. 20. The apparatus of claim 13, wherein the first group of rows include all of the rows in the touch sensor panel, and are simultaneously driven with a first stimulation signal of a same frequency and phase. 21. The apparatus of claim 13, the driver logic further configured for providing composite multi-frequency first stimulation signals. 22. The apparatus of claim 13, the driver logic further configured for providing separate single-frequency first stimulation signals. 23. The apparatus of claim 13, the auto-scan logic further configured for detecting the touch event by determining whether an output value from one of the plurality of sense channel exceeds a threshold set at about a midpoint of a range of the output values of the sense channel produced between the no-touch to a full-touch condition. 24. 
The apparatus of claim 13, the auto-scan logic further configured for entering an advanced power management mode and applying a sequence of stimulation signals to the rows over multiple frames and gathering data needed to determine a location of touch if it is determined that the output of one or more sense channels coupled to one or more columns of the touch sensor panel exceeds a predetermined threshold. 25. The apparatus of claim 24, wherein the advanced power management mode is entered while keeping the panel processor in the inactive state. 26. A computing system comprising the apparatus of claim 13. 27. A mobile telephone including an apparatus for detecting whether a touch event has occurred on a touch sensor panel, the apparatus comprising: driver logic configured for providing stimulation signals to drive rows of the touch sensor panel; a plurality of sense channels configured for detecting touch events on the touch sensor panel; and auto-scan logic coupled to the driver logic and the plurality of sense channels, the auto-scan logic configured for, during an auto-scan cycle, and while maintaining a panel processor in an inactive state, periodically triggering the driver logic to simultaneously drive a first group of rows of the touch sensor panel with one or more first stimulation signals, determining whether an output of one or more sense channels exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event. 28. 
A digital audio player including an apparatus for detecting whether a touch event has occurred on a touch sensor panel, the digital audio player comprising: driver logic configured for providing stimulation signals to drive rows of the touch sensor panel; a plurality of sense channels configured for detecting touch events on the touch sensor panel; and auto-scan logic coupled to the driver logic and the plurality of sense channels, the auto-scan logic configured for, during an auto-scan cycle, and while maintaining a panel processor in an inactive state, periodically triggering the driver logic to simultaneously drive a first group of rows of the touch sensor panel with one or more first stimulation signals, determining whether an output of one or more sense channels exceeds a predetermined threshold indicative of a touch event, and if the output exceeds the threshold, triggering a subsequent capture of touch data for determining a location of the touch event.
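The midpoint-threshold test of claim 23 can be illustrated numerically. The following is a minimal sketch, assuming illustrative sense-channel values on a normalized scale; the function names and the direction of the output swing (touch pulling the output away from the no-touch level) are assumptions for demonstration, not the patented implementation.

```python
# Hypothetical sketch of a midpoint-threshold touch test: the threshold is set
# at about the midpoint of a sense channel's output range between a no-touch
# and a full-touch condition, and a touch event is reported if any channel
# output crosses it.

def midpoint_threshold(no_touch_output: float, full_touch_output: float) -> float:
    """Threshold at about the midpoint of the channel's output range."""
    return (no_touch_output + full_touch_output) / 2.0

def touch_detected(channel_outputs, no_touch_output, full_touch_output) -> bool:
    """Report a touch if any channel output moves past the midpoint threshold."""
    threshold = midpoint_threshold(no_touch_output, full_touch_output)
    # Compare each channel's excursion from the no-touch level against the
    # excursion needed to reach the threshold.
    return any(abs(out - no_touch_output) >= abs(threshold - no_touch_output)
               for out in channel_outputs)

# Example: no-touch reads 1.0, full touch reads 0.0, so the threshold is 0.5.
print(touch_detected([0.98, 0.45, 0.99], no_touch_output=1.0, full_touch_output=0.0))
```

A midpoint threshold gives equal margin against noise in both the touch and no-touch directions, which is why the claim anchors it at about the center of the output range.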
2,600
9,968
9,968
15,522,648
2,616
A system to provide augmented reality content. The system includes a connection engine to connect to a first device determined to be within physical proximity of the system. A feature extraction engine of the system is to generate a feature extractor according to the first device and provide the feature extractor to the first device via the connection engine. The system includes an augmented reality generation engine to generate augmented reality content according to an extracted feature provided by the feature extractor.
1. A system to provide augmented reality content, comprising: a connection engine to connect to a first device determined to be within physical proximity of the system; a feature extraction engine to generate a feature extractor according to the first device and provide the feature extractor to the first device via the connection engine; and an augmented reality generation engine to generate augmented reality content according to an extracted feature provided by the feature extractor. 2. The system of claim 1, wherein the extracted feature is a code based on at least one of a video content and an audio content provided by the first device. 3. The system of claim 1, wherein the feature extractor is instructions to perform at least one of object recognition, text recognition, and audio recognition of a content provided by the first device and/or a meta-data of the content provided by the first device. 4. The system of claim 1, wherein the generated augmented reality content is displayed on a display of the system while a camera and/or microphone of the system is capturing the display of the first device. 5. The system of claim 1, wherein the augmented reality generation engine is to provide specific generated augmented reality content at a specified time according to the extracted feature. 6. 
A non-transitory machine-readable storage medium comprising instructions executable by a processing resource to: connect to a first device providing video content in physical proximity of a second device via a first connection; generate a feature extractor in the second device according to the first device; provide the feature extractor to the first device via the first connection; receive an extracted feature of the video content from the first device in the second device via the first connection; generate augmented reality content on a second display of the second device while the camera of the second device is capturing a first display of the first device; and display the augmented reality content on the second display of the second device according to an extracted feature of the video content, wherein the extracted feature is a code based on the video content. 7. The medium of claim 6, wherein the feature extractor is to extract features from the video content periodically. 8. The medium of claim 6, wherein the extracted feature includes at least one of a title, a network, a director, a producer, closed captioning, a distributor, a time stamp, and a duration of the video content and/or meta-data associated with the video content. 9. The medium of claim 8, wherein the feature extractor is instructions to perform at least one of object recognition, text recognition, and audio recognition of the video content and/or meta-data of the video content. 10. The medium of claim 8, wherein the displayed augmented reality content is displayed at a specific time according to the video content. 11. 
The medium of claim 6, wherein the first connection is a wireless connection including at least one of a Bluetooth connection, a Wi-Fi connection, an Insteon connection, Infrared Data Association (IrDA) connection, Wireless USB connection, Z-Wave connection, ZigBee connection, a cellular network connection, a Global System for Mobile Communications (GSM), Personal Communications Service (PCS) connection, Digital Advanced Mobile Phone Service connection, a general packet radio service (GPRS) network connection, and body area network (BAN) connection. 12. A method for providing an extracted feature to an augmented reality device, comprising: connecting a media player to an augmented reality device via a wireless connection; receiving a feature extractor from the augmented reality device in the media player via the wireless connection; extracting a feature from a content being provided by the media player according to the feature extractor; and providing the extracted feature to the augmented reality device via the wireless connection. 13. The method of claim 12, wherein the feature extractor includes instructions to perform at least one of object recognition, text recognition, and audio recognition of the content and/or a meta-data of the content. 14. The method of claim 12, wherein the wireless connection is at least one of a Bluetooth connection, a Wi-Fi connection, an Insteon connection, Infrared Data Association (IrDA) connection, Wireless USB connection, Z-Wave connection, ZigBee connection, a cellular network connection, a Global System for Mobile Communications (GSM), Personal Communications Service (PCS) connection, Digital Advanced Mobile Phone Service connection, a general packet radio service (GPRS) network connection, and body area network (BAN) connection. 15. The method of claim 12, wherein the media player connects to the augmented reality device when the augmented reality device is in a physical proximity of the media player.
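Claims 2, 6, and 7 describe an extracted feature that is a code derived periodically from the media content. A minimal sketch of such an extractor follows; the use of a truncated SHA-256 hash as the code, and all names here, are assumptions for illustration only, since the claims do not specify how the code is computed.

```python
# Illustrative feature extractor: emits a short code for every Nth frame of
# content, as a stand-in for the periodic extraction described in the claims.
import hashlib

def make_feature_extractor(interval_frames: int):
    """Return an extractor that derives a code from periodic content samples."""
    def extract(frames):
        codes = []
        for i, frame in enumerate(frames):
            if i % interval_frames == 0:  # sample the content periodically
                # The code here is a truncated content hash (an assumption).
                codes.append(hashlib.sha256(frame).hexdigest()[:8])
        return codes
    return extract

extractor = make_feature_extractor(interval_frames=2)
print(extractor([b"frame0", b"frame1", b"frame2"]))  # codes for frames 0 and 2
```

Because the extractor is generated on one device and executed on the other, it can be as simple as a hashing routine or as rich as the object, text, or audio recognition instructions named in claim 3.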
2,600
9,969
9,969
15,478,926
2,632
Apparatus and methods for calibrating radio frequency transmitters to compensate for common mode local oscillator leakage are provided herein. In certain configurations herein, a transmitter generates a radio frequency transmit signal based on mixing a baseband input signal with a local oscillator signal. The transmitter is calibrated to compensate for common mode local oscillator leakage. Thus, a common mode component of the local oscillator signal is reduced or eliminated from the radio frequency transmit signal, which provides a number of benefits, including lower levels of undesired emissions from the transmitter.
1. A radio frequency (RF) communication system with common mode local oscillator leakage compensation, the RF communication system comprising: an RF transmitter comprising a local oscillator (LO), the RF transmitter configured to generate a differential transmitter signal including a non-inverted signal and an inverted signal; an LO leakage observation circuit configured to determine an amount of common mode LO leakage from the LO in the differential transmitter signal based at least in part on a sum of the non-inverted signal and the inverted signal; and a common mode LO generation circuit configured to compensate the RF transmitter for the amount of common mode LO leakage determined by the LO leakage observation circuit. 2. The RF communication system of claim 1, wherein the common mode LO generation circuit is configured to generate a common mode LO signal that is combined with the differential transmitter signal to compensate for the amount of common mode LO leakage. 3. The RF communication system of claim 1, wherein the RF transmitter comprises a zero-intermediate frequency transmitter. 4. The RF communication system of claim 1, wherein the RF communication system further comprises a common mode LO leakage control circuit configured to control an amount of leakage correction provided by the common mode LO generation circuit based on the amount of common mode LO leakage. 5. The RF communication system of claim 1, wherein the LO leakage observation circuit is further configured to determine a differential LO leakage from the LO based at least in part on a difference between the non-inverted signal and the inverted signal. 6. The RF communication system of claim 5, wherein the LO leakage observation circuit is configurable between a common mode leakage observation mode and a differential leakage observation mode. 7. 
The RF communication system of claim 1, wherein the LO leakage observation circuit comprises an observation radio frequency front end (RFFE) configured to generate a leakage observation signal based on the differential transmitter signal, and an observation receiver configured to downconvert the leakage observation signal. 8. The RF communication system of claim 7, wherein the RF communication system further comprises a common mode LO leakage control circuit configured to receive one or more digital observation signals from the observation receiver, and to control an amount of leakage correction provided by the common mode LO generation circuit based on the one or more digital observation signals. 9. The RF communication system of claim 7, wherein the observation RFFE is configurable between a common mode leakage observation mode and a differential leakage observation mode. 10. The RF communication system of claim 8, wherein the LO leakage observation circuit further comprises a dummy RFFE in parallel with the observation RFFE, wherein the dummy RFFE is controlled to the differential leakage observation mode when the observation RFFE is in the common mode leakage observation mode, and to the common mode leakage observation mode when the observation RFFE is in the differential leakage observation mode. 11. The RF communication system of claim 8, wherein the observation receiver comprises an observation LO, wherein the amount of common mode LO leakage is compensated for measurement error arising from LO leakage from the observation LO. 12. The RF communication system of claim 11, wherein the amount of common mode LO leakage is compensated using at least one of a plurality of LO leakage observation measurements with different measurement polarities or a plurality of LO leakage observation measurements with different gains. 13. 
The RF communication system of claim 1, wherein the common mode LO generation circuit comprises an in-phase (I) mixer and a quadrature-phase (Q) mixer configured to generate a common mode LO signal that is combined with the differential transmitter signal. 14. The RF communication system of claim 13, wherein the common mode LO generation circuit further comprises a first digital-to-analog converter (DAC) configured to control an amount of common mode leakage compensation provided by the I mixer based on a first digital control signal, and a second DAC configured to control an amount of common mode leakage compensation provided by the Q mixer based on a second digital control signal. 15. A method of compensating for transmitter common mode local oscillator leakage in a radio frequency (RF) communication system, the method comprising: generating a differential transmitter signal by using at least a local oscillator (LO) of an RF transmitter, wherein the differential transmitter signal includes a non-inverted signal and an inverted signal; determining, using an LO leakage observation circuit, an amount of common mode LO leakage from the LO based at least in part on a sum of the non-inverted signal and the inverted signal; and compensating the RF transmitter for common mode LO leakage based on the determined amount of common mode LO leakage using a common mode LO generation circuit. 16. The method of claim 15, wherein compensating the RF transmitter for common mode LO leakage comprises generating a common mode LO signal, controlling a magnitude of the common mode LO signal based on the determined amount of common mode LO leakage, and combining the common mode LO signal with the differential transmitter signal. 17. 
A transceiver die with common mode local oscillator leakage compensation, the transceiver die comprising: a semiconductor substrate; a radio frequency (RF) transmitter formed on the semiconductor substrate and comprising a local oscillator (LO), wherein the RF transmitter is configured to generate a differential transmitter signal that includes a non-inverted signal and an inverted signal by using at least the LO and a mixer; an LO leakage observation circuit formed on the semiconductor substrate and configured to determine, based at least in part on a sum of the non-inverted signal and the inverted signal, an amount of common mode LO leakage from the LO into an output of the RF transmitter; and a common mode LO generation circuit formed on the semiconductor substrate and configured to compensate the RF transmitter for the amount of common mode LO leakage determined by the LO leakage observation circuit. 18. The transceiver die of claim 17, wherein the common mode LO generation circuit is configured to generate a common mode LO signal that is combined with the differential transmitter signal to compensate for the common mode LO leakage. 19. The transceiver die of claim 17, further comprising a common mode LO leakage control circuit formed on the semiconductor substrate and configured to control an amount of leakage correction provided by the common mode LO generation circuit based on the amount of common mode LO leakage. 20. The transceiver die of claim 17, wherein the LO leakage observation circuit comprises an observation radio frequency front end (RFFE) configured to generate a leakage observation signal based at least in part on the sum, and an observation receiver configured to downconvert the leakage observation signal.
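The core observation in these claims is that summing the non-inverted and inverted halves of a differential signal cancels the differential content and exposes the common-mode leakage, while differencing them does the reverse. A small numeric sketch, using made-up sample values with a constant common-mode leak, shows the separation:

```python
# Separating common-mode and differential components of a differential pair,
# as exploited by the LO leakage observation circuit. Sample values below are
# illustrative, not measured data.

def common_mode_component(non_inverted, inverted):
    """Common-mode part: the differential content cancels in the sum."""
    return [(p + n) / 2.0 for p, n in zip(non_inverted, inverted)]

def differential_component(non_inverted, inverted):
    """Differential part: the common-mode content cancels in the difference."""
    return [(p - n) / 2.0 for p, n in zip(non_inverted, inverted)]

# An ideal +/-1.0 differential pair carrying a constant common-mode leak of 0.5:
p = [1.5, -0.5]   # non-inverted samples (signal + leak)
n = [-0.5, 1.5]   # inverted samples (signal + leak)
print(common_mode_component(p, n))   # → [0.5, 0.5]
print(differential_component(p, n))  # → [1.0, -1.0]
```

In the claimed system the sum is formed in the observation RFFE rather than in software, and the recovered common-mode amount drives the compensation applied by the common mode LO generation circuit.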
2,600
9,970
9,970
15,500,663
2,696
A projection capture system includes a camera to capture images of objects in a capture space, and a projector to illuminate the objects in the capture space and to project images captured by the camera into a display space. The projector includes a flash mode for providing white light for illuminating the objects in the capture space. The system includes a controller to adjust drive settings to LEDs of the projector to achieve a predetermined white point during the flash mode.
1. A projection capture system, comprising: a camera to capture images of objects in a capture space; and a projector to illuminate the objects in the capture space and to project images captured by the camera into a display space, wherein the projector includes a flash mode for projecting white light for illuminating the objects in the capture space; and a controller to adjust drive settings to LEDs of the projector to achieve a predetermined white point during the flash mode. 2. The system of claim 1, wherein the predetermined white point is an International Commission on Illumination (CIE) D65 white point. 3. The system of claim 1, and further comprising: a color meter to generate color information based on the projected white light during the flash mode. 4. The system of claim 3, wherein the controller adjusts the drive settings of the LEDs based on the color information generated by the color meter. 5. The system of claim 4, wherein the drive settings of the LEDs comprise pulse width modulation (PWM) settings. 6. The system of claim 4, wherein the controller uses a simulated annealing method to achieve the predetermined white point. 7. The system of claim 6, wherein the simulated annealing method uses an International Commission on Illumination (CIE) xyY color space. 8. The system of claim 7, wherein the simulated annealing method includes determining a root mean square deviation of projected white light from the predetermined white point in the xyY space. 9. The system of claim 1, wherein the projector includes a sequential display mode for sequentially displaying red, green, and blue LED light to project images captured by the camera into the display space, and wherein the camera flash mode involves simultaneously displaying red, green, and blue LED light to provide the white light for illuminating the objects in the capture space. 10. The system of claim 1, wherein the projector is housed together with the camera. 11. 
The system of claim 1, wherein the camera is positioned above the projector and wherein the system further comprises a mirror positioned above the projector to reflect light from the projector down onto the display space. 12. A method for capturing and projecting images, comprising: illuminating objects in a capture space with white light from a light emitting diode (LED) projector in a flash mode of the projector; adjusting drive settings of LEDs of the projector to achieve a predetermined white point in the flash mode; capturing images of the objects in the capture space while the projector is illuminating the objects with white light at the predetermined white point; and projecting the captured images into a display space with the projector. 13. The method of claim 12, wherein adjusting drive settings of the LEDs further comprises: determining a root mean square deviation of projected white light from a D65 white point in an International Commission on Illumination (CIE) xyY color space; and adjusting pulse width modulation drive settings of the LEDs to achieve the D65 white point. 14. A computer-readable storage media storing computer-executable instructions that when executed by at least one processor cause the at least one processor to perform a method, comprising: causing a light emitting diode (LED) projector to enter a flash mode to illuminate objects in a capture space with projected white light; causing a color meter to measure the projected white light; adjusting pulse width modulation (PWM) drive settings of LEDs of the projector based on the measured projected white light to achieve a predetermined white point in the flash mode; causing a camera to capture images of the objects in the capture space while the projector is in the flash mode; and causing the projector to switch to a display mode to project the captured images into a display space. 15. The computer-readable storage media of claim 14, wherein the display space overlaps the capture space.
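The error metric named in claims 8 and 13, the root mean square deviation from the D65 white point in CIE xyY, can be sketched as below. The D65 chromaticity coordinates (x=0.3127, y=0.3290) are standard; restricting the deviation to the x and y chromaticity axes (treating luminance Y as separately normalized) is an assumption of this sketch, as is the function name.

```python
# RMS deviation of a measured white point from the CIE D65 target chromaticity,
# as a cost function a simulated-annealing search over PWM settings could
# minimize. Measurement values below are illustrative.
import math

D65_XY = (0.3127, 0.3290)  # standard CIE D65 chromaticity coordinates

def white_point_rmsd(measured_xy, target_xy=D65_XY):
    """Root mean square deviation of measured chromaticity from the target."""
    dx = measured_xy[0] - target_xy[0]
    dy = measured_xy[1] - target_xy[1]
    return math.sqrt((dx * dx + dy * dy) / 2.0)

# A color-meter reading slightly off D65:
print(white_point_rmsd((0.3167, 0.3250)))
```

In the claimed method this deviation would be recomputed from fresh color-meter readings after each candidate adjustment of the LED PWM settings, with the annealing schedule deciding whether to keep or discard each candidate.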
A projection capture system includes a camera to capture images of objects in a capture space, and a projector to illuminate the objects in the capture space and to project images captured by the camera into a display space. The projector includes a flash mode for providing white light for illuminating the objects in the capture space. The system includes a controller to adjust drive settings to LEDs of the projector to achieve a predetermined white point during the flash mode.
2,600
9,971
9,971
14,707,581
2,613
A waveguide apparatus includes a planar waveguide and at least one optical diffraction element (DOE) that provides a plurality of optical paths between an exterior and interior of the planar waveguide. A phase profile of the DOE may combine a linear diffraction grating with a circular lens, to shape a wave front and produce beams with desired focus. Waveguide apparatuses may be assembled to create multiple focal planes. The DOE may have a low diffraction efficiency, and planar waveguides may be transparent when viewed normally, allowing passage of light from an ambient environment (e.g., real world) useful in AR systems. Light may be returned for temporally sequential passes through the planar waveguide. The DOE(s) may be fixed or may have dynamically adjustable characteristics. An optical coupler system may couple images to the waveguide apparatus from a projector, for instance a biaxially scanning cantilevered optical fiber tip.
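The abstract's "linear diffraction grating combined with a circular lens" phase profile can be written explicitly. The expression below is a standard diffractive-optics form given as an illustrative sketch, not the patent's own formula: the linear term steers the out-coupled beam, and the quadratic lens term imposes a wavefront curvature corresponding to focal length $f$ at wavelength $\lambda$.

```latex
\varphi(x, y) \;=\;
\underbrace{\frac{2\pi}{\Lambda}\, x}_{\text{linear grating, period } \Lambda}
\;+\;
\underbrace{\frac{\pi}{\lambda f}\left(x^{2} + y^{2}\right)}_{\text{circular lens, focal length } f}
```

Varying $f$ across stacked waveguides yields the multiple focal planes the abstract mentions, and a DOE with dynamically adjustable characteristics corresponds to making $\Lambda$ or $f$ tunable.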
1. A method for facilitating surgery, comprising: retrieving patient data relating to a surgical procedure on a patient; generating virtual content based on the patient data; creating a first virtual user interface in a field of view of a first user; retrieving a first set of map points corresponding to a first location of the first user; and displaying the virtual content on the first virtual user interface, such that the first virtual user interface, when viewed by the first user, appears to be fixed at the first set of map points. 2. The method of claim 1, wherein the virtual content is selected from the group consisting of a three-dimensional image of a target of the surgery, patient identification information, a medical image, vital patient sign information, and a medical chart. 3. The method of claim 1, wherein the patient data is retrieved from a networked memory. 4. The method of claim 1, further comprising generating a second virtual user interface configured to facilitate communication between the first user and a second user, wherein the second user is in a second location different from the first location of the first user. 5. The method of claim 4, wherein the second virtual user interface is a visual representation of the second user. 6. The method of claim 4, wherein the second user is selected from the group consisting of a consulting surgeon, a patient, a party related to the patient, and a medical student. 7. The method of claim 4, further comprising: displaying the virtual content on the first virtual user interface to the second user, such that the first virtual user interface, when viewed by the second user, appears to be fixed at the set of map points. 8. 
The method of claim 1, wherein the virtual content is displayed to the first user during the surgical procedure, the method further comprising: receiving user input; generating additional virtual content based on the user input; and displaying the additional virtual content on the first virtual user interface, while the first user is performing the surgical procedure. 9. The method of claim 8, wherein the user input is selected from the group consisting of a gesture, visual data, audio data, sensory data, a direct command, a voice command, eye tracking, and selection of a physical button. 10. The method of claim 8, wherein the user input comprises an image of a field of view of the first user, the method further comprising displaying the additional virtual content on the first virtual user interface to the second user, such that the first virtual user interface, when viewed by the second user, appears to be fixed at the set of map points. 11. The method of claim 1 implemented as a system having means for implementing the method steps. 12. The method of claim 1 implemented as a computer program product comprising a computer-usable storage medium having executable code to execute the method steps.
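Claims 1 and 7 turn on the virtual interface appearing fixed at a set of world-space map points regardless of which user views it. The sketch below is a minimal, hypothetical illustration of that idea, not the patent's implementation: the interface is anchored at the centroid of its map points, and each viewer re-expresses that world-fixed anchor in their own head frame (reduced here to a position plus a yaw angle).

```python
import math
from dataclasses import dataclass, field

@dataclass
class VirtualUI:
    """A virtual user interface pinned to world-space map points."""
    map_points: list                      # [(x, y, z), ...] world coordinates
    content: list = field(default_factory=list)

    def anchor(self):
        # The UI stays fixed at its map points; their centroid is a
        # simple choice of anchor position.
        n = len(self.map_points)
        return tuple(sum(p[i] for p in self.map_points) / n for i in range(3))

def view_position(anchor, head_pose):
    """Express the world-fixed anchor in a viewer's head frame, so the
    UI appears stationary as the viewer moves. `head_pose` is a
    hypothetical ((x, y, z), yaw) pair, kept minimal for illustration."""
    (hx, hy, hz), yaw = head_pose
    dx, dy, dz = anchor[0] - hx, anchor[1] - hy, anchor[2] - hz
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * dx - s * dz, dy, s * dx + c * dz)
```

Two users with different head poses call `view_position` on the same anchor, so each sees the interface at the same world location, which is the behavior claim 7 requires for the first and second users.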
2,600
9,972
9,972
14,897,508
2,622
The purpose of the present invention is to control an information terminal, equipped with a touch panel and installed in a vehicle, so that it can be operated without interfering with driving. A drive mode application causes an information terminal, controlling the running of an application through first control on the basis of a user operation received via a touch panel, to function as: a connection detection unit, which detects a connection to a vehicle; a function limiting unit, which controls the running of the application through second control, different from the first control, on the condition that the connection to the vehicle is detected; and a display switching unit, which, on the basis of a sliding operation in a first direction received via the touch panel while a first application is being displayed, displays the first application and a second application.
1. A computer-readable non-transitory storage medium storing a program that causes an information terminal, which includes a touch panel, controlling operations of applications in first control based on a user's operation received through a touch panel to function as: a connection detecting unit detecting a connection to a vehicle; and a function restricting unit controlling the operations of the applications in second control other than the first control under a condition that the connection detecting unit detects the connection to the vehicle, wherein the second control restricts some operations set in advance among the user's operations that can be received in the first control through the touch panel. 2. The computer-readable non-transitory storage medium storing a program according to claim 1, wherein the second control restricts some of the user's operations that can be received in the first control through the touch panel so that at least a swiping operation is permitted and restricts some of the applications that can be operated in the first control. 3. The computer-readable non-transitory storage medium storing a program according to claim 1, wherein the program further causes the information terminal to function as a display switching unit that executes switching among kinds of applications to be displayed on a display device of the information terminal based on a swiping operation executed in a first direction received through the touch panel. 4. 
The computer-readable non-transitory storage medium storing a program according to claim 3, wherein the program further causes the information terminal to function: to display a first application and a second application on the display device in a case when the swiping operation executed in the first direction is received in a state in which the first application is displayed on the display device; to display the second application on the display device in a case when the swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and to display the first application on the display device in a case when a swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device, as the display switching unit. 5. A method of controlling applications that is executed by an information terminal, which includes a touch panel, controlling operations of the applications in first control based on a user's operation received through a touch panel, the method comprising: a connection detecting process detecting a connection to a vehicle; and a function restricting process controlling the operations of the applications in second control other than the first control under a condition that the connection to the vehicle is detected, wherein the second control restricts some operations set in advance among the user's operations that can be received in the first control through the touch panel. 6. The method of controlling applications according to claim 5, wherein the second control restricts some of the user's operations that can be received in the first control through the touch panel so that at least a swiping operation is permitted and restricts some of the applications that can be operated in the first control. 7. 
The method of controlling applications according to claim 5, the method further comprising a display switching process executing switching among kinds of applications to be displayed on a display device of the information terminal based on a swiping operation executed in a first direction received through the touch panel that is executed by the information terminal. 8. The method of controlling applications according to claim 7, wherein the display switching process includes: displaying a first application and a second application on the display device in a case when the swiping operation executed in the first direction is received in a state in which the first application is displayed on the display device; displaying the second application on the display device in a case when the swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and displaying the first application on the display device in a case when a swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device. 9. 
A computer-readable non-transitory storage medium storing a program that causes a computer controlling operations of applications based on a user's operation received through a touch panel to function as: a determination unit determining whether the user's operation received through the touch panel is either a single touch or a multi-touch; a display switching unit executing switching among kinds of applications to be displayed on a display device of the computer based on a swiping operation received through the touch panel in a case when the single touch is determined by the determination unit; and an AP operating unit executing an operation of the application displayed on the display device based on the swiping operation received through the touch panel in a case when the multi-touch is determined by the determination unit. 10. The computer-readable non-transitory storage medium storing a program according to claim 9, wherein the program further causes the computer to function: to display a first application and a second application on the display device in a case when a single touch swiping operation executed in a first direction is received in a state in which the first application is displayed on the display device; to display the second application on the display device in a case when a single touch swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and to display the first application on the display device in a case when a single touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device, as the display switching unit. 11. 
The computer-readable non-transitory storage medium storing a program according to claim 10, wherein the program causes the computer to function to execute an operation for a new application displayed by the display switching unit based on the multi-touch swiping operation in a case when a plurality of the applications are displayed on the display device as the AP operating unit. 12. A method of controlling applications that is executed by a computer controlling operations of the applications based on a user's operation received through a touch panel, the method comprising: a determination process determining whether the user's operation received through the touch panel is either a single touch or a multi-touch; a display switching process executing switching among kinds of applications to be displayed on a display device of the computer based on a swiping operation received through the touch panel in a case when the single touch is determined; and an AP operating process executing an operation of the application displayed on the display device based on the swiping operation received through the touch panel in a case when the multi-touch is determined. 13. 
The method of controlling applications according to claim 12, wherein the display switching process includes: displaying a first application and a second application on the display device in a case when a single touch swiping operation executed in a first direction is received in a state in which the first application is displayed on the display device; displaying the second application on the display device in a case when a single touch swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and displaying the first application on the display device in a case when a single touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device. 14. The method of controlling applications according to claim 13, wherein the AP operating process executes an operation for a new application displayed by the display switching process based on the multi-touch swiping operation in a case when a plurality of the applications are displayed on the display device. 15. A computer-readable non-transitory storage medium storing a program that causes a computer controlling operations of applications based on a user's operation received through a touch panel to function as: a determination unit determining whether or not the user's operation received through the touch panel is a swiping operation; a function restricting unit invalidating the user's operation in a case when the operation is determined not to be the swiping operation by the determination unit; and a display switching unit executing switching among kinds of applications to be displayed on a display device of the computer based on the swiping operation in a case when the operation is determined to be the swiping operation by the determination unit. 16. 
The computer-readable non-transitory storage medium storing a program according to claim 15, wherein the program causes the computer to function to validate the swiping operation only in a case when the swiping operation is a swiping operation having a starting point positioned near an edge of the display device as the function restricting unit. 17. The computer-readable non-transitory storage medium storing a program according to claim 15, wherein the program further causes the computer to function: to determine whether or not the user's operation received through the touch panel is an operation executed in a predetermined area disposed in a part of the touch panel as the determination unit; and to validate the operation regardless of a content of the operation in a case when the user's operation is determined to be an operation executed in the predetermined area by the determination unit and invalidate an operation other than the swiping operation in a case when the user's operation is determined not to be an operation executed in the predetermined area by the determination unit as the function restricting unit. 18. A computer-readable non-transitory storage medium storing a program causing an information terminal controlling operations of applications based on a user's operation received through a touch panel mounted on a display device to function as: a display switching unit displaying a first application and a second application having a curved boundary line on a designated radius from a specific position on the first application on the display device based on a user's first type touch swiping operation executed in a first direction that is received through the touch panel in a state in which the first application is displayed on the display device. 19. 
The computer-readable non-transitory storage medium storing a program according to claim 18, wherein the program causes the information terminal to function to display the first application on an entire face of the display device in a case when a first type touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in a state in which the first application and the second application are displayed on the display device as the display switching unit. 20. The computer-readable non-transitory storage medium storing a program according to claim 18, wherein the program further causes the information terminal to function as: a determination unit determining whether the user's operation received through the touch panel is a first type touch or a second type touch; and an AP operating unit executing an operation of the application displayed on the display device based on the swiping operation in a case when the user's swiping operation received through the touch panel is determined to be the second type touch by the determination unit, wherein the program causes the information terminal to function to display the first application and the second application on the display device in a case when the user's swiping operation executed in a first direction received through the touch panel is determined to be the first type touch by the determination unit in a state in which the first application is displayed on the display device as the display switching unit. 21. The computer-readable non-transitory storage medium storing a program according to claim 18, wherein the first type touch is a multi-touch, the first application is an application having a navigation function, and the specific position on the first application is a current position of the information terminal displayed on a map that is displayed. 22. 
A method of controlling applications that is executed by an information terminal controlling operations of the applications based on a user's operation received through a touch panel mounted on a display device, the method comprising: a display switching step displaying a first application and a second application having a curved boundary line on a designated radius from a specific position on the first application on the display device based on a user's first type touch swiping operation executed in a first direction that is received through the touch panel in a state in which the first application is displayed on the display device. 23. The method of controlling applications according to claim 22, wherein the display switching step further includes a step for displaying the first application on an entire face of the display device in a case when a first type touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in a state in which the first application and the second application are displayed on the display device. 24. The method of controlling applications according to claim 22, further comprising: a determination step determining whether the user's operation received through the touch panel is a first type touch or a second type touch; and an AP operating step executing an operation of the application displayed on the display device based on the swiping operation in a case when the user's swiping operation received through the touch panel is determined to be the second type touch in the determination step, wherein the display switching step includes displaying the first application and the second application on the display device in a case when the user's swiping operation executed in a first direction received through the touch panel is determined to be the first type touch in the determination step in a state in which the first application is displayed on the display device. 25. 
The method of controlling applications according to claim 22, wherein the first type touch is a multi-touch, the first application is an application having a navigation function, and the specific position on the first application is the current position of the information terminal displayed on a map that is displayed. 26. An information terminal comprising: a display device including a touch panel mounted on the display device; and a display switching unit displaying a first application and a second application having a curved boundary line on a designated radius from a specific position on the first application on the display device based on a user's first type touch swiping operation executed in a first direction that is received through the touch panel in a state in which the first application is displayed on the display device. 27. The information terminal according to claim 26, wherein the display switching unit further displays the first application on the entire face of the display device in a case when a first type touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in a state in which the first application and the second application are displayed on the display device. 28. 
The information terminal according to claim 26, further comprising: a determination unit determining whether the user's operation received through the touch panel is a first type touch or a second type touch; and an AP operating unit executing an operation of the application displayed on the display device based on the swiping operation in a case when the user's swiping operation received through the touch panel is determined to be the second type touch by the determination unit, wherein the display switching unit displays the first application and the second application on the display device in a case when the user's swiping operation executed in a first direction received through the touch panel is determined to be the first type touch by the determination unit in a state in which the first application is displayed on the display device. 29. The information terminal according to claim 26, wherein the first type touch is a multi-touch, the first application is an application having a navigation function, and the specific position on the first application is the current position of the information terminal displayed on the map that is displayed. 30. (canceled)
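The single- versus multi-touch split in claims 9-14 amounts to a small dispatch rule: a one-finger swipe switches which applications are displayed, while a multi-finger swipe is forwarded as an operation to the application on screen. The sketch below is a hypothetical illustration of that rule using a two-application model; the class and names are invented for the example and do not come from the patent.

```python
class Terminal:
    """Minimal model of the claimed behavior (claims 9-10): single-touch
    swipes switch what is displayed, multi-touch swipes are forwarded as
    operations on the currently displayed application."""

    def __init__(self, first_app="navigation", second_app="audio"):
        self.first_app = first_app
        self.second_app = second_app
        self.displayed = [first_app]     # start with the first application
        self.forwarded = []              # (app, direction) swipes passed on

    def on_swipe(self, touches, direction):
        if touches == 1:                 # single touch: display switching
            if direction == "first":
                if self.displayed == [self.first_app]:
                    # first app alone -> show first and second together
                    self.displayed = [self.first_app, self.second_app]
                elif self.second_app in self.displayed:
                    # both shown -> show the second app alone
                    self.displayed = [self.second_app]
            elif direction == "second":  # opposite direction
                # return to the first app alone
                self.displayed = [self.first_app]
        else:                            # multi-touch: operate the app
            self.forwarded.append((self.displayed[-1], direction))
```

Claims 15-17 layer a further restriction on top of this dispatch (invalidating non-swipe input, or swipes that do not start near a display edge), which would sit as a guard at the top of `on_swipe`.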
The method of controlling applications according to claim 7, wherein the display switching process includes: displaying a first application and a second application on the display device in a case when the swiping operation executed in the first direction is received in a state in which the first application is displayed on the display device; displaying the second application on the display device in a case when the swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and displaying the first application on the display device in a case when a swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device. 9. A computer-readable non-transitory storage medium storing a program that causes a computer controlling operations of applications based on a user's operation received through a touch panel to function as: a determination unit determining whether the user's operation received through the touch panel is either a single touch or a multi-touch; a display switching unit executing switching among kinds of applications to be displayed on a display device of the computer based on a swiping operation received through the touch panel in a case when the single touch is determined by the determination unit; and an AP operating unit executing an operation of the application displayed on the display device based on the swiping operation received through the touch panel in a case when the multi-touch is determined by the determination unit. 10. 
The computer-readable non-transitory storage medium storing a program according to claim 9, wherein the program further causes the computer to function: to display a first application and a second application on the display device in a case when a single touch swiping operation executed in a first direction is received in a state in which the first application is displayed on the display device; to display the second application on the display device in a case when a single touch swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and to display the first application on the display device in a case when a single touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device, as the display switching unit. 11. The computer-readable non-transitory storage medium storing a program according to claim 10, wherein the program causes the computer to function to execute an operation for a new application displayed by the display switching unit based on the multi-touch swiping operation in a case when a plurality of the applications are displayed on the display device as the AP operating unit. 12. 
A method of controlling applications that is executed by a computer controlling operations of the applications based on a user's operation received through a touch panel, the method comprising: a determination process determining whether the user's operation received through the touch panel is either a single touch or a multi-touch; a display switching process executing switching among kinds of applications to be displayed on a display device of the computer based on a swiping operation received through the touch panel in a case when the single touch is determined; and an AP operating process executing an operation of the application displayed on the display device based on the swiping operation received through the touch panel in a case when the multi-touch is determined. 13. The method of controlling applications according to claim 12, wherein the display switching process includes: displaying a first application and a second application on the display device in a case when a single touch swiping operation executed in a first direction is received in a state in which the first application is displayed on the display device; displaying the second application on the display device in a case when a single touch swiping operation executed in the first direction is received in a state in which the first application and the second application are displayed on the display device; and displaying the first application on the display device in a case when a single touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in the state in which the first application and the second application are displayed on the display device. 14. 
The method of controlling applications according to claim 13, wherein the AP operating process executes an operation for a new application displayed by the display switching process based on the multi-touch swiping operation in a case when a plurality of the applications are displayed on the display device. 15. A computer-readable non-transitory storage medium storing a program that causes a computer controlling operations of applications based on a user's operation received through a touch panel to function as: a determination unit determining whether or not the user's operation received through the touch panel is a swiping operation; a function restricting unit invalidating the user's operation in a case when the operation is determined not to be the swiping operation by the determination unit; and a display switching unit executing switching among kinds of applications to be displayed on a display device of the computer based on the swiping operation in a case when the operation is determined to be the swiping operation by the determination unit. 16. The computer-readable non-transitory storage medium storing a program according to claim 15, wherein the program causes the computer to function to validate the swiping operation only in a case when the swiping operation is a swiping operation having a starting point positioned near an edge of the display device as the function restricting unit. 17. 
The computer-readable non-transitory storage medium storing a program according to claim 15, wherein the program further causes the computer to function: to determine whether or not the user's operation received through the touch panel is an operation executed in a predetermined area disposed in a part of the touch panel as the determination unit; and to validate the operation regardless of a content of the operation in a case when the user's operation is determined to be an operation executed in the predetermined area by the determination unit and invalidate an operation other than the swiping operation in a case when the user's operation is determined not to be an operation executed in the predetermined area by the determination unit as the function restricting unit. 18. A computer-readable non-transitory storage medium storing a program causing an information terminal controlling operations of applications based on a user's operation received through a touch panel mounted on a display device to function as: a display switching unit displaying a first application and a second application having a curved boundary line on a designated radius from a specific position on the first application on the display device based on a user's first type touch swiping operation executed in a first direction that is received through the touch panel in a state in which the first application is displayed on the display device. 19. The computer-readable non-transitory storage medium storing a program according to claim 18, wherein the program causes the information terminal to function to display the first application on an entire face of the display device in a case when a first type touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in a state in which the first application and the second application are displayed on the display device as the display switching unit. 20. 
The computer-readable non-transitory storage medium storing a program according to claim 18, wherein the program further causes the information terminal to function as: a determination unit determining whether the user's operation received through the touch panel is a first type touch or a second type touch; and an AP operating unit executing an operation of the application displayed on the display device based on the swiping operation in a case when the user's swiping operation received through the touch panel is determined to be the second type touch by the determination unit, wherein the program causes the information terminal to function to display the first application and the second application on the display device in a case when the user's swiping operation executed in a first direction received through the touch panel is determined to be the first type touch by the determination unit in a state in which the first application is displayed on the display device as the display switching unit. 21. The computer-readable non-transitory storage medium storing a program according to claim 18, wherein the first type touch is a multi-touch, the first application is an application having a navigation function, and the specific position on the first application is a current position of the information terminal displayed on a map that is displayed. 22. 
A method of controlling applications that is executed by an information terminal controlling operations of the applications based on a user's operation received through a touch panel mounted on a display device, the method comprising: a display switching step displaying a first application and a second application having a curved boundary line on a designated radius from a specific position on the first application on the display device based on a user's first type touch swiping operation executed in a first direction that is received through the touch panel in a state in which the first application is displayed on the display device. 23. The method of controlling applications according to claim 22, wherein the display switching step further includes a step for displaying the first application on an entire face of the display device in a case when a first type touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in a state in which the first application and the second application are displayed on the display device. 24. The method of controlling applications according to claim 22, further comprising: a determination step determining whether the user's operation received through the touch panel is a first type touch or a second type touch; and an AP operating step executing an operation of the application displayed on the display device based on the swiping operation in a case when the user's swiping operation received through the touch panel is determined to be the second type touch in the determination step, wherein the display switching step includes displaying the first application and the second application on the display device in a case when the user's swiping operation executed in a first direction received through the touch panel is determined to be the first type touch in the determination step in a state in which the first application is displayed on the display device. 25. 
The method of controlling applications according to claim 22, wherein the first type touch is a multi-touch, the first application is an application having a navigation function, and the specific position on the first application is the current position of the information terminal displayed on a map that is displayed. 26. An information terminal comprising: a display device including a touch panel mounted on the display device; and a display switching unit displaying a first application and a second application having a curved boundary line on a designated radius from a specific position on the first application on the display device based on a user's first type touch swiping operation executed in a first direction that is received through the touch panel in a state in which the first application is displayed on the display device. 27. The information terminal according to claim 26, wherein the display switching unit further displays the first application on the entire face of the display device in a case when a first type touch swiping operation executed in a second direction that is the direction opposite to the first direction is received in a state in which the first application and the second application are displayed on the display device.
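The drive-mode behavior claimed above (first control vs. restricted second control, single-touch swipes switching the displayed applications, multi-touch swipes operating the application itself) can be sketched roughly as below. This is a minimal illustrative sketch, not the patented implementation; all class, method, and application names (`DriveModeController`, `"navigation"`, `"audio"`) are hypothetical.

```python
def classify_touch(num_pointers):
    """Hypothetical first/second type touch classifier (claim 21: multi-touch)."""
    return "multi" if num_pointers >= 2 else "single"

class DriveModeController:
    """Sketch of the claimed first/second control: once a vehicle
    connection is detected, every operation except swiping is invalidated."""
    def __init__(self):
        self.vehicle_connected = False
        self.displayed = ["navigation"]  # the first application

    def on_vehicle_connection(self, connected):
        # Connection detecting unit: flips the terminal into second control.
        self.vehicle_connected = connected

    def handle_operation(self, op, direction=None, pointers=1):
        # Function restricting unit: second control permits only swipes.
        if self.vehicle_connected and op != "swipe":
            return "invalidated"
        if op == "swipe":
            if classify_touch(pointers) == "single":
                return self._switch_display(direction)
            return "forwarded_to_app"  # multi-touch operates the shown app
        return "handled_by_app"

    def _switch_display(self, direction):
        # Claim 4's cycle: first -> first+second -> second; reverse restores first.
        if direction == "first":
            if self.displayed == ["navigation"]:
                self.displayed = ["navigation", "audio"]
            elif self.displayed == ["navigation", "audio"]:
                self.displayed = ["audio"]
        elif direction == "second":
            if self.displayed == ["navigation", "audio"]:
                self.displayed = ["navigation"]
        return self.displayed
```

Under this sketch, a tap while docked is rejected, while a single-touch swipe in the first direction brings up the second application alongside the first, matching the claim 4 sequence.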
2,600
9,973
9,973
14,728,027
2,683
Media rendering system including a remote control device and associated docking station. The remote control device interfaces with a remote server to stream media content for local and/or external playback. The remote control device may interface with a docking station to playback rendered media on one or more entertainment appliances. The portable device preferably has standard remote control capability in order to enable advanced features and functions for media playback.
1. A method for using a wireless interface device interfaced to an appliance to facilitate playing a media stream, comprising: receiving at a portable electronic device the media stream; causing the portable electronic device to route the received media stream for a playing of the received media stream by the portable electronic device; detecting by the portable electronic device that the portable electronic device has been placed into wireless communication with the wireless interface device interfaced to the appliance; and in response to the portable electronic device detecting that the portable electronic device has been placed into wireless communication with the wireless interface device interfaced to the appliance, causing the portable electronic device to automatically reroute the received media stream to the wireless interface device interfaced to the appliance for a playing of the received media stream by the appliance instead of routing the received media stream for the playing of the received media stream by the portable electronic device. 2. The method as recited in claim 1, wherein, in response to the portable electronic device detecting that the portable electronic device has been placed into wireless communication with the wireless interface device interfaced to the appliance, further causing the appliance to be automatically placed into an operating state appropriate for the playing of the received media stream by the appliance. 3. The method as recited in claim 1, comprising causing the portable electronic device to transmit a command to the appliance to cause the appliance to be placed into the operating mode and wherein the command is automatically caused to be transmitted by the portable electronic device in response to the portable electronic device detecting that the portable electronic device has been placed into wireless communication with the wireless interface device interfaced to the appliance. 4. 
The method as recited in claim 1, comprising using a radio frequency protocol to reroute the received media stream to the wireless interface device interfaced to the appliance. 5. The method as recited in claim 1, wherein the appliance comprises a television. 6. The method as recited in claim 1, comprising causing the portable electronic device to receive the media stream from a server device via a wide area network. 7. The method as recited in claim 1, comprising causing the portable electronic device to receive the media stream from a server device via a local area network. 8. The method as recited in claim 2, wherein causing the appliance to be automatically placed into an operating state appropriate for the playing of the received media stream by the appliance comprises causing the appliance to be automatically placed into a powered on state. 9. The method as recited in claim 8, wherein causing the appliance to be automatically placed into a powered on state comprises causing the portable electronic device to automatically transmit a power control command to the appliance. 10. The method as recited in claim 2, wherein causing the appliance to be automatically placed into an operating state appropriate for the playing of the received media stream on the appliance comprises causing the appliance to be automatically placed into a selected input mode state. 11. The method as recited in claim 10, wherein causing the appliance to be automatically placed into a selected input mode state comprises causing the portable electronic device to automatically transmit an input mode selection command to the appliance. 12. The method as recited in claim 1, comprising causing the wireless interface device interfaced to the appliance to pass the rerouted received media stream through to the appliance to thereby allow the appliance to process the passed through media stream for the playing of the received media stream by the appliance. 13. 
The method as recited in claim 1, comprising causing the wireless interface device interfaced to the appliance to provide an address to the portable electronic device and causing the portable electronic device to use the address to reroute the media stream to the wireless interface device interfaced to the appliance. 14. The method as recited in claim 1, wherein the wireless interface device is interfaced to the appliance via an HDMI interface connection and the rerouted media stream received by the wireless interface device interfaced to the appliance is provided to the appliance via the HDMI interface connection. 15. The method as recited in claim 1, comprising causing the wireless interface device interfaced to the appliance to convert the rerouted media stream received by the wireless interface device interfaced to the appliance into a format appropriate for the playing of the media stream by the appliance.
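The rerouting flow in the claims above (play locally until the dock's wireless interface device is detected, then stream to the dock's address and automatically power on and input-select the appliance, per claims 2, 8, and 10) can be sketched as follows. This is an illustrative sketch only; the class name, command strings, and address are hypothetical, not part of the claimed system.

```python
class PortableDevice:
    """Sketch: the media stream plays on the portable device until a
    wireless interface device (dock) is detected, then each chunk is
    rerouted to the dock and the appliance is prepared for playback."""
    def __init__(self):
        self.dock = None           # set when dock communication is detected
        self.local_sink = []       # local playback buffer
        self.sent_commands = []    # commands transmitted to the appliance

    def receive_chunk(self, chunk):
        # Route to the dock's sink when connected, else play locally.
        if self.dock is not None:
            self.dock["sink"].append(chunk)
        else:
            self.local_sink.append(chunk)

    def detect_dock(self, address):
        # On detection: record the address the dock provided (claim 13)
        # and automatically place the appliance into a powered-on,
        # input-selected state (claims 8-11).
        self.dock = {"address": address, "sink": []}
        self.sent_commands += ["POWER_ON", "SELECT_INPUT_HDMI"]
```

In use, chunks received before `detect_dock` accumulate locally; chunks received afterward go to the dock, and the power and input-selection commands are sent exactly once at detection time.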
2,600
9,974
9,974
15,368,398
2,616
An apparatus such as a head-mounted display (HMD) may have a camera for capturing a visual scene for presentation via the HMD. A user of the apparatus may be operating a second, physical camera for capturing video or still images within the visual scene. The HMD may generate an augmented reality (AR) experience by presenting an AR view frustum representative of the actual view frustum of the physical camera. The field of view of the user viewing the captured visual scene via the AR experience is generally larger than the AR view frustum, allowing the user to avoid unnatural tracking movements and/or hunting for subjects.
1. A computer-implemented method, comprising: capturing a visual scene with an augmented reality device; augmenting the visual scene with an augmented reality view frustum associated with a physical camera, wherein the augmented reality view frustum is smaller than a field of view of a user of the augmented reality device; tracking at least one of movement and positioning of the physical camera; and at least one of moving and positioning the augmented reality view frustum in a manner commensurate with the at least one of the movement and positioning of the physical camera. 2. The computer-implemented method of claim 1, wherein the user field of view is a user field of view represented in the captured visual scene. 3. The computer-implemented method of claim 1, wherein the tracking of the at least one of the movement and positioning of the physical camera comprises sensing and monitoring a tracking marker associated with the physical camera. 4. The computer-implemented method of claim 1, wherein the visual scene comprises at least one object moving into or out of the augmented reality view frustum. 5. The computer-implemented method of claim 1, further comprising presenting at least one operating characteristic of the physical camera with the augmented reality view frustum. 6. The computer-implemented method of claim 5, further comprising adjusting the at least one operating characteristic of the physical camera via the augmented reality device. 7. The computer-implemented method of claim 5, further comprising adjusting one or more characteristics of the augmented reality view frustum based upon the at least one operating characteristic of the physical camera. 8. The computer-implemented method of claim 1, further comprising setting a size of the augmented reality view frustum to be commensurate with a focal length of the physical camera's lens. 9. 
The computer-implemented method of claim 1, wherein at least one of the movement and positioning of the physical camera is further based upon at least one of movement and positioning of the augmented reality device or the user of the augmented reality device. 10. An apparatus, comprising: at least one camera adapted to capture a visual scene; an augmented reality component adapted to identify movement and positioning of a physical camera; and a display on which an augmented reality experience is presented, the augmented reality experience comprising the captured visual scene and an augmented reality view frustum representative of a view frustum of the physical camera in accordance with the movement and positioning of the physical camera, the augmented reality view frustum being representative of a field of view smaller than a field of view of a user of the apparatus. 11. The apparatus of claim 10, wherein the apparatus is communicatively connected to the physical camera via a wireless or wired connection. 12. The apparatus of claim 10, further comprising at least one sensor adapted to monitor the movement and positioning of the physical camera and transmit information reflecting the movement and positioning of the physical camera to the augmented reality component. 13. The apparatus of claim 12, wherein the at least one sensor monitors the movement and positioning of the physical camera by monitoring a tracking marker associated with the physical camera. 14. The apparatus of claim 10, wherein the display is further adapted to display one or more operating parameters of the physical camera in conjunction with the augmented reality view frustum. 15. 
An apparatus, comprising: a first camera capturing a visual scene; a second camera communicatively connected to the first camera and capturing a subset of the visual scene captured by the first camera; an augmented reality component generating an augmented reality experience; and a display on which the augmented reality experience is presented, the augmented reality experience comprising the visual scene captured by the first camera and an augmented reality view frustum representative of a view frustum of the second camera in accordance with the movement and positioning of the second camera, the augmented reality view frustum being representative of a field of view smaller than a field of view of a user of the apparatus. 16. The apparatus of claim 15, wherein the second camera is co-located with the first camera. 17. The apparatus of claim 15, wherein the movement and positioning of the second camera follows movement and positioning of at least one of the apparatus and the user. 18. The apparatus of claim 15, wherein the second camera is remotely located from the first camera, the second camera being controlled by the user of the apparatus. 19. The apparatus of claim 18, further comprising at least one sensor tracking the movement and positioning of the second camera. 20. The apparatus of claim 15, wherein the augmented reality component presents the augmented reality view frustum in conjunction with operating parameters of the second camera.
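Claim 8's requirement that the AR view frustum be sized commensurate with the lens focal length, together with the constraint that the frustum be smaller than the user's field of view, follows from the standard angle-of-view formula. The sketch below illustrates that geometry; the function names and the default sensor width and user field-of-view values are illustrative assumptions, not figures from the application.

```python
import math

def frustum_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of the physical camera, from the usual
    thin-lens relation: fov = 2 * atan(sensor_width / (2 * focal_length)).
    Claim 8 sizes the AR view frustum commensurately with this angle."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def frustum_fits(focal_length_mm, user_fov_deg=90.0):
    # The claimed arrangement requires the AR frustum to be narrower
    # than the user's field of view in the AR experience.
    return frustum_fov_deg(focal_length_mm) < user_fov_deg
```

For example, a 50 mm lens on a 36 mm-wide sensor yields roughly a 40-degree frustum, comfortably inside an assumed 90-degree user field of view, whereas a very wide 10 mm lens would exceed it.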
An apparatus such as a head-mounted display (HMD) may have a camera for capturing a visual scene for presentation via the HMD. A user of the apparatus may be operating a second, physical camera for capturing video or still images within the visual scene. The HMD may generate an augmented reality (AR) experience by presenting an AR view frustum representative of the actual view frustum of the physical camera. The field of view of the user viewing the captured visual scene via the AR experience is generally larger than the AR view frustum, allowing the user to avoid unnatural tracking movements and/or hunting for subjects.1. A computer-implemented method, comprising: capturing a visual scene with an augmented reality device; augmenting the visual scene with an augmented reality view frustum associated with a physical camera, wherein the augmented reality view frustum is smaller than a field of view of a user of the augmented reality device; tracking at least one of movement and positioning of the physical camera; and at least one of moving and positioning the augmented reality view frustum in a manner commensurate with the at least one of the movement and positioning of the physical camera. 2. The computer-implemented method of claim 1, wherein the user field of view is a user field of view represented in the captured visual scene. 3. The computer-implemented method of claim 1, wherein the tracking of the at least one of the movement and positioning of the physical camera comprises sensing and monitoring a tracking marker associated with the physical camera. 4. The computer-implemented method of claim 1, wherein the visual scene comprises at least one object moving into or out of the augmented reality view frustum. 5. The computer-implemented method of claim 1, further comprising presenting at least one operating characteristic of the physical camera with the augmented reality view frustum. 6. 
The computer-implemented method of claim 5, further comprising adjusting the at least one operating characteristic of the physical camera via the augmented reality device. 7. The computer-implemented method of claim 5, further comprising adjusting one or more characteristics of the augmented reality view frustum based upon the at least one operating characteristic of the physical camera. 8. The computer-implemented method of claim 1, further comprising setting a size of the augmented reality view frustum to be commensurate with a focal length of the physical camera's lens. 9. The computer-implemented method of claim 1, wherein at least one of the movement and positioning of the physical camera is further based upon at least one of movement and positioning of the augmented reality device or the user of the augmented reality device. 10. An apparatus, comprising: at least one camera adapted to capture a visual scene; an augmented reality component adapted to identify movement and positioning of a physical camera; and a display on which an augmented reality experience is presented, the augmented reality experience comprising the captured visual scene and an augmented reality view frustum representative of a view frustum of the physical camera in accordance with the movement and positioning of the physical camera, the augmented reality view frustum being representative of a field of view smaller than a field of view of a user of the apparatus. 11. The apparatus of claim 10, wherein the apparatus is communicatively connected to the physical camera via a wireless or wired connection. 12. The apparatus of claim 10, further comprising at least one sensor adapted to monitor the movement and positioning of the physical camera and transmit information reflecting the movement and positioning of the physical camera to the augmented reality component. 13. 
The apparatus of claim 12, wherein the at least one sensor monitors the movement and positioning of the physical camera by monitoring a tracking marker associated with the physical camera. 14. The apparatus of claim 10, wherein the display is further adapted to display one or more operating parameters of the physical camera in conjunction with the augmented reality view frustum. 15. An apparatus, comprising: a first camera capturing a visual scene; a second camera communicatively connected to the first camera and capturing a subset of the visual scene captured by the first camera; an augmented reality component generating an augmented reality experience; and a display on which the augmented reality experience is presented, the augmented reality experience comprising the visual scene captured by the first camera and an augmented reality view frustum representative of a view frustum of the second camera in accordance with the movement and positioning of the second camera, the augmented reality view frustum being representative of a field of view smaller than a field of view of a user of the apparatus. 16. The apparatus of claim 15, wherein the second camera is co-located with the first camera. 17. The apparatus of claim 15, wherein the movement and positioning of the second camera follows movement and positioning of at least one of the apparatus and the user. 18. The apparatus of claim 15, wherein the second camera is remotely located from the first camera, the second camera being controlled by the user of the apparatus. 19. The apparatus of claim 18, further comprising at least one sensor tracking the movement and positioning of the second camera. 20. The apparatus of claim 15, wherein the augmented reality component presents the augmented reality view frustum in conjunction with operating parameters of the second camera.
2,600
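The view-frustum record above sizes the AR overlay frustum "commensurate with a focal length of the physical camera's lens" (claim 8) and keeps it smaller than the user's field of view. A minimal sketch of that geometry, assuming a simple pinhole camera model; the 36 mm sensor width and all focal lengths are hypothetical illustration values, not from the record:

```python
import math

def frustum_half_angle(focal_length_mm, sensor_width_mm):
    """Horizontal half-angle of a pinhole camera's view frustum."""
    return math.atan((sensor_width_mm / 2.0) / focal_length_mm)

def in_frustum(subject_angle_rad, focal_length_mm, sensor_width_mm=36.0):
    """True if a subject at the given angular offset from the optical
    axis falls inside the camera's horizontal frustum."""
    return abs(subject_angle_rad) <= frustum_half_angle(
        focal_length_mm, sensor_width_mm)

# A longer lens yields a narrower frustum, so the overlay shrinks with
# focal length as the claim describes.
wide = frustum_half_angle(24.0, 36.0)   # hypothetical 24 mm lens
tele = frustum_half_angle(200.0, 36.0)  # hypothetical 200 mm lens
assert tele < wide
```

With such a check, the AR device can render only the frustum outline while the user's full (larger) field of view stays visible, which is what lets the user see subjects before they enter the physical camera's frame.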
9,975
9,975
15,141,645
2,656
Systems and processes for unified language modeling are provided. In accordance with one example, a method includes, at an electronic device with one or more processors and memory, receiving a character of a sequence of characters and determining a current character context based on the received character of the sequence of characters and a previous character context. The method further includes determining a current word representation based on the current character context and determining a current word context based on the current word representation and a previous word context. The method further includes determining a next word representation based on the current word context and providing the next word representation.
1. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: receive a character of a sequence of characters; determine a current character context based on the received character of the sequence of characters and a previous character context; determine a current word representation based on the current character context; determine a current word context based on the current word representation and a previous word context; determine a next word representation based on the current word context; and provide the next word representation. 2. The non-transitory computer-readable storage medium of claim 1, wherein the current word representation has a first dimensionality and the next word representation has a second dimensionality greater than the first dimensionality. 3. The non-transitory computer-readable storage medium of claim 1, wherein determining the current word representation comprises: weighting the current character context using a weight matrix. 4. The non-transitory computer-readable storage medium of claim 1, wherein a dimensionality of the character of the sequence of characters corresponds to a number of characters. 5. The non-transitory computer-readable storage medium of claim 4, wherein a dimensionality of the next word representation corresponds to a number of words. 6. The non-transitory computer-readable storage medium of claim 1, wherein determining the current character context further comprises: determining the current character context using an activation function, wherein the activation function comprises a sigmoid function, a hyperbolic tangent function, a rectified linear unit function, or a combination thereof. 7. 
The non-transitory computer-readable storage medium of claim 1, wherein providing the next word representation comprises providing the next word representation using an output layer of a neural network. 8. The non-transitory computer-readable storage medium of claim 7, wherein the neural network is a recurrent neural network. 9. The non-transitory computer-readable storage medium of claim 1, further comprising: encoding the character of the sequence of characters using 1-of-M encoding binary code. 10. The non-transitory computer-readable storage medium of claim 1, further comprising: encoding the character of the sequence of characters using one or more Gray codes. 11. The non-transitory computer-readable storage medium of claim 1, further comprising: encoding the character of the sequence of characters using 1-of-M encoding binary code and one or more Gray codes. 12. The non-transitory computer-readable storage medium of claim 7, wherein at least one of the current word representation and the next word representation are vector representations. 13. A method comprising: at an electronic device with one or more processors and memory: receiving a character of a sequence of characters; determining a current character context based on the received character of the sequence of characters and a previous character context; determining a current word representation based on the current character context; determining a current word context based on the current word representation and a previous word context; determining a next word representation based on the current word context; and providing the next word representation. 14. 
An electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a character of a sequence of characters; determining a current character context based on the received character of the sequence of characters and a previous character context; determining a current word representation based on the current character context; determining a current word context based on the current word representation and a previous word context; determining a next word representation based on the current word context; and providing the next word representation.
Systems and processes for unified language modeling are provided. In accordance with one example, a method includes, at an electronic device with one or more processors and memory, receiving a character of a sequence of characters and determining a current character context based on the received character of the sequence of characters and a previous character context. The method further includes determining a current word representation based on the current character context and determining a current word context based on the current word representation and a previous word context. The method further includes determining a next word representation based on the current word context and providing the next word representation.1. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: receive a character of a sequence of characters; determine a current character context based on the received character of the sequence of characters and a previous character context; determine a current word representation based on the current character context; determine a current word context based on the current word representation and a previous word context; determine a next word representation based on the current word context; and provide the next word representation. 2. The non-transitory computer-readable storage medium of claim 1, wherein the current word representation has a first dimensionality and the next word representation has a second dimensionality greater than the first dimensionality. 3. The non-transitory computer-readable storage medium of claim 1, wherein determining the current word representation comprises: weighting the current character context using a weight matrix. 4. 
The non-transitory computer-readable storage medium of claim 1, wherein a dimensionality of the character of the sequence of characters corresponds to a number of characters. 5. The non-transitory computer-readable storage medium of claim 4, wherein a dimensionality of the next word representation corresponds to a number of words. 6. The non-transitory computer-readable storage medium of claim 1, wherein determining the current character context further comprises: determining the current character context using an activation function, wherein the activation function comprises a sigmoid function, a hyperbolic tangent function, a rectified linear unit function, or a combination thereof. 7. The non-transitory computer-readable storage medium of claim 1, wherein providing the next word representation comprises providing the next word representation using an output layer of a neural network. 8. The non-transitory computer-readable storage medium of claim 7, wherein the neural network is a recurrent neural network. 9. The non-transitory computer-readable storage medium of claim 1, further comprising: encoding the character of the sequence of characters using 1-of-M encoding binary code. 10. The non-transitory computer-readable storage medium of claim 1, further comprising: encoding the character of the sequence of characters using one or more Gray codes. 11. The non-transitory computer-readable storage medium of claim 1, further comprising: encoding the character of the sequence of characters using 1-of-M encoding binary code and one or more Gray codes. 12. The non-transitory computer-readable storage medium of claim 7, wherein at least one of the current word representation and the next word representation are vector representations. 13. 
A method comprising: at an electronic device with one or more processors and memory: receiving a character of a sequence of characters; determining a current character context based on the received character of the sequence of characters and a previous character context; determining a current word representation based on the current character context; determining a current word context based on the current word representation and a previous word context; determining a next word representation based on the current word context; and providing the next word representation. 14. An electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a character of a sequence of characters; determining a current character context based on the received character of the sequence of characters and a previous character context; determining a current word representation based on the current character context; determining a current word context based on the current word representation and a previous word context; determining a next word representation based on the current word context; and providing the next word representation.
2,600
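The unified language-modeling record above chains a character-level context into a word representation via a weight matrix (claim 3), with the character input 1-of-M encoded (claim 9). A toy sketch of one such step in pure Python; the 2-dimensional vectors and all weight values are hypothetical, chosen only to make the recurrence concrete:

```python
import math

def rnn_step(x, h_prev, Wx, Wh):
    """One recurrent update: new context = tanh(Wx.x + Wh.h_prev).
    Vectors are plain lists; Wx and Wh are row-major matrices."""
    return [math.tanh(sum(wx_i[j] * x[j] for j in range(len(x))) +
                      sum(wh_i[j] * h_prev[j] for j in range(len(h_prev))))
            for wx_i, wh_i in zip(Wx, Wh)]

def word_representation(char_context, W):
    """Project the character context into a word representation
    by weighting it with a matrix, as in claim 3."""
    return [sum(w_i[j] * char_context[j] for j in range(len(char_context)))
            for w_i in W]

# Current character context from the received character and the previous
# context; then a word representation from that context (toy 2-d example).
char = [1.0, 0.0]  # 1-of-M encoded character (claim 9)
char_ctx = rnn_step(char, [0.0, 0.0],
                    [[0.5, 0.0], [0.0, 0.5]],   # hypothetical Wx
                    [[0.1, 0.0], [0.0, 0.1]])   # hypothetical Wh
word_rep = word_representation(char_ctx, [[1.0, 0.0], [0.0, 1.0]])
```

The same `rnn_step` shape would then be reused at the word level: the word representation and the previous word context produce the current word context, from which the next word representation is read out.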
9,976
9,976
15,145,362
2,611
A method includes sequentially outputting from an imaging sensor each pixel row of a set of pixel rows of an image captured by the imaging sensor. The method further includes displaying, at a display device, a pixel row representative of a first pixel row of the captured image prior to a second pixel row of the captured image being output by the imaging sensor. An apparatus includes an imaging sensor having a first lens that imparts a first type of spatial distortion, a display device coupled to the imaging sensor, the display to display imagery captured by the imaging sensor with the first spatial distortion, and an eyepiece lens aligned with the display, the eyepiece lens imparting a second type of spatial distortion that compensates for the first type of spatial distortion.
1. A method comprising: sequentially outputting from a first imaging sensor each pixel row of a first set of pixel rows of a first image captured by the first imaging sensor; and displaying, at a display device, a pixel row representative of a first pixel row of the first image prior to a second pixel row of the first image being output by the first imaging sensor. 2. The method of claim 1, further comprising: buffering a subset of the first set of pixel rows in a buffer, the subset including the first pixel row; modifying the buffered subset of pixel rows in the buffer to generate the pixel row representative of the first pixel row; and wherein displaying the pixel row representative of the first pixel row comprises accessing the pixel row representative of the first pixel row from the buffer and driving a corresponding row of the display device with the accessed pixel row. 3. The method of claim 2, wherein modifying the buffered subset of pixel rows comprises: receiving augmented reality overlay information for one or more pixel rows of the first subset of pixel rows; and modifying the buffered subset of pixel rows based on the augmented reality overlay information. 4. The method of claim 2, wherein modifying the buffered subset of pixel rows comprises: performing one or more filtering processes on the buffered subset of pixel rows. 5. The method of claim 4, wherein performing one or more filtering processes comprises: performing at least one of: a spatial filtering process; and a chromatic filtering process. 6. 
The method of claim 1, further comprising: buffering the first pixel row in a pixel row buffer; modifying the first pixel row in the pixel row buffer based on augmented reality overlay information associated with a position of the first pixel row to generate the pixel row representative of the first pixel row; and wherein displaying the pixel row representative of the first pixel row comprises accessing the pixel row representative of the first pixel row from the buffer and driving the display device with the accessed pixel row. 7. The method of claim 1, further comprising: displaying, at the display device, a pixel row representative of a second pixel row prior to a third pixel row of the first image being output by the first imaging sensor. 8. The method of claim 1, further comprising: sequentially outputting from a second imaging sensor each pixel row of a second set of pixel rows of a second image captured by the second imaging sensor; and displaying, at the display device, a pixel row representative of a third pixel row of the second image prior to a fourth pixel row of the second image being output by the second imaging sensor. 9. The method of claim 8, wherein: the first image is displayed in a first region of the display; and the second image is displayed in a second region of the display concurrent with the display of the first image. 10. The method of claim 8, wherein: the first image is displayed in a first region of the display at a first time; the second image is displayed in a second region of the display at a second time different than the first time; the second region is inactive at the first time; and the first region is inactive at the second time. 11. 
An apparatus comprising: a first imaging sensor having an output to sequentially output pixel rows of a first captured image; and a display controller coupled to the output of the first imaging sensor, the display controller to begin sequential display of pixel rows of the first captured image at a display device before a last pixel row of the first captured image is output by the first imaging sensor. 12. The apparatus of claim 11, further comprising: a pixel row buffer coupled to the output of the first imaging sensor, the pixel row buffer having a plurality of entries to buffer a subset of the pixel rows of the first captured image in a buffer; a compositor coupled to the pixel row buffer, the compositor to modify the buffered subset of pixel rows to generate a modified subset of pixel rows; and wherein the display controller is coupled to the pixel row buffer, the display controller is to sequentially display the pixel rows of the first captured image by sequentially accessing each pixel row of the modified subset of pixel rows from the pixel row buffer and is to drive a corresponding row of the display device with the accessed pixel row. 13. The apparatus of claim 12, wherein: the compositor further is to receive augmented reality overlay information for one or more pixel rows of the subset of pixel rows; and the compositor is to modify the buffered subset of pixel rows based on the augmented reality overlay information. 14. The apparatus of claim 12, wherein: the compositor is to modify the buffered subset of pixel rows by performing one or more filtering processes on the buffered subset of pixel rows. 15. The apparatus of claim 14, wherein the one or more filtering processes comprises at least one of: a spatial filtering process; and a chromatic filtering process. 16. 
The apparatus of claim 11, further comprising: a pixel row buffer to buffer a first pixel row of the first captured image in a pixel row buffer; a compositor coupled to the pixel row buffer, the compositor to modify the first pixel row in the pixel row buffer based on augmented reality overlay information associated with a position of the first pixel row to generate a pixel row representative of the first pixel row; and wherein the display controller is coupled to the pixel row buffer and is to display the pixel row representative of the first pixel row by accessing the pixel row representative of the first pixel row from the buffer and is to drive the display device with the accessed pixel row. 17. The apparatus of claim 11, wherein: the display controller further is to display, at the display device, a pixel row representative of a second pixel row prior to a third pixel row of the first captured image being output by the first imaging sensor. 18. The apparatus of claim 11, further comprising: a second imaging sensor having an output to sequentially output pixel rows of a second captured image; and wherein the display controller is coupled to the second imaging sensor and further is to begin sequential display of pixel rows of the second captured image at the display device before a last pixel row of the second captured image is output by the second imaging sensor. 19. The apparatus of claim 18, wherein: the display controller is to display the first captured image in a first region of the display to display the second captured image in a second region of the display concurrent with the display of the first captured image. 20. 
The apparatus of claim 18, wherein: the display controller is to display the first captured image in a first region of the display at a first time; the display controller is to display the second captured image in a second region of the display at a second time different than the first time; wherein the second region is inactive at the first time; and wherein the first region is inactive at the second time. 21. The apparatus of claim 11, further comprising: an eyepiece lens aligned with the display, the eyepiece lens imparting a first type of spatial distortion; and wherein the first imaging sensor comprises a lens that imparts a second type of spatial distortion that is complementary to the first type of spatial distortion. 22. A head mounted display system comprising the apparatus of claim 11. 23. An apparatus comprising: an imaging sensor having a lens that imparts a first type of spatial distortion; a display device coupled to the imaging sensor, the display device to display imagery captured by the imaging sensor with the first type of spatial distortion; and an eyepiece lens aligned with the display, the eyepiece lens imparting a second type of spatial distortion that compensates for the first type of spatial distortion. 24. The apparatus of claim 23, wherein the first type of spatial distortion is barrel distortion and the second type of spatial distortion is pincushion distortion. 25. The apparatus of claim 23, wherein the first type of spatial distortion is pincushion distortion and the second type of spatial distortion is barrel distortion. 26. A head mounted display comprising the apparatus of claim 23.
A method includes sequentially outputting from an imaging sensor each pixel row of a set of pixel rows of an image captured by the imaging sensor. The method further includes displaying, at a display device, a pixel row representative of a first pixel row of the captured image prior to a second pixel row of the captured image being output by the imaging sensor. An apparatus includes an imaging sensor having a first lens that imparts a first type of spatial distortion, a display device coupled to the imaging sensor, the display to display imagery captured by the imaging sensor with the first spatial distortion, and an eyepiece lens aligned with the display, the eyepiece lens imparting a second type of spatial distortion that compensates for the first type of spatial distortion.1. A method comprising: sequentially outputting from a first imaging sensor each pixel row of a first set of pixel rows of a first image captured by the first imaging sensor; and displaying, at a display device, a pixel row representative of a first pixel row of the first image prior to a second pixel row of the first image being output by the first imaging sensor. 2. The method of claim 1, further comprising: buffering a subset of the first set of pixel rows in a buffer, the subset including the first pixel row; modifying the buffered subset of pixel rows in the buffer to generate the pixel row representative of the first pixel row; and wherein displaying the pixel row representative of the first pixel row comprises accessing the pixel row representative of the first pixel row from the buffer and driving a corresponding row of the display device with the accessed pixel row. 3. The method of claim 2, wherein modifying the buffered subset of pixel rows comprises: receiving augmented reality overlay information for one or more pixel rows of the first subset of pixel rows; and modifying the buffered subset of pixel rows based on the augmented reality overlay information. 4. 
The method of claim 2, wherein modifying the buffered subset of pixel rows comprises: performing one or more filtering processes on the buffered subset of pixel rows. 5. The method of claim 4, wherein performing one or more filtering processes comprises: performing at least one of: a spatial filtering process; and a chromatic filtering process. 6. The method of claim 1, further comprising: buffering the first pixel row in a pixel row buffer; modifying the first pixel row in the pixel row buffer based on augmented reality overlay information associated with a position of the first pixel row to generate the pixel row representative of the first pixel row; and wherein displaying the pixel row representative of the first pixel row comprises accessing the pixel row representative of the first pixel row from the buffer and driving the display device with the accessed pixel row. 7. The method of claim 1, further comprising: displaying, at the display device, a pixel row representative of a second pixel row prior to a third pixel row of the first image being output by the first imaging sensor. 8. The method of claim 1, further comprising: sequentially outputting from a second imaging sensor each pixel row of a second set of pixel rows of a second image captured by the second imaging sensor; and displaying, at the display device, a pixel row representative of a third pixel row of the second image prior to a fourth pixel row of the second image being output by the second imaging sensor. 9. The method of claim 8, wherein: the first image is displayed in a first region of the display; and the second image is displayed in a second region of the display concurrent with the display of the first image. 10. 
The method of claim 8, wherein: the first image is displayed in a first region of the display at a first time; the second image is displayed in a second region of the display at a second time different than the first time; the second region is inactive at the first time; and the first region is inactive at the second time. 11. An apparatus comprising: a first imaging sensor having an output to sequentially output pixel rows of a first captured image; and a display controller coupled to the output of the first imaging sensor, the display controller to begin sequential display of pixel rows of the first captured image at a display device before a last pixel row of the first captured image is output by the first imaging sensor. 12. The apparatus of claim 11, further comprising: a pixel row buffer coupled to the output of the first imaging sensor, the pixel row buffer having a plurality of entries to buffer a subset of the pixel rows of the first captured image in a buffer; a compositor coupled to the pixel row buffer, the compositor to modify the buffered subset of pixel rows to generate a modified subset of pixel rows; and wherein the display controller is coupled to the pixel row buffer, the display controller is to sequentially display the pixel rows of the first captured image by sequentially accessing each pixel row of the modified subset of pixel rows from the pixel row buffer and is to drive a corresponding row of the display device with the accessed pixel row. 13. The apparatus of claim 12, wherein: the compositor further is to receive augmented reality overlay information for one or more pixel rows of the subset of pixel rows; and the compositor is to modify the buffered subset of pixel rows based on the augmented reality overlay information. 14. The apparatus of claim 12, wherein: the compositor is to modify the buffered subset of pixel rows by performing one or more filtering processes on the buffered subset of pixel rows. 15. 
The apparatus of claim 14, wherein the one or more filtering processes comprises at least one of: a spatial filtering process; and a chromatic filtering process. 16. The apparatus of claim 11, further comprising: a pixel row buffer to buffer a first pixel row of the first captured image in a pixel row buffer; a compositor coupled to the pixel row buffer, the compositor to modify the first pixel row in the pixel row buffer based on augmented reality overlay information associated with a position of the first pixel row to generate a pixel row representative of the first pixel row; and wherein the display controller is coupled to the pixel row buffer and is to display the pixel row representative of the first pixel row by accessing the pixel row representative of the first pixel row from the buffer and is to drive the display device with the accessed pixel row. 17. The apparatus of claim 11, wherein: the display controller further is to display, at the display device, a pixel row representative of a second pixel row prior to a third pixel row of the first captured image being output by the first imaging sensor. 18. The apparatus of claim 11, further comprising: a second imaging sensor having an output to sequentially output pixel rows of a second captured image; and wherein the display controller is coupled to the second imaging sensor and further is to begin sequential display of pixel rows of the second captured image at the display device before a last pixel row of the second captured image is output by the second imaging sensor. 19. The apparatus of claim 18, wherein: the display controller is to display the first captured image in a first region of the display to display the second captured image in a second region of the display concurrent with the display of the first captured image. 20. 
The apparatus of claim 18, wherein: the display controller is to display the first captured image in a first region of the display at a first time; the display controller is to display the second captured image in a second region of the display at a second time different than the first time; wherein the second region is inactive at the first time; and wherein the first region is inactive at the second time. 21. The apparatus of claim 11, further comprising: an eyepiece lens aligned with the display, the eyepiece lens imparting a first type of spatial distortion; and wherein the first imaging sensor comprises a lens that imparts a second type of spatial distortion that is complementary to the first type of spatial distortion. 22. A head mounted display system comprising the apparatus of claim 11. 23. An apparatus comprising: an imaging sensor having a lens that imparts a first type of spatial distortion; a display device coupled to the imaging sensor, the display device to display imagery captured by the imaging sensor with the first type of spatial distortion; and an eyepiece lens aligned with the display, the eyepiece lens imparting a second type of spatial distortion that compensates for the first type of spatial distortion. 24. The apparatus of claim 23, wherein the first type of spatial distortion is barrel distortion and the second type of spatial distortion is pincushion distortion. 25. The apparatus of claim 23, wherein the first type of spatial distortion is pincushion distortion and the second type of spatial distortion is barrel distortion. 26. A head mounted display comprising the apparatus of claim 23.
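Claims 21 and 23-26 above pair an imaging-sensor lens and an eyepiece lens with complementary spatial distortions (barrel vs. pincushion). As a rough illustration of why opposite-sign radial distortions approximately cancel, here is a minimal single-term radial model; the function name and sign convention are assumptions, and real optical calibration uses higher-order terms.

```python
def radial_distort(x, y, k):
    """Apply a single-term radial distortion r' = r * (1 + k * r^2).

    By one common convention, k > 0 gives pincushion and k < 0 gives
    barrel distortion. Applying +k and then -k cancels only to first
    order in k, which is the sense in which the two lens distortions
    in claims 21 and 23-25 are "complementary". Illustrative only.
    """
    r2 = x * x + y * y
    scale = 1.0 + k * r2
    return x * scale, y * scale
```

For a normalized image point at x = 0.5, distorting with k = 0.1 and then with k = -0.1 returns a point within about 0.001 of the original, showing the approximate cancellation.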
2,600
9,977
9,977
15,062,104
2,625
Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. An augmented reality display system comprises a handheld component housing an electromagnetic field emitter, the electromagnetic field emitter emitting a known magnetic field, a head mounted component coupled to one or more electromagnetic sensors that detect the magnetic field emitted by the electromagnetic field emitter housed in the handheld component, wherein a head pose is known, and a controller communicatively coupled to the handheld component and the head mounted component, the controller receiving magnetic field data from the handheld component and receiving sensor data from the head mounted component, wherein the controller determines a hand pose based at least in part on the received magnetic field data and the received sensor data.
1. An augmented reality (AR) display system, comprising: an electromagnetic field emitter to emit a known magnetic field; an electromagnetic sensor to measure a parameter related to a magnetic flux at the electromagnetic sensor as a result of the emitted known magnetic field, wherein world coordinates of the electromagnetic sensor are known; a controller to determine pose information relative to the electromagnetic field emitter based at least in part on the measured parameter related to the magnetic flux at the electromagnetic sensor; and a display system to display virtual content to a user based at least in part on the determined pose information relative to the electromagnetic field emitter. 2. The AR display system of claim 1, wherein the electromagnetic field emitter resides in a mobile component of the AR display system. 3. The AR display system of claim 2, wherein the mobile component is a hand-held component. 4. The AR display system of claim 2, wherein the mobile component is a totem. 5. The AR display system of claim 2, wherein the mobile component is a head-mounted component of the AR display system. 6. The AR display system of claim 1, further comprising a head-mounted component that houses the display system, wherein the electromagnetic sensor is operatively coupled to the head-mounted component. 7. The AR display system of claim 1, wherein the world coordinates of the electromagnetic sensor is known based at least in part on SLAM analysis performed to determine head pose information, wherein the electromagnetic sensor is operatively coupled to a head-mounted component that houses the display system. 8. The AR display system of claim 7, further comprising one or more cameras operatively coupled to the head-mounted component, and wherein the SLAM analysis is performed based at least on data captured by the one or more cameras. 9. 
The AR display system of claim 1, wherein the electromagnetic sensors comprise one or more inertial measurement units (IMUs). 10. The AR display system of claim 1, wherein the pose information corresponds to at least a position and orientation of the electromagnetic field emitter relative to the world. 11. The AR display system of claim 1, wherein the pose information is analyzed to determine world coordinates corresponding to the electromagnetic field emitter. 12. The AR display system of claim 1, wherein the controller detects an interaction with one or more virtual contents based at least in part on the pose information corresponding to the electromagnetic field emitter. 13. The AR display system of claim 12, wherein the display system displays virtual content to the user based at least in part on the detected interaction. 14. The AR display system of claim 1, wherein the electromagnetic sensor comprises at least three coils to measure magnetic flux in three directions. 15. The AR display system of claim 14, wherein the at least three coils are housed together at substantially the same location, the electromagnetic sensor being coupled to a head-mounted component of the AR display system. 16. The AR display system of claim 14, wherein the at least three coils are housed at different locations of the head-mounted component of the AR display system. 17. The AR display system of claim 1, further comprising a control and quick release module to decouple the magnetic field emitted by the electromagnetic field emitter. 18. The AR display system of claim 1, further comprising additional localization resources to determine the world coordinates of the electromagnetic field emitter. 19. The AR display system of claim 18, wherein the additional localization resources comprises a GPS receiver. 20. The AR display system of claim 18, wherein the additional localization resources comprises a beacon. 21. 
The AR display system of claim 1, wherein the electromagnetic sensor comprises a non-solid ferrite cube. 22. The AR display system of claim 1, wherein the electromagnetic sensor comprises a stack of ferrite disks. 23. The AR display system of claim 1, wherein the electromagnetic sensor comprises a plurality of ferrite rods each having a polymer coating. 24. The AR display system of claim 1, wherein the electromagnetic sensor comprises a time division multiplexing switch. 25. A method to display augmented reality, comprising: emitting, through an electromagnetic field emitter, a known magnetic field; measuring, through an electromagnetic sensor, a parameter related to a magnetic flux at the electromagnetic sensor as a result of the emitted known magnetic field, wherein world coordinates of the electromagnetic sensor are known; determining pose information relative to the electromagnetic field emitter based at least in part on the measured parameter related to the magnetic flux at the electromagnetic sensor; and displaying virtual content to a user based at least in part on the determined pose information relative to the electromagnetic field emitter. 26. The method of claim 25, wherein the electromagnetic field emitter resides in a mobile component of the AR display system. 27. The method of claim 26, wherein the mobile component is a hand-held component. 28. The method of claim 26, wherein the mobile component is a totem. 29. The method of claim 26, wherein the mobile component is a head-mounted component of the AR display system. 30. The method of claim 25, further comprising housing the display system in a head-mounted component, wherein the electromagnetic sensor is operatively coupled to the head-mounted component. 31. 
The method of claim 25, wherein the world coordinates of the electromagnetic sensor is known based at least in part on SLAM analysis performed to determine head pose information, wherein the electromagnetic sensor is operatively coupled to a head-mounted component that houses the display system. 32. The method of claim 31, capturing image data through one or more cameras that are operatively coupled to the head-mounted component, and wherein the SLAM analysis is performed based at least on data captured by the one or more cameras. 33. The method of claim 25, wherein the electromagnetic sensors comprise one or more inertial measurement units (IMUs). 34. The method of claim 25, wherein the pose information corresponds to at least a position and orientation of the electromagnetic field emitter relative to the world. 35. The method of claim 25, wherein the pose information is analyzed to determine world coordinates corresponding to the electromagnetic field emitter. 36. The method of claim 25, further comprising detecting an interaction with one or more virtual contents based at least in part on the pose information corresponding to the electromagnetic field emitter. 37. The method of claim 36, further comprising displaying virtual content to the user based at least in part on the detected interaction. 38. The method of claim 25, wherein the electromagnetic sensor comprises at least three coils to measure magnetic flux in three directions. 39. The method of claim 38, wherein the at least three coils are housed together at substantially the same location, the electromagnetic sensor being coupled to a head-mounted component of the AR display system. 40. The method of claim 38, wherein the at least three coils are housed at different locations of the head-mounted component of the AR display system. 41. The method of claim 25, further comprising decoupling the magnetic field emitted by the electromagnetic field emitter through a control and quick release module. 42. 
The method of claim 25, further comprising determining the world coordinates of the electromagnetic field emitter through additional localization resources. 43. The method of claim 42, wherein the additional localization resources comprises a GPS receiver. 44. The method of claim 42, wherein the additional localization resources comprises a beacon. 45. The method of claim 25, wherein the electromagnetic sensor comprises a non-solid ferrite cube. 46. The method of claim 25, wherein the electromagnetic sensor comprises a stack of ferrite disks. 47. The method of claim 25, wherein the electromagnetic sensor comprises a plurality of ferrite rods each having a polymer coating. 48. The method of claim 25, wherein the electromagnetic sensor comprises a time division multiplexing switch. 49. An augmented reality display system, comprising: a handheld component housing an electromagnetic field emitter, the electromagnetic field emitter emitting a known magnetic field; a head mounted component having a display system that displays virtual content to a user, the head mounted component coupled to one or more electromagnetic sensors that detect the magnetic field emitted by the electromagnetic field emitter housed in the handheld component, wherein a head pose is known; and a controller communicatively coupled to the handheld component and the head mounted component, the controller receiving magnetic field data from the handheld component, and receiving sensor data from the head mounted component, wherein the controller determines a hand pose based at least in part on the received magnetic field data and the received sensor data, wherein the display system modifies the virtual content displayed to the user based at least in part on the determined hand pose. 50. The AR display system of claim 49, wherein the handheld component is mobile. 51. The AR display system of claim 50, wherein the handheld component is a totem. 52. 
The AR display system of claim 50, wherein the handheld component is a gaming component. 53. The AR display system of claim 49, wherein the head pose is known based at least in part on SLAM analysis. 54. The AR display system of claim 53, further comprising one or more cameras operatively coupled to the head-mounted component, and wherein the SLAM analysis is performed based at least on data captured by the one or more cameras. 55. The AR display system of claim 49, wherein the electromagnetic sensor comprises one or more inertial measurement units (IMUs). 56. The AR display system of claim 49, wherein the head pose corresponds to at least a position and orientation of the electromagnetic sensor relative to the world. 57. The AR display system of claim 49, wherein the hand pose is analyzed to determine world coordinates corresponding to the handheld component. 58. The AR display system of claim 49, wherein the controller detects an interaction with one or more virtual contents based at least in part on the determined hand pose. 59. The AR display system of claim 58, wherein the display system displays the virtual content to the user based at least in part on the detected interaction. 60. The AR display system of claim 49, wherein the electromagnetic sensor comprises at least three coils to measure magnetic flux in three directions. 61. The AR display system of claim 60, wherein the at least three coils are housed together at substantially the same location. 62. The AR display system of claim 60, wherein the at least three coils are housed at different locations of the head-mounted component. 63. The AR display system of claim 49, further comprising a control and quick release module to decouple the magnetic field emitted by the electromagnetic field emitter. 64. The AR display system of claim 49, further comprising additional localization resources to determine the hand pose. 65. 
The AR display system of claim 64, wherein the additional localization resources comprises a GPS receiver. 66. The AR display system of claim 64, wherein the additional localization resources comprises a beacon. 67. The AR display system of claim 49, wherein at least one of the one or more electromagnetic sensors comprises a non-solid ferrite cube. 68. The AR display system of claim 49, wherein at least one of the one or more electromagnetic sensors comprises a stack of ferrite disks. 69. The AR display system of claim 49, wherein at least one of the one or more electromagnetic sensors comprises a plurality of ferrite rods each having a polymer coating. 70. The AR display system of claim 49, wherein at least one of the one or more electromagnetic sensors comprises a time division multiplexing switch.
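The claims above determine pose from a parameter related to the magnetic flux measured at the sensor. A full 6-DOF electromagnetic solver fits both position and orientation from multiple coil readings, but the core physical idea can be hinted at with a one-line range estimate: a dipole's field magnitude falls off roughly as 1/r³, so a flux magnitude compared against a calibration point yields a distance. This is a deliberately simplified sketch (isotropic falloff, no orientation terms); the function and parameter names are illustrative, not from the source.

```python
def estimate_range(flux, flux_ref, range_ref):
    """Estimate emitter-to-sensor distance from measured flux magnitude.

    Uses the dipole far-field approximation |B| ~ 1/r^3: if flux_ref
    was measured at a known distance range_ref, a new reading `flux`
    implies r = range_ref * (flux_ref / flux)^(1/3). Orientation
    effects, which a real tracker must also solve for, are ignored.
    """
    return range_ref * (flux_ref / flux) ** (1.0 / 3.0)
```

For example, a reading one eighth of the calibration flux corresponds to doubling the calibration distance, since 8^(1/3) = 2.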
2,600
9,978
9,978
14,566,937
2,618
An image processing method includes the following steps. A two-dimension (2D) image is obtained; a gray-scale processing is performed; a smoothing processing is performed; and a height calculation for constructing a three-dimension (3D) model is performed. The 2D image is automatically converted into a 3D model, even if a user does not have 3D model construction skills. Furthermore, the constructed 3D model has less noise and more obvious image features.
1. A method for image processing, comprising: a) retrieving a two-dimension image; b) performing a gray-scale processing to the two-dimension image; c) performing a smoothing processing to the two-dimension image; d) respectively calculating a height value corresponding to each pixel according to pixel values of a plurality of pixels of the two-dimension image, wherein the pixel value of each pixel is inversely proportional to the corresponding height value; and e) constructing a three-dimension model according to the two-dimension image and the plurality of the height values. 2. The image processing method of claim 1, wherein in the step a), the two-dimension image is received via the Internet. 3. The image processing method of claim 2, further comprising: f) generating and returning a three-dimension model file according to the three-dimension model. 4. The image processing method of claim 1, wherein in the step c), the smoothing processing is resolution lowering, Mosaic processing, Binarization processing or grid processing. 5. The image processing method of claim 1, further comprising: g) performing slicing processing to the three-dimension model. 6. The image processing method of claim 5, wherein the step g) comprises: g1) retrieving a slicing threshold; and g2) slicing the three-dimension model into a plurality of slice models, wherein the number of the plurality of slice models corresponds to slicing threshold. 7. The image processing method of claim 6, wherein the step (g2) comprises: g21) calculating a thickness value for each slice model according to a pixel value range and the slicing threshold of the plurality of pixels of the two-dimension image; and g22) slicing the three-dimension model according to the thickness value. 8. 
The image processing method of claim 7, wherein the step g22) comprises: g221) calculating the height value corresponding to each pixel of the two-dimension image according to the thickness value so that the maximum height value being corresponding to the slicing threshold; and g222) slicing the three-dimension model according to the plurality of height values. 9. The image processing method of claim 5, further comprising step h): printing the three-dimension model after slicing processing.
TechCenter: 2,600
Unnamed: 0: 9,979
level_0: 9,979
ApplicationNumber: 15,002,164
ArtUnit: 2,621
A multi-view display system that enables viewers to individually interact with the system to provide information thereto is disclosed. The system includes a multi-view display, a system controller, and an input/communications device. In the illustrative embodiment, the input/communications device provides a way for a viewer to communicate, to the system, a viewing preference pertaining to the presentation of content and facilitates associating the viewing preference with a viewing location so that the content that is ultimately displayed via the multi-view display is viewable by viewer at the proper viewing location.
1. A method for operating a multi-view display, wherein the method comprises: receiving, at a system controller, input from each of a plurality of viewers, each viewer's input being associated with one viewing location of a plurality thereof, and wherein at least some viewers' input differ from other of the viewers' input; and displaying, via the multi-view display, content that is based on each viewer's input, wherein the content associated with any one viewer's input is viewable only at the viewing location associated with the one viewer's input. 2. The method of claim 1 and further wherein the input is a viewing preference pertaining to the presentation of content. 3. The method of claim 1 and further wherein the input is conveyed to the system controller via an optically sensed object. 4. The method of claim 3 wherein a characteristic of the optically sensed object conveys the input. 5. The method of claim 4 wherein the characteristic is selected from the group consisting of color, shape, and pattern. 6. The method of claim 1 wherein receiving, at a system controller, input, further comprises: displaying, via the multi-view display, introductory information at at least some of the viewing locations, wherein the introductory information is viewed by viewers; capturing, via a sensing system, gestures of the viewers, wherein the gestures are based on the introductory information; determining, from the captured gestures, the viewing locations from which the gestures originated; and determining, from the captured gestures, the viewers' respective input. 7. The method of claim 6 wherein the introductory information directs the viewer to gesture. 8. The method of claim 6 wherein the sensing system includes a camera, and wherein the camera is disposed proximate to the multi-view display and facing the plurality of viewing locations. 9. The method of claim 6 wherein the gestures are made with a body part of the viewer. 10. 
The method of claim 1 wherein receiving, at a system controller, input, further comprises: uniquely associating, for each viewing location in at least a subset of the plurality of viewing locations, a communications device with a respective viewing location in said subset thereof; and receiving, at the system controller, input from each of the communications devices, wherein the input from at least some of the communications devices is different from the input from some other of the communications devices. 11. The method of claim 10 wherein the communications device is permanently located at each viewing location. 12. The method of claim 1 wherein receiving, at a system controller, input, further comprises: generating a datum for each of the plurality of viewing locations, wherein each datum is uniquely associated with a respective one of the viewing locations and is viewable only at the associated viewing location; displaying, on the multi-view display, the datum for at least some of the viewing locations; uniquely associating, for at least some of the viewing locations at which the datum appears, a communications device with a respective viewing location; and receiving input from each of the communications devices, wherein the input received from at least some of the communications devices is different from the input received from some other of the communications devices. 13. The method of claim 12 wherein the communications device is provided by the viewer. 14. The method of claim 12 wherein the communications device is provided to the viewer by an operator of the multi-view display. 15. The method of claim 1 wherein the input is a command that directs actions in a game, wherein game play is displayed by the multi-view display. 16. The method of claim 1 wherein the input relates to a viewer's interest in a product being advertised. 17. The method of claim 16 wherein the input is a viewer's movements with respect to the product. 18. 
A multi-view display system comprising: a multi-view display; a system controller that causes images to be displayed via the multi-view display, wherein the images are displayed simultaneously to a plurality of viewers located at a respective plurality of viewing locations, and further wherein at least some images displayed for viewing at some of the viewing locations are different from images displayed for viewing at some other of the viewing locations; and an input/locating device, wherein the input/locating device: (a) facilitates communication between the plurality of viewers and the multi-view display system, each viewer of the plurality being located at the respective viewing location; (b) provides location information to the system controller; and (c) is not pre-associated with a viewing location. 19. The multi-view display system of claim 18, wherein the input/locating device comprises a camera that is proximate to the multi-view display and faces the plurality of viewing locations. 20. The multi-view display system of claim 18, wherein the input/locating device further comprises an optically sensed object, wherein the optically sensed object comprises characteristics that provide an indication of at least one of a viewing preference pertaining to presentation of content or an identity of a particular viewer. 21. The multi-view display system of claim 18, wherein the input/locating device is an interactive display at which a viewer can input a viewing location and viewing preferences pertaining to presentation of content, wherein the interactive display is immovably installed at a location other than any of the viewing locations. 22. The multi-view display system of claim 18 wherein the input/locating device is a communications device that is capable of transmitting at least one of a viewing location and a viewing preference pertaining to presentation of content. 23. 
The multi-view display system of claim 22 wherein the input/locating device further comprises at least one of either a passive location tag or an active location tag. 24. The multi-view display system of claim 22 wherein the input/locating device further comprises a shared location-determining system. 25. The multi-view display system of claim 22 further comprising a control application that is stored in the communications device, wherein the control application facilitates communications between the communications device and the system controller.
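Claim 1 of the multi-view display patent above ties each viewer's input to one viewing location and makes the resulting content viewable only at that location. A minimal controller sketch of that association follows; the class and method names are assumptions for illustration.

```python
# Illustrative sketch of the claim-1 behavior: each viewer's input is
# associated with exactly one viewing location, and the content rendered
# for a location derives only from that location's input.

class MultiViewController:
    def __init__(self):
        self.inputs = {}            # viewing location -> viewer input

    def receive_input(self, location, preference):
        """Associate a viewer's input with a viewing location."""
        self.inputs[location] = preference

    def content_for(self, location):
        """Content viewable only at `location`; None if no input yet."""
        pref = self.inputs.get(location)
        return None if pref is None else f"content[{pref}]"

ctrl = MultiViewController()
ctrl.receive_input("seat-A", "english-subtitles")
ctrl.receive_input("seat-B", "spanish-subtitles")
```

Different seats then receive different content simultaneously, matching the "at least some viewers' input differ" limitation.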
TechCenter: 2,600
Unnamed: 0: 9,980
level_0: 9,980
ApplicationNumber: 15,439,168
ArtUnit: 2,651
A telemedicine device is implemented in an integrated housing, which includes a display, input/output ports, a videoconferencing codec, and a codec-independent hardware user interface. A processor receives inputs through the user interface, translates them into instructions understandable by the codec, and sends the translated instruction to the codec for execution. The user interface can be standardized, such that it is identical regardless of the codec in use, and can group functions logically (e.g., call control, video functions, audio functions).
1. A telemedicine device implemented in an integrated housing, the telemedicine device comprising: a display; a plurality of input/output ports; a videoconferencing codec; a codec-independent hardware user interface; and a processor configured: to receive an input through the codec-independent hardware user interface; and to translate the received input into an instruction understandable by the videoconferencing codec; and to send the instruction to the videoconferencing codec. 2. The telemedicine device according to claim 1, wherein the codec-independent hardware user interface comprises: a codec-independent call control hardware user interface; a codec-independent video function hardware user interface; and a codec-independent audio function hardware user interface. 3. The telemedicine device according to claim 1, wherein the processor is further configured: to determine whether the received input is executable without inducing an error state; and to ignore the received input if the received input is not executable without inducing an error state. 4. The telemedicine device according to claim 1, further comprising a camera. 5. The telemedicine device according to claim 4, wherein the camera further comprises an integrated microphone. 6. The telemedicine device according to claim 1, further comprising a microphone. 7. The telemedicine device according to claim 1, further comprising a wireless remote control, the wireless remote control including a codec-independent hardware user interface. 8. The telemedicine device according to claim 5, wherein the codec-independent hardware user interface of the wireless remote control mirrors the codec-independent hardware user interface of the telemedicine device. 9. 
The telemedicine device according to claim 1, wherein the processor is further configured: to translate the received input into an instruction understandable by a peripheral device coupled to one of the plurality of input/output ports; and to send the instruction to the peripheral device. 10. A telemedicine device, comprising: an integrated housing, comprising: a display; a plurality of input/output ports; a videoconferencing codec; a codec-independent hardware user interface; and a processor configured to translate user inputs received through the codec-independent hardware user interface into instructions understandable by the videoconferencing codec. 11. The telemedicine device according to claim 10, wherein the codec-independent hardware user interface comprises a plurality of codec-independent hardware control groupings, and wherein each codec-independent hardware control grouping includes a plurality of hardware controls. 12. The telemedicine device according to claim 11, wherein the plurality of codec-independent hardware control groupings comprises: a codec-independent hardware call control grouping; a codec-independent hardware video control grouping; and a codec-independent hardware audio control grouping. 13. The telemedicine device according to claim 10, further comprising a plurality of input/output devices coupled to the plurality of input/output ports. 14. The telemedicine device according to claim 13, wherein the plurality of input/output devices are selected from the group consisting of examination tools, cameras, microphones, and speakers. 15. The telemedicine device according to claim 10, further comprising a wireless remote control including a codec-independent hardware user interface. 16. The telemedicine device according to claim 15, wherein the codec-independent hardware user interface of the wireless remote control mirrors the codec-independent hardware user interface of the telemedicine device.
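The telemedicine claims above center on a processor that translates standardized, codec-independent UI inputs into codec-specific instructions (claim 1) and ignores inputs that would induce an error state (claim 3). A sketch under those claims follows; the codec names, table contents, and the `in_call` error condition are invented for illustration.

```python
# Hedged sketch of claims 1 and 3: a per-codec translation table turns
# codec-independent hardware-UI events into codec commands; inputs that
# would induce an error state are ignored.

CODEC_TABLES = {
    "codecA": {"call": "DIAL", "hangup": "DISCONNECT", "mute": "AUDIO_OFF"},
    "codecB": {"call": "CONNECT", "hangup": "END", "mute": "MUTE"},
}

def translate(codec, ui_input, in_call):
    """Return the codec-specific instruction for a UI input, or None
    when executing it would induce an error state (e.g. hanging up
    with no call in progress) -- claim 3 says to ignore such inputs."""
    if ui_input == "hangup" and not in_call:
        return None
    return CODEC_TABLES[codec].get(ui_input)
```

Because only the table varies per codec, the hardware user interface stays identical regardless of the codec in use, as the abstract describes.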
TechCenter: 2,600
Unnamed: 0: 9,981
level_0: 9,981
ApplicationNumber: 15,664,748
ArtUnit: 2,647
A terminal device includes receiver circuitry that receives parameter data from a base station in a cellular wireless telecommunication network. The terminal device also includes a storage device that stores an identifier that uniquely identifies the terminal device, and control circuitry that controls the terminal device. The control circuitry controls the terminal device, when operating in an idle mode, to perform cell reselection using at least one cell reselection parameter derived from the received parameter data and the stored unique identifier.
1. A terminal device comprising: receiver circuitry configured to receive parameter data from a base station in a cellular wireless telecommunication network; a storage device configured to store an identifier that uniquely identifies the terminal device; and control circuitry configured to control the terminal device, when operating in an idle mode, to perform cell reselection using at least one cell reselection parameter derived from the received parameter data and the stored unique identifier. 2. The terminal device according to claim 1, wherein the identifier is a UE Identifier. 3. The terminal device according to claim 2, wherein the UE Identifier is an International Mobile Subscriber Identity (IMSI). 4. The terminal device according to claim 1, wherein the cell reselection parameter is one of multiple cell reselection offsets or multiple absolute priorities for the frequency or cell. 5. The terminal device according to claim 4, wherein the parameter data includes multiple cell reselection offsets or multiple absolute priorities for the frequency or cell. 6. The terminal device according to claim 4, wherein the control circuitry is configured to calculate a priority or offset based on the unique identifier. 7. The terminal device according to claim 4, wherein the parameter data further includes either i) a parameter controlling a percentage of terminal devices applying a specific offset value or priority or ii) a parameter controlling the amount of offset or relative priority applied. 8. The terminal device according to claim 4, wherein the multiple cell reselection offsets provide difference values from a default percentage. 9. The terminal device according to claim 6, wherein the multiple cell reselection offsets provide difference values from a default percentage. 10. The terminal device according to claim 7, wherein the parameter data further includes an offset to be applied to the stored unique identifier. 11. 
A communication system comprising: a terminal device according to claim 1; and a base station in communication with the terminal device. 12. A method of operating a terminal device in a cellular wireless telecommunication network, the method comprising: receiving parameter data from a base station in the cellular wireless telecommunication network; storing an identifier that uniquely identifies the terminal device; and controlling the terminal device, when operating in an idle mode, to perform cell reselection using at least one cell reselection parameter derived from the received parameter data and the stored unique identifier. 13. The method according to claim 12, wherein the identifier is a UE Identifier. 14. The method according to claim 13, wherein the UE Identifier is an International Mobile Subscriber Identity (IMSI). 15. The method according to claim 12 wherein the cell reselection parameter is one of multiple cell reselection offsets or multiple absolute priorities for the frequency or cell. 16. The method according to claim 15, wherein the parameter data includes multiple cell reselection offsets or multiple absolute priorities for the frequency or cell. 17. The method according to claim 15, further comprising calculating a priority or offset based on the unique identifier. 18. The method according to claim 15, wherein the parameter data further includes either i) a parameter controlling a percentage of terminal devices applying a specific offset value or priority or ii) a parameter controlling the amount of offset or relative priority applied. 19. The method according to claim 15, wherein the multiple cell reselection offsets provide difference values from a default percentage. 20. The method according to claim 17, wherein the multiple cell reselection offsets provide difference values from a default percentage. 21. The method according to claim 18, wherein the parameter data further includes an offset to be applied to the stored unique identifier. 22. 
A non-transitory computer-readable medium encoded with computer readable instructions that, when executed by a computer, cause the computer to perform a method according to claim 12.
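The cell-reselection claims above derive a reselection parameter from broadcast parameter data plus the terminal's unique identifier, with the parameter data controlling what percentage of terminals apply a given offset (claim 7). A sketch of one way that derivation could look follows; the field names and the IMSI-mod-100 rule are assumptions, not specified by the claims.

```python
# Hedged sketch: the base station broadcasts an offset plus a percentage,
# and each idle-mode terminal decides from its stored unique identifier
# (e.g. its IMSI) whether it applies the offset, so roughly that
# percentage of terminals reselect differently.

def reselection_offset(imsi, parameter_data):
    """Derive this terminal's cell-reselection offset from broadcast
    parameter data and its stored unique identifier (claim 1)."""
    share = parameter_data["percentage"]   # % of terminals applying offset
    if imsi % 100 < share:
        return parameter_data["offset_db"]
    return 0

params = {"percentage": 30, "offset_db": 6}
```

Because the decision is a pure function of the identifier, no per-terminal signaling is needed: the same broadcast splits the population deterministically.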
2,600
9,982
9,982
14,693,348
2,611
A thermal imaging accessory (TIA) is linked with a head-mounted smart device (HMSD) with a data display for displaying data for an eye of a user wearing the HMSD. The HMSD supports the TIA in an orientation where a field-of-view of a thermal imaging camera of the TIA is substantially in alignment with the field-of-view of an eye looking through the data display. The HMSD is configured to: link the TIA to the HMSD, activate a thermal imaging application on the HMSD to receive data from the TIA and display it on an HMSD data display, receive thermal imaging data of a target from the TIA, process the thermal imaging data received from the TIA, and initiate a display of the processed thermal imaging data on the HMSD data display.
1. A computer-implemented method comprising: linking a thermal imaging accessory (TIA) to a head-mounted smart device (HMSD); activating a thermal imaging application on the HMSD to receive data from the TIA and display it on an HMSD data display; receiving thermal imaging data of a target from the TIA; processing the thermal imaging data received from the TIA; and initiating a display of the processed thermal imaging data on the HMSD data display. 2. The computer-implemented method of claim 1, comprising linking a mobile computing device (MCD) to the at least one of the TIA or the HMSD. 3. The computer-implemented method of claim 2, comprising executing an application on the HMSD that seeks out at least one of an in-range TIA or MCD to establish a data connection with the HMSD. 4. The computer-implemented method of claim 1, comprising transmitting data from the HMSD to the TIA. 5. The computer-implemented method of claim 1, wherein additional data is processed with the thermal imaging data. 6. The computer-implemented method of claim 1, comprising determining a range from the HMSD to the target. 7. The computer-implemented method of claim 1, wherein the processed thermal imaging data is displayed according to preset or dynamically-determined preferences. 8. A non-transitory, computer-readable medium storing computer-readable instructions executable by a computer and configured to: link a thermal imaging accessory (TIA) to a head-mounted smart device (HMSD); activate a thermal imaging application on the HMSD to receive data from the TIA and display it on an HMSD data display; receive thermal imaging data of a target from the TIA; process the thermal imaging data received from the TIA; and initiate a display of the processed thermal imaging data on the HMSD data display. 9. The non-transitory, computer-readable medium of claim 8, comprising linking a mobile computing device (MCD) to the at least one of the TIA or the HMSD. 10. 
The non-transitory, computer-readable medium of claim 9, comprising executing an application on the HMSD that seeks out at least one of an in-range TIA or MCD to establish a data connection with the HMSD. 11. The non-transitory, computer-readable medium of claim 8, comprising transmitting data from the HMSD to the TIA. 12. The non-transitory, computer-readable medium of claim 8, wherein additional data is processed with the thermal imaging data. 13. The non-transitory, computer-readable medium of claim 8, comprising determining a range from the HMSD to the target. 14. The non-transitory, computer-readable medium of claim 8, wherein the processed thermal imaging data is displayed according to preset or dynamically-determined preferences. 15. A system, comprising: a thermal imaging accessory (TIA); and a head-mounted smart device (HMSD) with a data display for displaying data for an eye of a user wearing the HMSD, wherein the HMSD supports the TIA in an orientation where a field-of-view of a thermal imaging camera of the TIA is substantially in alignment with the field-of-view of an eye looking through the data display, and wherein the HMSD is configured to: link the TIA to the HMSD; activate a thermal imaging application on the HMSD to receive data from the TIA and display it on an HMSD data display; receive thermal imaging data of a target from the TIA; process the thermal imaging data received from the TIA; and initiate a display of the processed thermal imaging data on the HMSD data display, wherein the processed thermal imaging data is displayed according to preset or dynamically-determined preferences. 16. The system of claim 15, comprising linking a mobile computing device (MCD) to the at least one of the TIA or the HMSD. 17. The system of claim 16, comprising executing an application on the HMSD that seeks out at least one of an in-range TIA or MCD to establish a data connection with the HMSD. 18. 
The system of claim 15, comprising transmitting data from the HMSD to the TIA. 19. The system of claim 15, wherein additional data is processed with the thermal imaging data. 20. The system of claim 15, comprising determining a range from the HMSD to the target.
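The five steps of claim 1 (link the TIA, activate the application, receive thermal data, process it, initiate display) can be sketched as a small processing loop. Everything below is an illustrative assumption: the class and method names, the fake TIA, and the min-max normalization used as the "processing" step are not taken from the claims.

```python
# Hedged sketch of the HMSD loop from claim 1: link -> receive ->
# process -> display. The normalization chosen for "process" and the
# FakeTIA stub are assumptions for demonstration only.

class HeadMountedSmartDevice:
    def __init__(self):
        self.linked_tia = None
        self.display_buffer = None

    def link(self, tia):
        """Establish the data connection to an in-range TIA."""
        self.linked_tia = tia

    def process(self, frame):
        """Normalize raw thermal values to 0-255 display intensities."""
        lo, hi = min(frame), max(frame)
        span = (hi - lo) or 1
        return [round(255 * (v - lo) / span) for v in frame]

    def run_once(self):
        raw = self.linked_tia.read_frame()   # receive thermal data of a target
        self.display_buffer = self.process(raw)
        return self.display_buffer           # initiate display on the HMSD

class FakeTIA:
    def read_frame(self):
        return [300.0, 310.0, 305.0]  # simulated radiometric samples

hmsd = HeadMountedSmartDevice()
hmsd.link(FakeTIA())
print(hmsd.run_once())  # → [0, 255, 128]
```

In a real device the processing step would also fold in the "additional data" of claim 5 (for example, the range to the target from claim 6) before the frame reaches the data display.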

2,600
9,983
9,983
15,087,321
2,642
Various technologies described herein pertain to detection of an opportune time period to deliver a notification. Responsive to receipt of the notification (e.g., at a user device), analysis of an attention state of a user can be initialized. Further, the opportune time period to deliver the notification can be detected based on the analysis of the attention state of the user. The opportune time period can be during a breakpoint or an influential context. The breakpoint is when the user has switched between tasks and lacks engagement with the tasks. The influential context is a particular context in which the user is available to attend to the notification. Moreover, the notification can be delivered during the opportune time period.
1. A method of delivering a notification via a user device, comprising: receiving the notification at the user device; responsive to receipt of the notification at the user device, initializing analysis of an attention state of a user by the user device; based on the analysis of the attention state of the user and a type of the notification, detecting an opportune time period to deliver the notification; and delivering the notification via the user device during the opportune time period. 2. The method of claim 1, wherein the analysis of the attention state of the user is performed by the user device subsequent to the receipt of the notification at the user device. 3. The method of claim 1, further comprising: receiving, at the user device, coarse-grained user state information prior to the receipt of the notification at the user device; wherein the analysis of the attention state of the user performed by the user device subsequent to the receipt of the notification is based at least in part on the coarse-grained user state information received prior to the receipt of the notification at the user device. 4. The method of claim 3, wherein the coarse-grained user state information comprises location information of the user. 5. The method of claim 1, further comprising: performing the analysis of the attention state of the user based at least in part on data from an online service. 6. The method of claim 5, wherein the online service comprises at least one of an email service, a calendar service, a social network, or a game. 7. The method of claim 1, further comprising: receiving, at the user device, data specifying device usage of a disparate user device; and performing the analysis of the attention state of the user based at least in part on the data specifying the device usage of the disparate user device. 8. 
The method of claim 1, further comprising: performing the analysis of the attention state of the user, wherein performing the analysis of the attention state of the user further comprises: receiving microphone data via a microphone of the user device; and inferring an activity of the user based on the microphone data; wherein the opportune time period to deliver the notification is detected based on the activity. 9. The method of claim 8, wherein the activity of the user is inferred based on at least one of: whether the microphone data comprises human voice data; or an identity of a person whose voice is present in the microphone data. 10. The method of claim 1, further comprising: performing the analysis of the attention state of the user, wherein performing the analysis of the attention state of the user further comprises: inferring activities of the user based on data from one or more information sources; detecting transitions between the activities; and filtering the transitions between the activities to output a filtered sequence of transitions; wherein the opportune time period to deliver the notification is detected based on the filtered sequence of transitions. 11. The method of claim 10, wherein the opportune time period to deliver the notification is further detected based on a previously learned set of transition sequences. 12. The method of claim 10, wherein the activities of the user are inferred based on at least one of data from a gyroscope of the user device or data from an accelerometer of the user device. 13. The method of claim 1, wherein: the notification is received by the user device from a notification distribution system; the notification is delayed by the notification distribution system; and the notification is received from the notification distribution system as the opportune time period approaches. 14. 
A user device, comprising: at least one processor; and memory that comprises computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to perform acts including: responsive to receipt of a notification at the user device, initializing analysis of an attention state of a user, wherein the analysis of the attention state of the user is performed by the user device subsequent to the receipt of the notification at the user device; based on the analysis of the attention state of the user, detecting an opportune time period to deliver the notification; and delivering the notification via the user device during the opportune time period. 15. The user device of claim 14, wherein the user device is one of a video game console, a television, a household appliance, or a home automation device. 16. The user device of claim 14, wherein the user device is a mobile device. 17. The user device of claim 14, the memory further comprises computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to perform acts including: performing the analysis of the attention state of the user based on data from at least one of a gyroscope or an accelerometer, wherein activities of the user are detected based on the data from the gyroscope or the accelerometer. 18. The user device of claim 14, the opportune time period to deliver the notification further being detected based on a type of the notification. 19. 
A method of controlling distribution of a notification for delivery to a user, comprising: receiving a notification for the user; choosing a user device from a set of user devices of the user for delivery of the notification to the user; detecting an upcoming opportune time period for the user device to deliver the notification, the upcoming opportune time period detected based on data from one or more online services; and sending the notification to the user device as the upcoming opportune time period approaches, wherein sending of the notification to the user device is delayed until the upcoming opportune time period approaches. 20. The method of claim 19, further comprising: detecting the upcoming opportune time period for the user device to deliver the notification based on one or more of preferences collected over a set of training users, preferences of the user learned through behavior monitoring at run time based at least in part on history data that comprises data pertaining to actions of the user responsive to previously received notifications, or rules comprised in a lookup table.
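Claim 10's breakpoint pipeline (infer activities, detect transitions between them, filter the transitions, deliver at a filtered transition) can be sketched as two small functions. The activity labels, the minimum-gap filter, and the threshold value are illustrative assumptions; the claims leave the filtering criterion open.

```python
# Minimal sketch of the claim 10 pipeline: an inferred activity stream
# is scanned for transitions, then transitions that follow too closely
# on the previous one are filtered out, leaving stable breakpoints at
# which a notification could be delivered. min_gap is an assumed knob.

def detect_transitions(activities):
    """Return (index, from_activity, to_activity) for each change."""
    return [(i, activities[i - 1], activities[i])
            for i in range(1, len(activities))
            if activities[i] != activities[i - 1]]

def filter_transitions(transitions, min_gap=2):
    """Keep only transitions at least `min_gap` samples after the last
    kept one, discarding rapid flip-flops that are not real breakpoints."""
    filtered, last_i = [], -min_gap
    for i, a, b in transitions:
        if i - last_i >= min_gap:
            filtered.append((i, a, b))
            last_i = i
    return filtered

stream = ["typing", "typing", "idle", "typing", "idle", "idle", "idle"]
print(filter_transitions(detect_transitions(stream)))
```

Claim 11's refinement would then match the filtered sequence against a previously learned set of transition sequences before declaring the time period opportune.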
2,600
9,984
9,984
12,828,858
2,622
Adapting a user interface of a mobile computing device when the mobile computing device is in a motion state is provided. Upon detecting that a mobile computing device is in motion by utilization of a location or motion determining system, such as a GPS navigation and/or accelerometer system, a motion mode UI may be activated on the device, wherein a display of device functionalities may be simplified by modifying one or more displayed elements of the device user interface.
1. A method for providing a simplified user interface of a mobile computing device while the mobile computing device is in motion, the method comprising: detecting a motion state of the mobile computing device; determining if the motion state of the mobile computing device meets a prescribed threshold; and altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold. 2. The method of claim 1, wherein determining if the motion state of the mobile computing device meets a prescribed threshold includes safeguarding against a false motion state detection. 3. The method of claim 2, wherein detecting a motion state of a mobile computing device and determining if the motion state of the computing device meets a prescribed threshold comprises deriving speed information for the mobile computing device from a location determining system. 4. The method of claim 3, wherein determining if the motion state of the mobile computing device meets a prescribed threshold includes determining whether the mobile computing device is moving at or above a prescribed speed. 5. The method of claim 2, wherein detecting a motion state of a mobile computing device and determining if the motion state of the computing device meets a prescribed threshold comprises deriving acceleration information from an accelerometer. 6. The method of claim 5, wherein determining if the motion state of the mobile computing device meets a prescribed threshold includes determining whether the mobile computing device is moving at or above a prescribed speed. 7. The method of claim 1, wherein detecting a motion state of a mobile computing device and determining if the motion state of the computing device meets a prescribed threshold comprises deriving acceleration information from a network-based location determining system. 8. 
The method of claim 7, wherein determining if the motion state of the mobile computing device meets a prescribed threshold includes determining whether the mobile computing device is moving at or above a prescribed speed. 9. The method of claim 1, wherein altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold includes changing a display of selectable functionalities in the user interface to enhance use of the selectable functionalities when the motion state of the mobile computing device meets the prescribed threshold. 10. The method of claim 9, wherein altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold includes changing at least one of font typography, font size, colors used, overall screen layout, screen brightness, iconography, icon sizing, onscreen visual elements, onscreen control elements, ringer settings, notification settings, manual button functionalities, availability of various phone applications, availability of various phone features, availability of various settings, voice command and control functionalities, or audible feedback. 11. The method of claim 1, further comprising preventing an altering of at least one characteristic of the user interface while the motion state of the mobile computing device meets the prescribed threshold. 12. A system for providing a simplified user interface of a mobile computing device in a vehicle, the system comprising: a motion detection system operative to detect a motion state of a mobile computing device and operative to determine if the motion state of the computing device meets a prescribed threshold; and an application operative to alter at least one characteristic of the mobile computing device user interface while the motion state of the mobile computing device meets the prescribed threshold. 13. 
The system of claim 12, the motion detection system further operative to safeguard against a false motion state detection. 14. The system of claim 13, wherein the system operative to detect a motion of a mobile computing device and determine if the motion state of the computing device meets a prescribed threshold is a location determining system. 15. The system of claim 14, wherein the prescribed threshold comprises a speed threshold. 16. The system of claim 13, wherein the system operative to detect a motion of a mobile computing device and determine if the motion state of the computing device meets a prescribed threshold is an accelerometer. 17. The system of claim 16, wherein the prescribed threshold includes an acceleration threshold. 18. The system of claim 12, wherein the application is further operative to alter one or more functionalities of the mobile computing device including altering at least one of font typography, font size, colors used, overall screen layout, screen brightness, iconography, icon sizing, onscreen visual elements, onscreen control elements, ringer settings, notification settings, manual button functionalities, availability of various phone applications, availability of various phone features, availability of various settings, voice command and control functionalities, or audible feedback. 19. The system of claim 12, wherein the application is further operative to prevent an altering of at least one characteristic of the user interface while the motion state of the mobile computing device meets the prescribed threshold. 20. 
A computer readable medium containing computer executable instructions which when executed by a computer perform a method for providing a simplified user interface of a mobile computing device while the mobile computing device is in motion, the method comprising: detecting a motion state of the mobile computing device; determining if the motion state of the mobile computing device meets a prescribed threshold; and altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold. 21. The computer readable medium of claim 20, wherein detecting a motion state of a mobile computing device and determining if the motion state of the computing device meets a prescribed threshold comprises deriving motion information from a location determining system. 22. The computer readable medium of claim 20, wherein altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold includes changing at least one of font typography, font size, colors used, overall screen layout, screen brightness, iconography, icon sizing, onscreen visual elements, onscreen control elements, ringer settings, notification settings, manual button functionalities, availability of various phone applications, availability of various phone features, availability of various settings, voice command and control functionalities, or audible feedback.
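Claims 1 and 2 (detect a motion state, test it against a prescribed threshold, alter UI characteristics, while safeguarding against a false motion-state detection) can be sketched as follows. The 4.5 m/s threshold, the sample-averaging safeguard, and the specific UI fields altered are all assumptions made for this illustration.

```python
# Hedged sketch of the motion-mode UI method: compare the detected
# speed (and an average of recent samples, to guard against a false
# detection per claim 2) with an assumed threshold, and return a
# simplified UI when the threshold is met.

SPEED_THRESHOLD_MS = 4.5  # assumed "prescribed threshold"

def motion_mode_ui(base_ui: dict, speed_ms: float,
                   speed_samples: list[float]) -> dict:
    """Return UI settings, simplified only when motion reliably meets the threshold."""
    avg = sum(speed_samples) / len(speed_samples) if speed_samples else speed_ms
    if min(speed_ms, avg) < SPEED_THRESHOLD_MS:
        return base_ui  # not (reliably) in motion: leave the UI unchanged
    ui = dict(base_ui)
    ui.update(font_size=ui["font_size"] * 2,   # larger, easier targets
              icon_size=ui["icon_size"] * 2,
              voice_control=True)              # hands-free operation
    return ui

ui = {"font_size": 12, "icon_size": 24, "voice_control": False}
print(motion_mode_ui(ui, 6.0, [5.8, 6.1, 6.2]))
```

Requiring both the instantaneous reading and the recent average to exceed the threshold is one simple way to realize the claim 2 safeguard; a production system might instead use hysteresis or a dwell timer.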
The system of claim 12, wherein the application is further operative to prevent an altering of at least one characteristic of the user interface while the motion state of the mobile computing device meets the prescribed threshold. 20. A computer readable medium containing computer executable instructions which when executed by a computer perform a method for providing a simplified user interface of a mobile computing device while the mobile computing device is in motion, the method comprising: detecting a motion state of the mobile computing device; determining if the motion state of the mobile computing device meets a prescribed threshold; and altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold. 21. The computer readable medium of claim 20, wherein detecting a motion state of a mobile computing device and determining if the motion state of the computing device meets a prescribed threshold comprises deriving motion information from a location determining system. 22. The computer readable medium of claim 20, wherein altering one or more characteristics of the user interface if the motion state of the mobile computing device meets the prescribed threshold includes changing at least one of font typography, font size, colors used, overall screen layout, screen brightness, iconography, icon sizing, onscreen visual elements, onscreen control elements, ringer settings, notification settings, manual button functionalities, availability of various phone applications, availability of various phone features, availability of various settings, voice command and control functionalities, or audible feedback.
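The claims above describe detecting a motion state, testing it against a prescribed threshold (with a safeguard against false detection), and simplifying the UI when the threshold is met. A minimal sketch of that flow follows; all names and threshold values (`MotionState`, `SPEED_THRESHOLD_MPS`, the specific UI settings) are illustrative assumptions, not details from the patent text.

```python
from dataclasses import dataclass

SPEED_THRESHOLD_MPS = 2.0  # assumed value for the prescribed speed threshold


@dataclass
class MotionState:
    speed_mps: float      # e.g. derived from a location determining system (GPS)
    acceleration: float   # e.g. derived from an accelerometer


def meets_threshold(state: MotionState) -> bool:
    """Determine if the motion state meets the prescribed threshold.

    Requiring both signals to agree is one plausible way to safeguard
    against a false motion-state detection, as claim 2 suggests.
    """
    return state.speed_mps >= SPEED_THRESHOLD_MPS and state.acceleration > 0.0


def alter_ui(ui: dict, state: MotionState) -> dict:
    """Return a simplified copy of the UI settings when the device is in motion."""
    if not meets_threshold(state):
        return dict(ui)  # unchanged when the threshold is not met
    simplified = dict(ui)
    simplified["font_size"] = max(ui["font_size"], 24)   # larger typography
    simplified["icon_size"] = max(ui["icon_size"], 64)   # larger iconography
    simplified["voice_control"] = True                   # enable voice command
    return simplified


ui = {"font_size": 12, "icon_size": 32, "voice_control": False}
print(alter_ui(ui, MotionState(speed_mps=10.0, acceleration=0.5)))
print(alter_ui(ui, MotionState(speed_mps=0.0, acceleration=0.0)))
```

In practice the two signals would come from the platform's location and sensor APIs; the dictionary of settings stands in for whatever UI configuration object the device actually exposes.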
2,600
9,985
9,985
15,316,103
2,616
The invention is related to a method of presenting a digital information related to a real object, comprising determining a real object, providing a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one of a virtual reality mode and an audio mode, providing at least one representation of a digital information related to the real object, determining a spatial relationship between a camera and a reference coordinate system under consideration of an image captured by the camera, selecting a presentation mode from the plurality of presentation modes according to the spatial relationship, and presenting the at least one representation of the digital information using the selected presentation mode.
1-29. (canceled) 30. A method of presenting a digital information related to a real object, comprising: determining a spatial relationship between a camera and a real object, obtaining a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one alternative mode, obtaining at least one representation of a digital information related to the real object, selecting a presentation mode from the plurality of presentation modes according to the spatial relationship, presenting the at least one representation of the digital information using the selected presentation mode. 31. The method of claim 30, wherein the at least one alternative mode comprises a virtual reality mode and an audio mode. 32. The method according to claim 31, wherein: the augmented reality mode visually blends in the at least one representation of the digital information on a display device in a live view of the real object according to at least part of a spatial relationship between the camera or human eye, respectively, and the real object, the virtual reality mode visually presents the at least one representation of the digital information and a representation of the real object on a display device, and the audio mode generates a sound according to the at least one representation of the digital information. 33. The method according to claim 30, wherein the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining whether at least part of the real object is within the field of view of the camera, and in response to determining that the at least part of the real object is within the field of view of the camera, selecting the augmented reality mode as the presentation mode. 34. 
The method according to claim 30, wherein the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, in response to determining that the spatial relationship indicates that a distance between the camera and the real object is below a threshold, selecting the augmented reality mode as the presentation mode. 35. The method of claim 30, wherein the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, in response to determining that the spatial relationship indicates that a distance between the camera and the real object is not below a threshold, selecting one of the one or more alternative modes. 36. The method according to claim 35, wherein the selecting one of the one or more alternative modes comprises: determining an orientation of the camera with respect to a gravity direction, selecting one of a virtual reality mode and an audio mode as the presentation mode according to the orientation of the camera. 37. A computer readable medium comprising computer readable code executable by one or more processors to: determine a spatial relationship between a camera and a real object, obtain a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one alternative mode, obtain at least one representation of a digital information related to the real object, select a presentation mode from the plurality of presentation modes according to the spatial relationship, present the at least one representation of the digital information using the selected presentation mode. 38. 
The computer readable medium of claim 37, wherein the at least one alternative mode comprises a virtual reality mode and an audio mode. 39. The computer readable medium of claim 38, wherein: the augmented reality mode visually blends in the at least one representation of the digital information on a display device in a live view of the real object according to at least part of a spatial relationship between the camera or human eye, respectively, and the real object, the virtual reality mode visually presents the at least one representation of the digital information and a representation of the real object on a display device, and the audio mode generates a sound according to the at least one representation of the digital information. 40. The computer readable medium of claim 37, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether at least part of the real object is within the field of view of the camera, and in response to determining that the at least part of the real object is within the field of view of the camera, select the augmented reality mode as the presentation mode. 41. The computer readable medium of claim 37, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is below a threshold, select the augmented reality mode as the presentation mode. 42. 
The computer readable medium of claim 37, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is not below a threshold, select one of the one or more alternative modes. 43. The computer readable medium of claim 42, wherein the computer readable code to select one of the one or more alternative modes comprises computer readable code to: determine an orientation of the camera with respect to a gravity direction, and select one of a virtual reality mode and an audio mode as the presentation mode according to the orientation of the camera. 44. A system for presenting a digital information related to a real object, comprising: one or more processors; and a memory coupled to the one or more processors and comprising computer readable code executable by the one or more processors to cause the system to: determine a spatial relationship between a camera and a real object, obtain a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one alternative mode, obtain at least one representation of a digital information related to the real object, select a presentation mode from the plurality of presentation modes according to the spatial relationship, present the at least one representation of the digital information using the selected presentation mode. 45. 
The system of claim 44, wherein: the at least one alternative mode comprises a virtual reality mode and an audio mode, the augmented reality mode visually blends in the at least one representation of the digital information on a display device in a live view of the real object according to at least part of a spatial relationship between the camera or human eye, respectively, and the real object, the virtual reality mode visually presents the at least one representation of the digital information and a representation of the real object on a display device, and the audio mode generates a sound according to the at least one representation of the digital information. 46. The system of claim 44, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether at least part of the real object is within the field of view of the camera, and in response to determining that the at least part of the real object is within the field of view of the camera, select the augmented reality mode as the presentation mode. 47. The system of claim 44, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is below a threshold, select the augmented reality mode as the presentation mode. 48. 
The system of claim 44, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is not below a threshold, select one of the one or more alternative modes. 49. The system of claim 48, wherein the computer readable code to select one of the one or more alternative modes comprises computer readable code to: determine an orientation of the camera with respect to a gravity direction, and select one of a virtual reality mode and an audio mode as the presentation mode according to the orientation of the camera.
The invention is related to a method of presenting a digital information related to a real object, comprising determining a real object, providing a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one of a virtual reality mode and an audio mode, providing at least one representation of a digital information related to the real object, determining a spatial relationship between a camera and a reference coordinate system under consideration of an image captured by the camera, selecting a presentation mode from the plurality of presentation modes according to the spatial relationship, and presenting the at least one representation of the digital information using the selected presentation mode.

1-29. (canceled) 30. A method of presenting a digital information related to a real object, comprising: determining a spatial relationship between a camera and a real object, obtaining a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one alternative mode, obtaining at least one representation of a digital information related to the real object, selecting a presentation mode from the plurality of presentation modes according to the spatial relationship, presenting the at least one representation of the digital information using the selected presentation mode. 31. The method of claim 30, wherein the at least one alternative mode comprises a virtual reality mode and an audio mode. 32. 
The method according to claim 31, wherein: the augmented reality mode visually blends in the at least one representation of the digital information on a display device in a live view of the real object according to at least part of a spatial relationship between the camera or human eye, respectively, and the real object, the virtual reality mode visually presents the at least one representation of the digital information and a representation of the real object on a display device, and the audio mode generates a sound according to the at least one representation of the digital information. 33. The method according to claim 30, wherein the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining whether at least part of the real object is within the field of view of the camera, and in response to determining that the at least part of the real object is within the field of view of the camera, selecting the augmented reality mode as the presentation mode. 34. The method according to claim 30, wherein the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, in response to determining that the spatial relationship indicates that a distance between the camera and the real object is below a threshold, selecting the augmented reality mode as the presentation mode. 35. 
The method of claim 30, wherein the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, in response to determining that the spatial relationship indicates that a distance between the camera and the real object is not below a threshold, selecting one of the one or more alternative modes. 36. The method according to claim 35, wherein the selecting one of the one or more alternative modes comprises: determining an orientation of the camera with respect to a gravity direction, selecting one of a virtual reality mode and an audio mode as the presentation mode according to the orientation of the camera. 37. A computer readable medium comprising computer readable code executable by one or more processors to: determine a spatial relationship between a camera and a real object, obtain a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one alternative mode, obtain at least one representation of a digital information related to the real object, select a presentation mode from the plurality of presentation modes according to the spatial relationship, present the at least one representation of the digital information using the selected presentation mode. 38. The computer readable medium of claim 37, wherein the at least one alternative mode comprises a virtual reality mode and an audio mode. 39. 
The computer readable medium of claim 38, wherein: the augmented reality mode visually blends in the at least one representation of the digital information on a display device in a live view of the real object according to at least part of a spatial relationship between the camera or human eye, respectively, and the real object, the virtual reality mode visually presents the at least one representation of the digital information and a representation of the real object on a display device, and the audio mode generates a sound according to the at least one representation of the digital information. 40. The computer readable medium of claim 37, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether at least part of the real object is within the field of view of the camera, and in response to determining that the at least part of the real object is within the field of view of the camera, select the augmented reality mode as the presentation mode. 41. The computer readable medium of claim 37, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is below a threshold, select the augmented reality mode as the presentation mode. 42. 
The computer readable medium of claim 37, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is not below a threshold, select one of the one or more alternative modes. 43. The computer readable medium of claim 42, wherein the computer readable code to select one of the one or more alternative modes comprises computer readable code to: determine an orientation of the camera with respect to a gravity direction, and select one of a virtual reality mode and an audio mode as the presentation mode according to the orientation of the camera. 44. A system for presenting a digital information related to a real object, comprising: one or more processors; and a memory coupled to the one or more processors and comprising computer readable code executable by the one or more processors to cause the system to: determine a spatial relationship between a camera and a real object, obtain a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one alternative mode, obtain at least one representation of a digital information related to the real object, select a presentation mode from the plurality of presentation modes according to the spatial relationship, present the at least one representation of the digital information using the selected presentation mode. 45. 
The system of claim 44, wherein: the at least one alternative mode comprises a virtual reality mode and an audio mode, the augmented reality mode visually blends in the at least one representation of the digital information on a display device in a live view of the real object according to at least part of a spatial relationship between the camera or human eye, respectively, and the real object, the virtual reality mode visually presents the at least one representation of the digital information and a representation of the real object on a display device, and the audio mode generates a sound according to the at least one representation of the digital information. 46. The system of claim 44, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether at least part of the real object is within the field of view of the camera, and in response to determining that the at least part of the real object is within the field of view of the camera, select the augmented reality mode as the presentation mode. 47. The system of claim 44, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is below a threshold, select the augmented reality mode as the presentation mode. 48. 
The system of claim 44, wherein the computer readable code to select a presentation mode from the plurality of presentation modes according to the spatial relationship comprises computer readable code to: determine whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold, and in response to determining that the spatial relationship indicates that a distance between the camera and the real object is not below a threshold, select one of the one or more alternative modes. 49. The system of claim 48, wherein the computer readable code to select one of the one or more alternative modes comprises computer readable code to: determine an orientation of the camera with respect to a gravity direction, and select one of a virtual reality mode and an audio mode as the presentation mode according to the orientation of the camera.
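Claims 33 through 36 above lay out a concrete decision procedure: select the augmented reality mode when the object is in the camera's field of view or within a distance threshold, and otherwise pick between virtual reality and audio based on the camera's orientation with respect to the gravity direction. A hedged sketch of that selection logic follows; the function name, threshold values, and the tilt convention are all assumptions made for illustration.

```python
DISTANCE_THRESHOLD_M = 5.0   # assumed distance threshold between camera and object
LEVEL_TILT_DEG = 60.0        # assumed cutoff separating "level" from "pointed down"


def select_mode(object_in_fov: bool, distance_m: float, camera_tilt_deg: float) -> str:
    """Select a presentation mode from the spatial relationship.

    camera_tilt_deg is the angle between the camera's viewing direction and
    the gravity direction: 0 means pointing straight down, 90 means level.
    """
    # Claims 33-34: object visible or close enough -> augmented reality.
    if object_in_fov or distance_m < DISTANCE_THRESHOLD_M:
        return "augmented_reality"
    # Claim 36: choose among the alternative modes by camera orientation.
    # A roughly level camera suggests the user is looking at the display,
    # so present the virtual reality mode; otherwise fall back to audio.
    if camera_tilt_deg >= LEVEL_TILT_DEG:
        return "virtual_reality"
    return "audio"


print(select_mode(True, 20.0, 10.0))    # object in field of view
print(select_mode(False, 2.0, 10.0))    # object close by
print(select_mode(False, 20.0, 80.0))   # far away, camera held level
print(select_mode(False, 20.0, 10.0))   # far away, camera pointed down
```

The patent leaves the actual pose-estimation step open; here the field-of-view test, distance, and tilt are taken as already-computed inputs rather than derived from a camera image.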
2,600
9,986
9,986
15,152,380
2,656
Methods and systems are provided for creating a calendar event using context. A natural language expression including at least one of words, terms, and phrases of text may be received at a calendar event creation module from an application. The calendar event creation module may identify one or more slots in the text of the natural language expression related to the calendar event using a first grammar module and a second grammar module. The one or more slots identified by the first grammar module and the second grammar module that indicate a calendar event may be compared to determine whether there is a match between the one or more identified slots. If a match is found, at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module may be created.
1. A method for creating a calendar event, the method comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; identifying one or more slots in the text of the natural language expression that indicate a calendar event using a first grammar module and a second grammar module; comparing the one or more slots identified by the first grammar module that indicate a calendar event with the one or more slots identified by the second grammar module that indicate a calendar event to determine whether the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match; and when the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match, creating at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module. 2. The method of claim 1, wherein the one or more slots related to a calendar event include at least one of a date, time, date/time, subject, duration and location type slot. 3. The method of claim 1, wherein the first grammar module and the second grammar module are context-independent grammar modules, wherein the first grammar module is an intent grammar module, and wherein the second grammar module is a slot grammar module. 4. The method of claim 1, wherein identifying one or more slots in the text of the natural language expression comprises executing a parsing algorithm. 5. The method of claim 2, wherein the first grammar module and the second grammar module have defined rules associated with the one or more types of slots. 6. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules for context surrounding ambiguous words, terms, and phrases in the text. 7. 
The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules for context surrounding identified slots indicating a calendar event that negates the slots indicating a calendar event. 8. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules that indicate a past event. 9. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules that indicate a negation of an event. 10. The method of claim 5, wherein identifying one or more slots in the natural language expression further comprises tagging each word, term, and phrase in the natural language expression that is found in the defined rules as the slot type associated with the defined rule where the word, term, or phrase is found. 11. The method of claim 2, further comprising inferring information from the one or more identified slots based on context derived from the text of the natural language expression. 12. The method of claim 2, further comprising linking one or more identified slots of the same type together when more than one slot of the same type is identified in the text of the natural language expression. 13. The method of claim 1, further comprising: presenting, on a display, a visual indicator associated with the at least one calendar event; receiving an indication that the visual indicator has been invoked; and displaying a proposed calendar event with auto-filled data from the one or more slots identified in the text of the natural language expression. 14. 
A computer storage device, having computer-executable instructions that, when executed by at least one processor, perform a method for creating a calendar event, the method comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; identifying one or more slots in the text of the natural language expression that indicate a calendar event using a first grammar module and a second grammar module; comparing the one or more slots identified by the first grammar module that indicate a calendar event with the one or more slots identified by the second grammar module that indicate a calendar event to determine whether the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match; and when the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match, creating at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module. 15. The computer storage device of claim 14, wherein the one or more slots related to a calendar event include at least one of a date, time, date/time, subject, duration and location type slot. 16. The method of claim 15, wherein the first grammar module and the second grammar module have defined rules associated with the one or more types of slots. 17. The method of claim 15, wherein the defined rules associated with the one or more types of slots include rules for context surrounding ambiguous words, terms, and phrases in the text. 18. The method of claim 15, wherein the defined rules associated with the one or more types of slots include rules for context surrounding identified slots indicating a calendar event that negates the slots indicating a calendar event. 19. 
The computer storage device of claim 15, the method further comprising inferring information from the one or more identified slots based on context derived from the text of the natural language expression. 20. A system comprising: at least one processor; and memory encoding computer executable instructions that, when executed by at least one processor, perform a method for creating a calendar event, the method comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; identifying one or more slots in the text of the natural language expression that indicate a calendar event using a first grammar module and a second grammar module; comparing the one or more slots identified by the first grammar module that indicate a calendar event with the one or more slots identified by the second grammar module that indicate a calendar event to determine whether the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match; when the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match, creating at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module; presenting, on a display, a visual indicator associated with the at least one calendar event; receiving an indication that the visual indicator has been invoked; and displaying a proposed calendar event with auto-filled data from the one or more slots identified in the text of the natural language expression.
Methods and systems are provided for creating a calendar event using context. A natural language expression including at least one of words, terms, and phrases of text may be received at a calendar event creation module from an application. The calendar event creation module may identify one or more slots in the text of the natural language expression related to the calendar event using a first grammar module and a second grammar module. The one or more slots identified by the first grammar module and the second grammar module that indicate a calendar event may be compared to determine whether there is a match between the one or more identified slots. If a match is found, at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module may be created.1. A method for creating a calendar event, the method comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; identifying one or more slots in the text of the natural language expression that indicate a calendar event using a first grammar module and a second grammar module; comparing the one or more slots identified by the first grammar module that indicate a calendar event with the one or more slots identified by the second grammar module that indicate a calendar event to determine whether the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match; and when the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match, creating at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module. 2. 
The method of claim 1, wherein the one or more slots related to a calendar event include at least one of a date, time, date/time, subject, duration and location type slot. 3. The method of claim 1, wherein the first grammar module and the second grammar module are context-independent grammar modules, wherein the first grammar module is an intent grammar module, and wherein the second grammar module is a slot grammar module. 4. The method of claim 1, wherein identifying one or more slots in the text of the natural language expression comprises executing a parsing algorithm. 5. The method of claim 2, wherein the first grammar module and the second grammar module have defined rules associated with the one or more types of slots. 6. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules for context surrounding ambiguous words, terms, and phrases in the text. 7. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules for context surrounding identified slots indicating a calendar event that negates the slots indicating a calendar event. 8. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules that indicate a past event. 9. The method of claim 5, wherein the defined rules associated with the one or more types of slots include rules that indicate a negation of an event. 10. The method of claim 5, wherein identifying one or more slots in the natural language expression further comprises tagging each word, term, and phrase in the natural language expression that is found in the defined rules as the slot type associated with the defined rule where the word, term, or phrase is found. 11. The method of claim 2, further comprising inferring information from the one or more identified slots based on context derived from the text of the natural language expression. 12. 
The method of claim 2, further comprising linking one or more identified slots of the same type together when more than one slot of the same type is identified in the text of the natural language expression. 13. The method of claim 1, further comprising: presenting, on a display, a visual indicator associated with the at least one calendar event; receiving an indication that the visual indicator has been invoked; and displaying a proposed calendar event with auto-filled data from the one or more slots identified in the text of the natural language expression. 14. A computer storage device, having computer-executable instructions that, when executed by at least one processor, perform a method for creating a calendar event, the method comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; identifying one or more slots in the text of the natural language expression that indicate a calendar event using a first grammar module and a second grammar module; comparing the one or more slots identified by the first grammar module that indicate a calendar event with the one or more slots identified by the second grammar module that indicate a calendar event to determine whether the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match; and when the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match, creating at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module. 15. The computer storage device of claim 14, wherein the one or more slots related to a calendar event include at least one of a date, time, date/time, subject, duration and location type slot. 16. 
The method of claim 15, wherein the first grammar module and the second grammar module have defined rules associated with the one or more types of slots. 17. The method of claim 15, wherein the defined rules associated with the one or more types of slots include rules for context surrounding ambiguous words, terms, and phrases in the text. 18. The method of claim 15, wherein the defined rules associated with the one or more types of slots include rules for context surrounding identified slots indicating a calendar event that negates the slots indicating a calendar event. 19. The computer storage device of claim 15, the method further comprising inferring information from the one or more identified slots based on context derived from the text of the natural language expression. 20. A system comprising: at least one processor; and memory encoding computer executable instructions that, when executed by at least one processor, perform a method for creating a calendar event, the method comprising: receiving a natural language expression, wherein the natural language expression includes at least one of words, terms, and phrases of text; identifying one or more slots in the text of the natural language expression that indicate a calendar event using a first grammar module and a second grammar module; comparing the one or more slots identified by the first grammar module that indicate a calendar event with the one or more slots identified by the second grammar module that indicate a calendar event to determine whether the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match; when the one or more slots identified by the first grammar module and the one or more slots identified by the second grammar module match, creating at least one calendar event using the one or more slots identified by the first grammar module and the second grammar module; presenting, on a display, a visual indicator associated 
with the at least one calendar event; receiving an indication that the visual indicator has been invoked; and displaying a proposed calendar event with auto-filled data from the one or more slots identified in the text of the natural language expression.
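The record above claims slot identification by two grammar modules, a comparison of their results, and event creation only on a match. A minimal sketch of that flow is below; the rule patterns, slot names, and the use of identical regex rule sets for both modules are illustrative assumptions, not the patented implementation.

```python
import re

# Hypothetical rule sets: words/phrases that, per the claims, would be
# tagged with the slot type of the rule in which they are found.
DATE_RULES = [r"\btomorrow\b", r"\bmonday\b", r"\btuesday\b"]
TIME_RULES = [r"\b\d{1,2}(:\d{2})?\s?(am|pm)\b"]

def grammar_module(text, rules_by_slot):
    """Tag each word/phrase found in the defined rules with its slot type."""
    slots = {}
    for slot_type, patterns in rules_by_slot.items():
        for pat in patterns:
            m = re.search(pat, text, re.IGNORECASE)
            if m:
                slots.setdefault(slot_type, m.group(0).lower())
    return slots

def create_event(text):
    # First and second grammar modules; identical rules here for brevity.
    rules = {"date": DATE_RULES, "time": TIME_RULES}
    first = grammar_module(text, rules)
    second = grammar_module(text, rules)
    # Create the event only when the slots identified by both modules match.
    if first == second and first:
        return {"event": True, **first}
    return None

event = create_event("Lunch tomorrow at 12 pm?")
# event -> {"event": True, "date": "tomorrow", "time": "12 pm"}
```

In the claims the two modules are distinct (an intent grammar and a slot grammar); the comparison step is the same either way.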
2,600
9,987
9,987
15,602,160
2,664
A near-field device, including: a near-field receiver coupled to a near-field receiver antenna and a decoder circuit; wherein the near-field receiver antenna is configured to be capacitively coupled at a first location on a conductive structure; wherein the near-field receiver antenna is configured to receive a near-field signal from the conductive structure through the receiver's capacitive coupling; and wherein the decoder circuit is configured to detect variations in the near-field signal.
1. A near-field device, comprising: a near-field receiver coupled to a near-field receiver antenna and a decoder circuit; wherein the near-field receiver antenna is configured to be capacitively coupled at a first location on a conductive structure and receive a non-propagating quasi-static magnetic near-field signal and a non-propagating quasi-static electric near-field signal from the conductive structure through the receiver's capacitive coupling, wherein the decoder circuit is configured to detect variations in the magnetic and electric near-field signals. 2. The near-field device of claim 1, further comprising: another near-field device including a near-field transmitter coupled to a near-field transmitter antenna and an encoder circuit; wherein the near-field transmitter antenna is configured to be capacitively coupled at a second location on the conductive structure, the encoder circuit is configured to generate the non-propagating quasi-static magnetic and electric near-field signals, and the near-field transmitter antenna is configured to transmit the non-propagating quasi-static magnetic and electric near-field signals to the conductive structure through the transmitter's capacitive coupling. 3. The near-field device of claim 2, wherein the near-field transmitter antenna is separated from the near-field receiver antenna by greater than two centimeters. 4. The device of claim 1: wherein the near-field receiver antenna is configured to be capacitively coupled to the conductive structure using an air-gap between the conductive structure and an outside of the near-field device. 5. The device of claim 1: wherein the near-field signal includes a carrier frequency that is ≤40 MHz. 6. The device of claim 1: wherein the conductive structure includes at least one of: a pipe, a planar surface, a building girder, a vehicle chassis, a container, a metal box, a medicine bottle, a food package, a wire, or a tube. 7. 
The device of claim 1: wherein the conductive structure includes a conductive material including at least one of: iron, copper, or carbon. 8. The device of claim 1: wherein the conductive structure is a container having an inside and an outside; wherein the container blocks RF radiation from entering the inside; and wherein the container passes the near-field signal to the inside. 9. The device of claim 8: wherein the container further includes an opening; wherein the container passes the near-field signal to the inside through the opening; and wherein the opening does not pass RF radiation. 10. The device of claim 2: wherein the conductive structure is a container having an inside and an outside; wherein the container blocks RF radiation from entering the inside; wherein the container passes the near-field signal to the inside; and wherein one of the near-field devices is outside of the container and one of the near-field devices is inside of the container. 11. The near-field device of claim 1, wherein the decoder circuit includes a degradation detector configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as a structural degradation in the conductive structure. 12. The near-field device of claim 11, wherein the structural degradation includes at least one of: a crack, a break, a bend, application of a coating to the conductive structure, a discontinuity, or an abnormal change in conductivity. 13. The near-field device of claim 11, wherein the degradation detector is configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as indicating the structural degradation after there is a difference between a previously received magnetic near-field signal and a currently received magnetic near-field signal. 14. 
The near-field device of claim 11, wherein the degradation detector is configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as indicating the structural degradation after there is a difference between a previously received electric near-field signal and a currently received electric near-field signal. 15. The near-field device of claim 1, wherein the decoder circuit includes a communications circuit configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as a communications signal transmitted by another near-field device. 16. A method of processing a near-field signal, comprising: capacitively coupling a near-field device at a first location on a conductive structure; receiving a non-propagating quasi-static magnetic near-field signal and a non-propagating quasi-static electric near-field signal at the near-field device from the conductive structure through the capacitive coupling; and detecting variations in the non-propagating quasi-static magnetic and electric near-field signals. 17. The method of claim 16, further comprising: capacitively coupling another near-field device at a second location on the conductive structure; and transmitting the non-propagating quasi-static magnetic and electric near-field signals at the another near-field device to the conductive structure through the capacitive coupling of the another near-field device. 18. The method of claim 16, further comprising: interpreting the variations in the near-field signal as a structural degradation in the conductive structure. 19. The method of claim 16, further comprising: interpreting the variations in the magnetic and electric near-field signals as a communications signal transmitted by another near-field device. 20. 
The near-field device of claim 1, wherein the non-propagating quasi-static magnetic near-field signal has a magnetic field vector curved around the conductive structure and the non-propagating quasi-static electric near-field signal has an electric field vector perpendicular to the conductive structure.
A near-field device, including: a near-field receiver coupled to a near-field receiver antenna and a decoder circuit; wherein the near-field receiver antenna is configured to be capacitively coupled at a first location on a conductive structure; wherein the near-field receiver antenna is configured to receive a near-field signal from the conductive structure through the receiver's capacitive coupling; and wherein the decoder circuit is configured to detect variations in the near-field signal.1. A near-field device, comprising: a near-field receiver coupled to a near-field receiver antenna and a decoder circuit; wherein the near-field receiver antenna is configured to be capacitively coupled at a first location on a conductive structure and receive a non-propagating quasi-static magnetic near-field signal and a non-propagating quasi-static electric near-field signal from the conductive structure through the receiver's capacitive coupling, wherein the decoder circuit is configured to detect variations in the magnetic and electric near-field signals. 2. The near-field device of claim 1, further comprising: another near-field device including a near-field transmitter coupled to a near-field transmitter antenna and an encoder circuit; wherein the near-field transmitter antenna is configured to be capacitively coupled at a second location on the conductive structure, the encoder circuit is configured to generate the non-propagating quasi-static magnetic and electric near-field signals, and the near-field transmitter antenna is configured to transmit the non-propagating quasi-static magnetic and electric near-field signals to the conductive structure through the transmitter's capacitive coupling. 3. The near-field device of claim 2, wherein the near-field transmitter antenna is separated from the near-field receiver antenna by greater than two centimeters. 4. 
The device of claim 1: wherein the near-field receiver antenna is configured to be capacitively coupled to the conductive structure using an air-gap between the conductive structure and an outside of the near-field device. 5. The device of claim 1: wherein the near-field signal includes a carrier frequency that is ≤40 MHz. 6. The device of claim 1: wherein the conductive structure includes at least one of: a pipe, a planar surface, a building girder, a vehicle chassis, a container, a metal box, a medicine bottle, a food package, a wire, or a tube. 7. The device of claim 1: wherein the conductive structure includes a conductive material including at least one of: iron, copper, or carbon. 8. The device of claim 1: wherein the conductive structure is a container having an inside and an outside; wherein the container blocks RF radiation from entering the inside; and wherein the container passes the near-field signal to the inside. 9. The device of claim 8: wherein the container further includes an opening; wherein the container passes the near-field signal to the inside through the opening; and wherein the opening does not pass RF radiation. 10. The device of claim 2: wherein the conductive structure is a container having an inside and an outside; wherein the container blocks RF radiation from entering the inside; wherein the container passes the near-field signal to the inside; and wherein one of the near-field devices is outside of the container and one of the near-field devices is inside of the container. 11. The near-field device of claim 1, wherein the decoder circuit includes a degradation detector configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as a structural degradation in the conductive structure. 12. 
The near-field device of claim 11, wherein the structural degradation includes at least one of: a crack, a break, a bend, application of a coating to the conductive structure, a discontinuity, or an abnormal change in conductivity. 13. The near-field device of claim 11, wherein the degradation detector is configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as indicating the structural degradation after there is a difference between a previously received magnetic near-field signal and a currently received magnetic near-field signal. 14. The near-field device of claim 11, wherein the degradation detector is configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as indicating the structural degradation after there is a difference between a previously received electric near-field signal and a currently received electric near-field signal. 15. The near-field device of claim 1, wherein the decoder circuit includes a communications circuit configured to interpret the variations in the non-propagating quasi-static magnetic and electric near-field signals as a communications signal transmitted by another near-field device. 16. A method of processing a near-field signal, comprising: capacitively coupling a near-field device at a first location on a conductive structure; receiving a non-propagating quasi-static magnetic near-field signal and a non-propagating quasi-static electric near-field signal at the near-field device from the conductive structure through the capacitive coupling; and detecting variations in the non-propagating quasi-static magnetic and electric near-field signals. 17. 
The method of claim 16, further comprising: capacitively coupling another near-field device at a second location on the conductive structure; and transmitting the non-propagating quasi-static magnetic and electric near-field signals at the another near-field device to the conductive structure through the capacitive coupling of the another near-field device. 18. The method of claim 16, further comprising: interpreting the variations in the near-field signal as a structural degradation in the conductive structure. 19. The method of claim 16, further comprising: interpreting the variations in the magnetic and electric near-field signals as a communications signal transmitted by another near-field device. 20. The near-field device of claim 1, wherein the non-propagating quasi-static magnetic near-field signal has a magnetic field vector curved around the conductive structure and the non-propagating quasi-static electric near-field signal has an electric field vector perpendicular to the conductive structure.
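The near-field record's degradation detector interprets a difference between a previously received near-field signal and the currently received one as structural degradation (claims 13 and 14). A minimal sketch of that compare-against-baseline idea follows; the amplitude-sample representation and the threshold value are assumptions for illustration, not details from the claims.

```python
# Illustrative degradation check: flag structural degradation when the
# current near-field signal differs from a previously received baseline
# by more than a threshold. Signal format and threshold are assumed.

def detect_degradation(previous, current, threshold=0.1):
    """Return True when any sample differs from baseline by > threshold."""
    if len(previous) != len(current):
        raise ValueError("signal sample counts must match")
    diff = max(abs(p - c) for p, c in zip(previous, current))
    return diff > threshold

baseline = [1.0, 0.9, 1.1, 1.0]   # previously received signal amplitudes
cracked  = [1.0, 0.5, 1.1, 1.0]   # amplitude dip, e.g. at a crack

ok = detect_degradation(baseline, baseline)   # no change -> False
bad = detect_degradation(baseline, cracked)   # dip exceeds threshold -> True
```

In the claimed device this comparison would run separately on the magnetic and electric near-field signals, each against its own previously received copy.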
2,600
9,988
9,988
12,890,888
2,652
A device for obtaining, storing and displaying information from a remote server, the device has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information.
1. A telephone set operative for making and receiving telephone calls via, and for being powered from, a cable, the cable being connected for concurrently carrying digital data including digital video data and a DC power signal, said telephone set being further operative for displaying the digital video data, and said telephone set comprising: a connector for connecting to the cable; a transceiver coupled to said connector for transmitting digital data to, and receiving digital data from, the cable; a video display coupled to said transceiver for visually displaying the digital video data; firmware and a processor for executing said firmware, said processor being coupled to control at least said transceiver and said video display; and a single enclosure housing said connector, said processor, said transceiver and said video display; wherein the telephone set is addressable in a Local Area Network (LAN), and said telephone set is at least in part powered from the DC power signal carried over the cable. 2. The telephone set according to claim 1, wherein said telephone set is further operative for storing at least part of the digital data, and said telephone set further comprises a first memory coupled to said transceiver for storing at least part of the digital data received by said transceiver. 3. The telephone set according to claim 2, wherein said telephone set is further operative for storing at least part of the digital video data, and said first memory is coupled to said transceiver for storing at least part of the digital video data received by said transceiver. 4. The telephone set according to claim 3, wherein said telephone set is further operative to display the stored digital video data, and said video display is coupled to said first memory for displaying the digital video data stored in said first memory. 5. 
The telephone set according to claim 1, wherein said single enclosure is dimensioned and has an appearance of a conventional flat, wall-mountable framed picture. 6. The telephone set according to claim 1, wherein said single enclosure is dimensioned and has an appearance of a conventional telephone set. 7. The telephone set according to claim 1, wherein said telephone set is operative for communicating with a data unit via the cable. 8. The telephone set according to claim 7, wherein said telephone set is further operative for automatically and periodically communicating with the data unit at all times when said telephone set is in operation. 9. The telephone set according to claim 7, wherein said transceiver is a modem and the data unit is a personal computer. 10. The telephone set according to claim 7, wherein: the cable extends outside a building and is part of a Wide Area Network (WAN); the data unit is a first remote information server outside the building; and the telephone set is connected to the data unit via the Internet. 11. The telephone set according to claim 10, wherein the first remote information server is organized as a website including web pages as part of the World Wide Web (WWW), and is further identified by said telephone set using the website Uniform Resource Locator (URL). 12. The telephone set according to claim 10, wherein said telephone set is operative for communicating with a second remote information server via the Internet for receiving information from the second remote information server, and for storing and displaying the information received from the second remote information server. 13. The telephone set according to claim 12, wherein said telephone set is adapted to communicate with the first and second remote information servers for receiving selected and distinct information from each remote information server. 14. 
The telephone set according to claim 13, wherein said telephone set communicates with the first and second remote servers one at a time. 15. The telephone set according to claim 10, wherein communication with the first remote information server is based on Internet protocol suite. 16. The telephone set according to claim 15, wherein communication with the first remote information server is based on TCP/IP. 17. The telephone set according to claim 10, wherein said telephone set is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 18. The telephone set according to claim 17, wherein the pre-set time of day is at least one of: set by the user; set previously in the telephone set; and set by the remote information server in a previous communication session. 19. The telephone set according to claim 10, wherein information received from said first remote information server is publicly available at no cost. 20. The telephone set according to claim 19, wherein the information received from the first remote information server is also available in other mediums. 21. The telephone set according to claim 20, wherein the first remote information server is also associated with one of: a newspaper; a radio station; and a television station. 22. The telephone set according to claim 10, wherein information received from the first remote information server and displayed relates to a future event, a planned activity or a forecast of a situation. 23. The telephone set according to claim 22, wherein the information received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 24. 
The telephone set according to claim 10, wherein: said telephone set has a digital address in a Local Area Network (LAN); and said telephone set is operative to send the digital address and a request for information, and to receive and display information received from the first remote information server in response to the sent request for information. 25. The telephone set according to claim 1, wherein the cable is connected to concurrently carry the digital data and the power signal using Frequency Division Multiplexing (FDM). 26. The telephone set according to claim 1, wherein the cable is a telephone wire pair connected to a PSTN, and said transceiver is a dial-up modem. 27. The telephone set according to claim 1, wherein the cable is part of a Local Area Network (LAN) in a building. 28. The telephone set according to claim 27, wherein the cable is a LAN cable connected for carrying a LAN signal, said connector is a LAN connector, and said transceiver is a LAN transceiver. 29. The telephone set according to claim 28, wherein: communication over the LAN cable is based on IEEE802.3 standard; said LAN connector is a RJ-45 type connector; and said LAN transceiver is an Ethernet transceiver. 30. The telephone set according to claim 1, wherein said telephone set is further operative to receive High Definition (HD) video, and said video display is operative for displaying the High Definition (HD) video. 31. The telephone set according to claim 30, wherein said telephone set is further operative to receive and display television channels. 32. The telephone set according to claim 31, wherein the High Definition (HD) video is High Definition Television (HDTV). 33. 
The telephone set according to claim 1, wherein said telephone set is further operative for storing at least part of the digital data received from the cable, and said telephone set further comprises a non-volatile first memory coupled to said transceiver for storing at least part of the digital data received by said transceiver. 34. The telephone set according to claim 33, wherein said first memory is based on Flash memory. 35. The telephone set according to claim 1, wherein said video display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 36. The telephone set according to claim 1, further comprising a battery, and wherein said telephone set is operative to be at least in part powered from said battery, and said battery is a primary battery or a rechargeable battery. 37. The telephone set according to claim 1, wherein said telephone set is further adapted to communicate with the Internet via a gateway. 38. The telephone set according to claim 1, wherein said processor is one of: a microprocessor; and a microcomputer, and said telephone set further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said telephone set. 39. The telephone set according to claim 38, wherein the user control of operation of said telephone set comprises at least one out of: turning said telephone set on and off; resetting said telephone set to default values; changing the contrast of said video display; changing the brightness of said video display; changing the zoom of images presented on said video display; selecting a language; and selecting the information to be presented on said video display. 40. The telephone set according to claim 38, wherein said telephone set is further operative for communication with a data unit over the cable, and said firmware includes at least part of a web client for communication with, and accessing information stored in, the data unit. 41. 
The telephone set according to claim 40, wherein said at least part of a web client includes at least part of a graphical web browser. 42. The telephone set according to claim 41, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 43. The telephone set according to claim 1, wherein said telephone set is configured for wall mounting in a residential building. 44. The telephone set according to claim 1, wherein said video display provides alphanumeric information. 45. The telephone set according to claim 1, further comprising a non-volatile memory operative for storing a digital address for uniquely identifying said telephone set in the Local Area Network (LAN) or on the Internet. 46. The telephone set according to claim 45, wherein the digital address is either a MAC address or an IP address. 47. The telephone set according to claim 1, wherein said telephone set is further operative to store and play digital audio data. 48. The telephone set according to claim 1, wherein: said telephone set is further operative to receive and display information from a connected unit; said telephone set further comprises a second connector coupled to said processor for connecting to, and controlling, the unit; and said telephone set is operative to receive digital data comprising information from the unit and to display the information on said display. 49. The telephone set according to claim 48, wherein said telephone set is further operative to transmit digital data to the unit. 50. The telephone set according to claim 49, wherein communication with the unit via said second connector uses a standard serial digital data stream. 51. The telephone set according to claim 48, wherein: the unit has a battery for powering the unit; and said telephone set further comprises a charger coupled to said second connector for charging the battery; and said charger is coupled to be powered from the DC power signal. 52. 
The telephone set according to claim 48, wherein the unit is a handheld unit, and said telephone set is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 53. The telephone set according to claim 52 in combination with a cradle for detachable mounting of the handheld unit, the handheld unit having a mating connector, wherein said connector is part of said cradle, and said second connector connects with the handheld unit mating connector when the handheld unit is mounted in said cradle. 54. The telephone set according to claim 53, wherein said handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone. 55. The telephone set according to claim 1, wherein said telephone set is further operative as a clock for maintaining and displaying the current hour, minute and second. 56. The telephone set according to claim 55, wherein said telephone set is further operative to display the current year, the current month and the current day of the month. 57. The telephone set according to claim 55, wherein said telephone set is further operative to display the time of a last information update or a last communication session. 58. The telephone set according to claim 1, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 59. The telephone set according to claim 1, wherein said single enclosure is constructed to have a form substantially similar to that of a standard telephone set. 60. The telephone set according to claim 1, further comprising a digital to analog converter coupled to said transceiver for converting digital data received by said transceiver to an analog signal. 59. 
The telephone set according to claim 60, wherein the analog signal is an analog media signal for connecting to an analog media unit. 60. The telephone set according to claim 59, wherein the analog media signal is an analog video signal and the analog media unit is an analog video unit. 61. The telephone set according to claim 60, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 62. A telephone set operative for making and receiving telephone calls via, and for being powered from, a cable, the cable being connected for concurrently carrying digital data, including digital video data, and a DC power signal, said telephone set being further operative for displaying information stored in a unit, said telephone set comprising: a first connector for connecting to the cable; an adapter mechanically attachable to the unit, said adapter including a second connector for connecting to the unit when the unit is mechanically attached thereon; a transceiver coupled to said second connector for serial digital data communication with the unit; a display for visually presenting information, said display being coupled to said transceiver for displaying information received from said unit; a power supply coupled to said first connector for being powered by the DC power signal, said power supply being coupled to supply DC power to said transceiver and said display; and a single enclosure housing said first connector, said adapter, said transceiver and said display, wherein said power supply is coupled to said second connector for supplying DC power to the unit when the unit is connected to said second connector. 63. The telephone set according to claim 62, wherein said cable is connected to concurrently carry the digital data and the power signal using Frequency Division Multiplexing (FDM). 64. The telephone set according to claim 62, wherein the unit is a handheld unit. 65. 
The telephone set according to claim 64, wherein the handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone handset. 66. The telephone set according to claim 62, wherein said telephone set is operative for communicating with a first remote information server via the Internet. 67. The telephone set according to claim 66, wherein the communication with a first remote information server is via the connected unit. 68. The telephone set according to claim 67, wherein said telephone set is operative for automatically and periodically communicating with the first remote information server at all times when said telephone set is in operation. 69. The telephone set according to claim 68, wherein said telephone set is further operative for displaying information received from the first remote information server on said display. 70. The telephone set according to claim 62, further comprising a first memory coupled to said transceiver for storing digital data received by said transceiver. 71. The telephone set according to claim 70, wherein said first memory is non-volatile memory that is based on a Flash memory. 72. The telephone set according to claim 62, wherein said single enclosure is constructed to have a form substantially similar to that of a conventional telephone set. 73. The telephone set according to claim 62, further comprising a second memory connected for storing a digital address uniquely identifying said telephone set in a Local Area Network (LAN) or in a Wide Area Network (WAN). 74. The telephone set according to claim 73, wherein the digital address is either a MAC address or an IP address. 75. The telephone set according to claim 62, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 76. The telephone set according to claim 62, wherein the information stored in the unit is digital video data, and wherein said display is a video display. 77. 
The telephone set according to claim 76, wherein said telephone set is further operative to receive and play television channels. 78. The telephone set according to claim 77, wherein said telephone set is further operative to receive High Definition (HD) video, and wherein said display is operative to display the High Definition (HD) video. 79. The telephone set according to claim 78, wherein the High Definition (HD) video is High Definition Television (HDTV). 80. The telephone set according to claim 62, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said transceiver and said display. 81. The telephone set according to claim 80, wherein said processor is one of: a microprocessor; and a microcomputer, and said telephone set further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said telephone set. 82. A system for obtaining digital video content from a video server via the Internet, and for storing and displaying the digital video content on a television set, said system comprising: a video server organized as a web site, storing digital video content, and including web pages as part of the World Wide Web (WWW), connected to the Internet and identified using a web site Uniform Resource Locator (URL); a television set for receiving television signals and comprising a screen for displaying images provided by the received television signals, said television set further comprising a first analog video connector for receiving a first analog video signal for display on said screen; a LAN cable at least in part in walls of a building and connected for concurrently carrying digital video content from said video server and a power signal; and a device connected for storing the digital video content and for displaying images based on the stored digital video content on said screen, said device comprising, in a single enclosure: a LAN 
connector connected to said LAN cable; a LAN transceiver for transmitting digital data to, and receiving digital video data from, said LAN cable; a digital memory coupled to said LAN transceiver for storing digital video content received from said video server via said LAN cable; a second analog video connector connected to said first analog video connector for transmitting the first analog video signal to said television set; and a digital to analog converter coupled between said digital memory and said second analog video connector for converting the digital video data content stored in said digital memory to said first analog video signal; wherein: said device is addressable in the Internet; and said device is operative for automatically and periodically communicating with the video server via the Internet through said LAN cable for receiving and storing the digital video content therefrom, and for transmitting and displaying the received digital video content in analog form on said screen of said television set, and wherein said device is powered only by the power signal carried over said LAN cable. 83. The system according to claim 82, wherein said television set has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 84. The system according to claim 82, wherein the first analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 85. The system according to claim 82, wherein said LAN cable is connected to concurrently carry the digital video and the power signal using Frequency Division Multiplexing (FDM). 86. The system according to claim 82, wherein the power signal is a Direct Current (DC) signal. 87. The system according to claim 82, wherein said LAN cable is connected to a Wide Area Network (WAN). 88. The system according to claim 82, wherein the LAN cable is connected for carrying a LAN signal containing the digital video content. 89. 
The system according to claim 88, wherein: said LAN cable, and communication over said LAN cable, are based on IEEE802.3 standard; said LAN connector is a RJ-45 type connector; and said LAN transceiver is an Ethernet transceiver. 90. The system according to claim 82, wherein said screen is a flat screen that is based on Liquid Crystal Display (LCD) technology. 91. The system according to claim 82, wherein said device further comprises firmware and a processor for executing said firmware, said processor being coupled to control at least said LAN transceiver. 92. The system according to claim 82, wherein communication with the video server is based on TCP/IP. 93. A television set for storing and displaying digital video content and for receiving a television signal, said television set including a flat screen for displaying images provided in the received television signal, and said television set being enclosed in a single enclosure and comprising, in said single enclosure: a non-volatile memory for storing digital video content; a digital to analog converter coupled between said non-volatile memory and said screen for converting the digital video content stored in said non-volatile memory to an analog video signal; and firmware and a processor for executing said firmware, said processor being coupled to control at least said non-volatile memory and said digital to analog converter, wherein said television set is operative in a first state to couple the television signal to said screen for displaying television content, and said television set is operative in a second state to couple the analog video signal to said screen for displaying the digital video content. 94. The television set according to claim 93, wherein said non-volatile memory is Flash memory based. 95. The television set according to claim 93, wherein said single enclosure has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 96. 
The television set according to claim 93, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 97. The television set according to claim 93, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 98. The television set according to claim 93, further comprising: an AC power plug for connecting to an AC power outlet; and a power supply connected to said AC power plug to be powered by power supplied from the AC power outlet, said power supply comprising an AC to DC converter for DC powering said screen and said non-volatile memory. 99. A device for displaying digital video data, for use with a cable connected for concurrently carrying high-definition digital video data and a power signal, the device comprising in a single enclosure: a digital connector for connecting to the cable; a high-definition video display for presenting images, the video display being coupled to said digital connector for displaying the high-definition digital video data carried over the cable; firmware and a processor for executing said firmware, said processor being coupled to control the device operation; and a non-volatile memory storing digital data identifying said device; wherein said non-volatile memory is coupled to said digital connector for being powered from said power signal carried over said cable. 100. The device according to claim 99, wherein the high-definition digital video data includes high-definition digital television (HDTV) data, and wherein said high-definition video display is adapted to display said high-definition digital television (HDTV) data. 101. 
The device according to claim 99, further being part of a television set for receiving and displaying a television signal on said video display, said television set further comprising a first analog video connector coupled to said video display for receiving an analog video signal and for displaying the analog video signal on said video display. 102. The device according to claim 99, wherein said non-volatile memory is based on a Flash memory. 103. The device according to claim 99, wherein said video display comprises a flat screen that is based on one of Liquid Crystal Display (LCD), Field Emission Display (FED), and Cathode Ray Tube (CRT) technologies. 104. The device according to claim 99, wherein said single enclosure has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 105. The device according to claim 99, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 106. The device according to claim 99, wherein said single enclosure is configured for wall mounting in a building. 107. The device according to claim 99, wherein the digital data identifying said device is a digital address uniquely identifying said device in a digital data network. 108. The device according to claim 107, wherein said digital data network is a Local Area Network (LAN) or the Internet. 109. The device according to claim 108, wherein the digital address is either a MAC address or an IP address. 110. The device according to claim 99, wherein the digital data identifying the device is personalized information. 111. The device according to claim 110, wherein the personalized information is a user name or a password. 112. 
The device according to claim 110, wherein the personalized information is set by the user. 113. The device according to claim 110, wherein the personalized information is associated with the physical geographical location of said device. 114. The device according to claim 99, wherein the digital data identifying the device is set during production of said device. 115. The device according to claim 99, further operative for serial and bidirectional communication with a data unit connected to said cable, and for receiving the high-definition digital video data from the data unit. 116. The device according to claim 115, further operative for transmitting the data stored in said non-volatile memory to said data unit over the cable. 117. The device according to claim 115, further comprising a transceiver coupled between said digital connector and said non-volatile memory for transmitting the data stored in said non-volatile memory to said data unit over the cable. 118. The device according to claim 117, wherein said transceiver is connected to said digital connector for being powered from the power signal. 119. The device according to claim 99, wherein the power signal is a Direct Current (DC) signal. 120. The device according to claim 99, wherein the power signal is an Alternating Current (AC) signal. 121. The device according to claim 99, wherein the cable is connected to concurrently carry the high-definition digital video data and the power signal over the same wires. 122. The device according to claim 121, wherein the cable is connected to concurrently carry the high-definition digital video data and the power signal over the same wires using Frequency Division Multiplexing (FDM). 123. 
A telephone set operative for making and receiving telephone calls in a telephone network over a wired connection, said telephone set being further operative for receiving digital data from a first remote information server using wireless communication and for displaying the digital data, said telephone set comprising: a first memory storing a digital address uniquely identifying the telephone set in a digital data network; an antenna for transmitting and receiving digital data over the air; a wireless transceiver coupled to said antenna for bi-directional packet-based digital data communication with a mating wireless transceiver of the same type over the air; a display component coupled to said wireless transceiver for displaying an image based on digital data received via said wireless transceiver; and a single enclosure housing said first memory, said antenna, said wireless transceiver, and said display component, wherein said telephone set is further operative for transmitting the digital address and for receiving and displaying the digital data received from the first remote information server. 124. The telephone set according to claim 123, wherein the first remote information server is identified by a Uniform Resource Locator (URL) on the Internet, and said telephone set further comprises a second memory housed in said enclosure for storing the website URL identifying the first remote information server. 125. The telephone set according to claim 124, wherein said telephone set is operative to communicate with the first remote information server via the Internet. 126. The telephone set according to claim 123, wherein said telephone set is operative for automatically and periodically communicating with the first remote information server at all times when said telephone set is in operation for receiving digital data from the first remote information server. 127. 
The telephone set according to claim 123, wherein the wireless communication is based on Bluetooth, and said wireless transceiver is operative for transmitting and receiving substantially according to Bluetooth standard. 128. The telephone set according to claim 123, wherein the wireless communication is over a Wireless Local Area Network (WLAN), said antenna is a WLAN antenna and said wireless transceiver is a WLAN transceiver. 129. The telephone set according to claim 128, wherein said WLAN is substantially according to IEEE802.11 standard, and said WLAN transceiver is operative to communicate substantially according to IEEE802.11 standard. 130. The telephone set according to claim 123, wherein said telephone set is further operative to store at least part of the digital data received from the first remote information server, and said telephone set further comprises a second memory coupled to said wireless transceiver for storing at least part of the digital data received from the first remote information server via said wireless transceiver. 131. The telephone set according to claim 130, wherein said second memory is non-volatile. 132. The telephone set according to claim 131, wherein said second memory is based on a Flash memory. 133. The telephone set according to claim 123, wherein the digital data includes digital video data, said display component is a video display component, and said telephone set is further operative for receiving and displaying the digital video data. 134. The telephone set according to claim 123, wherein the wireless communication uses a license-free radio frequency band. 135. The telephone set according to claim 134, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 136. The telephone set according to claim 123, wherein the communication with said first remote information server is based on Internet Protocol (IP) suite. 137. 
The telephone set according to claim 136, wherein the communication with the first remote information server is based on TCP/IP. 138. The telephone set according to claim 123, wherein the wireless communication is based on spread spectrum modulation. 139. The telephone set according to claim 123, wherein the digital address is either a MAC address or an IP address. 140. The telephone set according to claim 123, wherein said telephone set is further operative to store and play digital audio data. 141. The telephone set according to claim 123, wherein said telephone set is further operative to receive and display information from a satellite. 142. The telephone set according to claim 123, wherein said display component comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 143. The telephone set according to claim 123, wherein said telephone set is operative for automatically and periodically communicating with the first remote information server at all times when said telephone set is in operation. 144. The telephone set according to claim 123, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said wireless transceiver and said display component. 145. The telephone set according to claim 144, wherein said processor is one of: a microprocessor; and a microcomputer, and said telephone set further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said telephone set. 146. The telephone set according to claim 144, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the remote information server. 147. The telephone set according to claim 146, wherein said at least part of a web client includes at least part of a graphical web browser. 148. 
The telephone set according to claim 123, wherein said telephone set is operative for communicating with a second remote information server via the Internet for receiving information from the second remote information server, and for displaying the information received from the second remote information server. 149. The telephone set according to claim 123, wherein said telephone set is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 150. The telephone set according to claim 149, wherein the pre-set time of day is at least one of: set by the user; set previously in the telephone set; and set by the remote information server in a previous communication session. 151. The telephone set according to claim 123, wherein the digital data received from the first remote information server and displayed contains information relating to a future event, a planned activity, or a forecast of a situation. 152. The telephone set according to claim 151, wherein the information contained in the digital data received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 153. The telephone set according to claim 123, wherein said telephone set is further operative to receive and display High Definition (HD) video, and said display component is operative to display High Definition (HD) video. 154. The telephone set according to claim 153, wherein the High Definition (HD) video is High Definition Television (HDTV). 155. The telephone set according to claim 123, wherein said telephone set is further operative to receive and display, on said display component, information relating to a user-selected geographical region. 156. 
The telephone set according to claim 155, wherein the information includes traffic information, or travel information, or a weather forecast. 157. The telephone set according to claim 123, wherein said telephone set is a mobile telephone set and is operative for making and receiving telephone calls in a cellular telephone network. 158. The telephone set according to claim 123 for use with a cable connected for concurrently carrying a digital data signal and a DC power signal, wherein said telephone set further comprises a rechargeable battery for powering at least part of said telephone set, and said telephone set further comprises a connector for connecting to the cable for communication by said telephone set over the cable and for charging said rechargeable battery by the DC power signal from the cable. 159. The telephone set according to claim 158, wherein the digital data signal is carried as a serial data stream over the cable.
A device for obtaining, storing and displaying information from a remote server. The device has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information.

1. A telephone set operative for making and receiving telephone calls via, and for being powered from, a cable, the cable being connected for concurrently carrying digital data including digital video data and a DC power signal, said telephone set being further operative for displaying the digital video data, and said telephone set comprising: a connector for connecting to the cable; a transceiver coupled to said connector for transmitting digital data to, and receiving digital data from, the cable; a video display coupled to said transceiver for visually displaying the digital video data; firmware and a processor for executing said firmware, said processor being coupled to control at least said transceiver and said video display; and a single enclosure housing said connector, said processor, said transceiver and said video display; wherein the telephone set is addressable in a Local Area Network (LAN), and said telephone set is at least in part powered from the DC power signal carried over the cable. 2. The telephone set according to claim 1, wherein said telephone set is further operative for storing at least part of the digital data, and said telephone set further comprises a first memory coupled to said transceiver for storing at least part of the digital data received by said transceiver. 3. 
The telephone set according to claim 2, wherein said telephone set is further operative for storing at least part of the digital video data, and said first memory is coupled to said transceiver for storing at least part of the digital video data received by said transceiver. 4. The telephone set according to claim 3, wherein said telephone set is further operative to display the stored digital video data, and said video display is coupled to said first memory for displaying the digital video data stored in said first memory. 5. The telephone set according to claim 1, wherein said single enclosure is dimensioned and has an appearance of a conventional flat, wall-mountable framed picture. 6. The telephone set according to claim 1, wherein said single enclosure is dimensioned and has an appearance of a conventional telephone set. 7. The telephone set according to claim 1, wherein said telephone set is operative for communicating with a data unit via the cable. 8. The telephone set according to claim 7, wherein said telephone set is further operative for automatically and periodically communicating with the data unit at all times when said telephone set is in operation. 9. The telephone set according to claim 7, wherein said transceiver is a modem and the data unit is a personal computer. 10. The telephone set according to claim 7, wherein: the cable extends outside a building and is part of a Wide Area Network (WAN); the data unit is a first remote information server outside the building; and the telephone set is connected to the data unit via the Internet. 11. The telephone set according to claim 10, wherein the first remote information server is organized as a website including web pages as part of the World Wide Web (WWW), and is further identified by said telephone set using the website Uniform Resource Locator (URL). 12. 
The telephone set according to claim 10, wherein said telephone set is operative for communicating with a second remote information server via the Internet for receiving information from the second remote information server, and for storing and displaying the information received from the second remote information server. 13. The telephone set according to claim 12, wherein said telephone set is adapted to communicate with the first and second remote information servers for receiving selected and distinct information from each remote information server. 14. The telephone set according to claim 13, wherein said telephone set communicates with the first and second remote servers one at a time. 15. The telephone set according to claim 10, wherein communication with the first remote information server is based on Internet protocol suite. 16. The telephone set according to claim 15, wherein communication with the first remote information server is based on TCP/IP. 17. The telephone set according to claim 10, wherein said telephone set is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 18. The telephone set according to claim 17, wherein the pre-set time of day is at least one of: set by the user; set previously in the telephone set; and set by the remote information server in a previous communication session. 19. The telephone set according to claim 10, wherein information received from said first remote information server is publicly available at no cost. 20. The telephone set according to claim 19, wherein the information received from the first remote information server is also available in other mediums. 21. The telephone set according to claim 20, wherein the first remote information server is also associated with one of: a newspaper; a radio station; and a television station. 22. 
The telephone set according to claim 10, wherein information received from the first remote information server and displayed relates to a future event, a planned activity or a forecast of a situation. 23. The telephone set according to claim 22, wherein the information received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 24. The telephone set according to claim 10, wherein: said telephone set has a digital address in a Local Area Network (LAN); and said telephone set is operative to send the digital address and a request for information, and to receive and display information received from the first remote information server in response to the sent request for information. 25. The telephone set according to claim 1, wherein the cable is connected to concurrently carry the digital data and the power signal using Frequency Division Multiplexing (FDM). 26. The telephone set according to claim 1, wherein the cable is a telephone wire pair connected to a PSTN, and said transceiver is a dial-up modem. 27. The telephone set according to claim 1, wherein the cable is part of a Local Area Network (LAN) in a building. 28. The telephone set according to claim 27, wherein the cable is a LAN cable connected for carrying a LAN signal, said connector is a LAN connector, and said transceiver is a LAN transceiver. 29. The telephone set according to claim 28, wherein: communication over the LAN cable is based on IEEE802.3 standard; said LAN connector is a RJ-45 type connector; and said LAN transceiver is an Ethernet transceiver. 30. The telephone set according to claim 1, wherein said telephone set is further operative to receive High Definition (HD) video, and said video display is operative for displaying the High Definition (HD) video. 31. 
The telephone set according to claim 30, wherein said telephone set is further operative to receive and display television channels. 32. The telephone set according to claim 31, wherein the High Definition (HD) video is High Definition Television (HDTV). 33. The telephone set according to claim 1, wherein said telephone set is further operative for storing at least part of the digital data received from the cable, and said telephone set further comprises a non-volatile first memory coupled to said transceiver for storing at least part of the digital data received by said transceiver. 34. The telephone set according to claim 33, wherein said first memory is based on Flash memory. 35. The telephone set according to claim 1, wherein said video display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 36. The telephone set according to claim 1, further comprising a battery, and wherein said telephone set is operative to be at least in part powered from said battery, and said battery is a primary battery or a rechargeable battery. 37. The telephone set according to claim 1, wherein said telephone set is further adapted to communicate with the Internet via a gateway. 38. The telephone set according to claim 1, wherein said processor is one of: a microprocessor; and a microcomputer, and said telephone set further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said telephone set. 39. The telephone set according to claim 38, wherein the user control of operation of said telephone set comprises at least one out of: turning said telephone set on and off; resetting said telephone set to default values; changing the contrast of said video display; changing the brightness of said video display; changing the zoom of images presented on said video display; selecting a language; and selecting the information to be presented on said video display. 40. 
The telephone set according to claim 38, wherein said telephone set is further operative for communication with a data unit over the cable, and said firmware includes at least part of a web client for communication with, and accessing information stored in, the data unit. 41. The telephone set according to claim 40, wherein said at least part of a web client includes at least part of a graphical web browser. 42. The telephone set according to claim 41, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 43. The telephone set according to claim 1, wherein said telephone set is configured for wall mounting in a residential building. 44. The telephone set according to claim 1, wherein said video display provides alphanumeric information. 45. The telephone set according to claim 1, further comprising a non-volatile memory operative for storing a digital address for uniquely identifying said telephone set in the Local Area Network (LAN) or on the Internet. 46. The telephone set according to claim 45, wherein the digital address is either a MAC address or an IP address. 47. The telephone set according to claim 1, wherein said telephone set is further operative to store and play digital audio data. 48. The telephone set according to claim 1, wherein: said telephone set is further operative to receive and display information from a connected unit; said telephone set further comprises a second connector coupled to said processor for connecting to, and controlling, the unit; and said telephone set is operative to receive digital data comprising information from the unit and displaying the information on said display. 49. The telephone set according to claim 48, wherein said telephone set is further operative to transmit digital data to the unit. 50. The telephone set according to claim 49, wherein communication with the unit via said second connector uses a standard serial digital data stream. 51. 
The telephone set according to claim 48, wherein: the unit has a battery for powering the unit; and said telephone set further comprises a charger coupled to said second connector for charging the battery; and said charger is coupled to be powered from the DC power signal. 52. The telephone set according to claim 48, wherein the unit is a handheld unit, and said telephone set is further adapted to mechanically dock, supply power to, and communicate with the handheld unit. 53. The telephone set according to claim 52 in combination with a cradle for detachable mounting of the handheld unit, the handheld unit having a mating connector, wherein said connector is part of said cradle, and said second connector connects with the handheld unit mating connector when the handheld unit is mounted in said cradle. 54. The telephone set according to claim 53, wherein said handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone. 55. The telephone set according to claim 1, wherein said telephone set is further operative as a clock for maintaining and displaying the current hour, minute and second. 56. The telephone set according to claim 55, wherein said telephone set is further operative to display the current year, the current month and the current day of the month. 57. The telephone set according to claim 55, wherein said telephone set is further operative to display the time of a last information update or a last communication session. 58. The telephone set according to claim 1, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 59. 
The telephone set according to claim 1, wherein said single enclosure is constructed to have a form substantially similar to that of a standard telephone set. 60. The telephone set according to claim 1, further comprising a digital to analog converter coupled to said transceiver for converting digital data received by said transceiver to an analog signal. 59. The telephone set according to claim 60, wherein the analog signal is an analog media signal for connecting to an analog media unit. 60. The telephone set according to claim 59, wherein the analog media signal is an analog video signal and the analog media unit is an analog video unit. 61. The telephone set according to claim 60, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 62. A telephone set operative for making and receiving telephone calls via, and for being powered from, a cable, the cable being connected for concurrently carrying digital data, including digital video data, and a DC power signal, said telephone set being further operative for displaying information stored in a unit, said telephone set comprising: a first connector for connecting to the cable; an adapter mechanically attachable to the unit, said adapter including a second connector for connecting to the unit when the unit is mechanically attached thereon; a transceiver coupled to said second connector for serial digital data communication with the unit; a display for visually presenting information, said display being coupled to said transceiver for displaying information received from said unit; a power supply coupled to said first connector for being powered by the DC power signal, said power supply being coupled to supply DC power to said transceiver and said display; and a single enclosure housing said first connector, said adapter, said transceiver and said display, wherein said power supply is coupled to said second connector for supplying DC power to the unit when the unit 
is connected to said second connector. 63. The telephone set according to claim 62, wherein said cable is connected to concurrently carry the digital data and the power signal using Frequency Division Multiplexing (FDM). 64. The telephone set according to claim 62, wherein the unit is a handheld unit. 65. The telephone set according to claim 64, wherein the handheld unit is a Personal Digital Assistant (PDA), or a cellular telephone handset. 66. The telephone set according to claim 62, wherein said telephone set is operative for communicating with a first remote information server via the Internet. 67. The telephone set according to claim 66, wherein the communication with a first remote information server is via the connected unit. 68. The telephone set according to claim 67, wherein said telephone set is operative for automatically and periodically communicating with the first remote information server at all times when said telephone set is in operation. 69. The telephone set according to claim 68, wherein said telephone set is further operative for displaying information received from the first remote information server on said display. 70. The telephone set according to claim 62, further comprising a first memory coupled to said transceiver for storing digital data received by said transceiver. 71. The telephone set according to claim 70, wherein said first memory is non-volatile memory that is based on a Flash memory. 72. The telephone set according to claim 62, wherein said single enclosure is constructed to have a form substantially similar to that of a conventional telephone set. 73. The telephone set according to claim 62, further comprising a second memory connected for storing a digital address uniquely identifying said telephone set in a Local Area Network (LAN) or in a Wide Area Network (WAN). 74. The telephone set according to claim 73, wherein the digital address is either a MAC address or an IP address. 75. 
The telephone set according to claim 62, wherein said display comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 76. The telephone set according to claim 62, wherein the information stored in the unit is a digital video data, and wherein said display is a video display. 77. The telephone set according to claim 78, wherein said telephone set is further operative to receive and play television channels. 78. The telephone set according to claim 77, wherein said telephone set is further operative to receive High Definition (HD) video, and wherein said display is operative to display the High Definition (HD) video. 79. The telephone set according to claim 78, wherein the High Definition (HD) video is High Definition Television (HDTV). 80. The telephone set according to claim 62, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said transceiver and said display. 81. The telephone set according to claim 80, wherein said processor is one of: a microprocessor; and a microcomputer, and said telephone set further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said telephone set. 82. 
A system for obtaining digital video content from a video server via the Internet, and for storing and displaying the digital video content on a television set, said system comprising: a video server organized as a web site, storing digital video content, and including web pages as part of the World Wide Web (WWW), connected to the Internet and identified using a web site Uniform Resource Locator (URL); a television set for receiving television signals and comprising a screen for displaying images provided by the received television signals, said television set further comprising a first analog video connector for receiving a first analog video signal for display on said screen; a LAN cable at least in part in walls of a building and connected for concurrently carrying digital video content from said video server and a power signal; and a device connected for storing the digital video content and for displaying images based on the stored digital video content on said screen, said device comprising, in a single enclosure: a LAN connector connected to said LAN cable; a LAN transceiver for transmitting digital data to, and receiving digital video data from, said LAN cable; a digital memory coupled to said LAN transceiver for storing digital video content received from said video server via said LAN cable; a second analog video connector connected to said first analog video connector for transmitting the first analog video signal to said television set; and a digital to analog converter coupled between said digital memory and said second analog video connector for converting the digital video data content stored in said digital memory to said first analog video signal; wherein: said device is addressable in the Internet; and said device is operative for automatically and periodically communicating with the video server via the Internet through said LAN cable for receiving and storing the digital video content therefrom, and for transmitting and displaying the received 
digital video content in analog form on said screen of said television set, and wherein said device is powered only by the power signal carried over said LAN cable. 83. The system according to claim 82, wherein said television set has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 84. The system according to claim 82, wherein the first analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 85. The system according to claim 82, wherein said LAN cable is connected to concurrently carry the digital video and the power signal using Frequency Division Multiplexing (FDM). 86. The system according to claim 82, wherein the power signal is a Direct Current (DC) signal. 87. The system according to claim 82, wherein said LAN cable is connected to a Wide Area Network (WAN). 88. The system according to claim 82, wherein the LAN cable is connected for carrying a LAN signal containing the digital video content. 89. The system according to claim 88, wherein: said LAN cable, and communication over said LAN cable, are based on IEEE802.3 standard; said LAN connector is a RJ-45 type connector; and said LAN transceiver is an Ethernet transceiver. 90. The system according to claim 82, wherein said screen is a flat screen that is based on Liquid Crystal Display (LCD) technology. 91. The system according to claim 82, wherein said device further comprises firmware and a processor for executing said firmware, said processor being coupled to control at least said LAN transceiver. 92. The system according to claim 82, wherein communication with the video server is based on TCP/IP. 93. 
A television set for storing and displaying digital video content and for receiving a television signal, said television set including a flat screen for displaying images provided in the received television signal, and said television set being enclosed in a single enclosure and comprising, in said single enclosure: a non-volatile memory for storing digital video data content; a digital to analog converter coupled between said digital memory and said screen for converting the digital video content stored in said digital memory to an analog video signal; and firmware and a processor for executing said firmware, said processor being coupled to control at least said non-volatile memory and said digital to analog converter, wherein said television set is operative in a first state to couple the television signal to said screen for displaying television content, and said television set is operative in a second state to couple the analog video signal to said screen for displaying the digital video content. 94. The television set according to claim 93, wherein said non-volatile memory is Flash memory based. 95. The television set according to claim 93, wherein said single enclosure has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 96. The television set according to claim 93, wherein the analog video signal is an S-Video signal or a composite video signal in a PAL or NTSC format. 97. The television set according to claim 93, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 98. 
The television set according to claim 93, further comprising: an AC power plug for connecting to an AC power outlet; and a power supply connected to said AC power plug to be powered by power supplied from the AC power outlet, said power supply comprising an AC to DC converter for DC powering said screen and said non-volatile memory. 99. A device for displaying digital video data, for use with a cable connected for concurrently carrying high-definition digital video data and a power signal, the device comprising in a single enclosure: a digital connector for connecting to the cable; a high-definition video display for presenting images, the video display being coupled to said digital connector for displaying the high-definition digital video data carried over the cable; firmware and a processor for executing said firmware, said processor being coupled to control the device operation; and a non-volatile memory storing digital data identifying said device; wherein said non-volatile memory is coupled to said digital connector for being powered from said power signal carried over said cable. 100. The device according to claim 99, wherein the high-definition digital video data includes high-definition digital television (HDTV) data, and wherein said high-definition video display is adapted to display said high-definition digital television (HDTV) data. 101. The device according to claim 99, further being part of a television set for receiving and displaying a television signal on said video display, said television set further comprising a first analog video connector coupled to said video display for receiving an analog video signal and for displaying the analog video signal on said video display. 102. The device according to claim 99, wherein said non-volatile memory is based on a Flash memory. 103. 
The device according to claim 99, wherein said video display comprises a flat screen that is based on one of Liquid Crystal Display (LCD), Field Emission Display (FED), and Cathode Ray Tube (CRT) technologies. 104. The device according to claim 99, wherein said single enclosure has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 105. The device according to claim 99, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 106. The device according to claim 99, wherein said single enclosure is configured for wall mounting in a building. 107. The device according to claim 99, wherein the digital data identifying said device is a digital address uniquely identifying said device in a digital data network. 108. The device according to claim 107, wherein said digital data network is a Local Area Network (LAN) or the Internet. 109. The device according to claim 108, wherein the digital address is either a MAC address or an IP address. 110. The device according to claim 99, wherein the digital data identifying the device is personalized information. 111. The device according to claim 110, wherein the personalized information is a user name or a password. 112. The device according to claim 110, wherein the personalized information is set by the user. 113. The device according to claim 110, wherein the personalized information is associated with the physical geographical location of said device. 114. The device according to claim 99, wherein the digital data identifying the device is set during production of said device. 115. 
The device according to claim 99, further operative for serial and bidirectional communication with a data unit connected to said cable, and for receiving the high-definition digital video data from the data unit. 116. The device according to claim 115, further operative for transmitting the data stored in said non-volatile memory to said data unit over the cable. 117. The device according to claim 115, further comprising a transceiver coupled between said digital connector and said non-volatile memory for transmitting the data stored in said non-volatile memory to said data unit over the cable. 118. The device according to claim 117, wherein said transceiver is connected to said digital connector for being powered from the power signal. 119. The device according to claim 99, wherein the power signal is a Direct Current (DC) signal. 120. The device according to claim 99, wherein the power signal is an Alternating Current (AC) signal. 121. The device according to claim 99, wherein the cable is connected to concurrently carry the high-definition digital video data and the power signal over the same wires. 122. The device according to claim 121, wherein the cable is connected to concurrently carry the high-definition digital video data and the power signal over the same wires using Frequency Division Multiplexing (FDM). 123. 
A telephone set operative for making and receiving telephone calls in a telephone network over a wired connection, said telephone set being further operative for receiving digital data from a first remote information server using wireless communication and for displaying the digital data, said telephone set comprising: a first memory storing a digital address uniquely identifying the telephone set in a digital data network; an antenna for transmitting and receiving digital data over the air; a wireless transceiver coupled to said antenna for bi-directional packet-based digital data communication with a mating wireless transceiver of the same type over the air; a display component coupled to said wireless transceiver for displaying an image based on digital data received via said wireless transceiver; and a single enclosure housing said first memory, said antenna, said wireless transceiver, and said display component, wherein said telephone set is further operative for transmitting the digital address and for receiving and displaying the digital data received from the first remote information server. 124. The telephone set according to claim 123, wherein the first remote information server is identified by a Uniform Resource Locator (URL) on the Internet, and said telephone set further comprises a second memory housed in said enclosure for storing the website URL identifying the first remote information server. 125. The telephone set according to claim 124, wherein said telephone set is operative to communicate with the first remote information server via the Internet. 126. The telephone set according to claim 123, wherein said telephone set is operative for automatically and periodically communicating with the first remote information server at all times when said telephone set is in operation for receiving digital data from the first remote information server. 127. 
The telephone set according to claim 123, wherein the wireless communication is based on Bluetooth, and said wireless transceiver is operative for transmitting and receiving substantially according to Bluetooth standard. 128. The telephone set according to claim 123, wherein the wireless communication is over a Wireless Local Area Network (WLAN), said antenna is a WLAN antenna and said wireless transceiver is a WLAN transceiver. 129. The telephone set according to claim 128, wherein said WLAN is substantially according to IEEE802.11 standard, and said WLAN transceiver is operative to communicate substantially according to IEEE802.11 standard. 130. The telephone set according to claim 123, wherein said telephone set is further operative to store at least part of the digital data received from the first remote information server, and said telephone set further comprises, a second memory coupled to said wireless transceiver for storing at least part of the digital data received from the first remote information server via said wireless transceiver. 131. The telephone set according to claim 130, wherein said second memory is non-volatile. 132. The telephone set according to claim 131, wherein said second memory is based on a Flash memory. 133. The telephone set according to claim 123, wherein the digital data includes digital video data, said display component is a video display component, and said telephone set is further operative for receiving and displaying the digital video data. 134. The telephone set according to claim 123, wherein the wireless communication uses a license-free radio frequency band. 135. The telephone set according to claim 134, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 136. The telephone set according to claim 123, wherein the communication with said first remote information server is based on Internet Protocol (IP) suite. 137. 
The telephone set according to claim 136, wherein the communication with the first remote information server is based on TCP/IP. 138. The telephone set according to claim 123, wherein the wireless communication is based on spread spectrum modulation. 139. The telephone set according to claim 123, wherein the digital address is either a MAC address or an IP address. 140. The telephone set according to claim 123, wherein said telephone set is further operative to store and play digital audio data. 141. The telephone set according to claim 123, wherein said telephone set is further operative to receive and display information from a satellite. 142. The telephone set according to claim 123, wherein said display component comprises a flat screen that is based on Liquid Crystal Display (LCD) technology. 143. The telephone set according to claim 123, wherein said telephone set is operative for automatically and periodically communicating with the first remote information server at all times when said telephone set is in operation. 144. The telephone set according to claim 123, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said wireless transceiver and said display component. 145. The telephone set according to claim 144, wherein said processor is one of: a microprocessor; and a microcomputer, and said telephone set further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said telephone set. 146. The telephone set according to claim 144, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the remote information server. 147. The telephone set according to claim 146, wherein said at least part of a web client includes at least part of a graphical web browser. 148. 
The telephone set according to claim 123, wherein said telephone set is operative for communicating with a second remote information server via the Internet for receiving information from the second remote information server, and for displaying the information received from the second remote information server. 149. The telephone set according to claim 123, wherein said telephone set is operative to initiate a communication with the first remote information server on a daily basis at a pre-set time of day (TOD). 150. The telephone set according to claim 149, wherein the pre-set time of day is at least one of: set by the user; set previously in the telephone set; and set by the remote information server in a previous communication session. 151. The telephone set according to claim 123, wherein the digital data received from the first remote information server and displayed contains information relating to a future event, a planned activity, or a forecast of a situation. 152. The telephone set according to claim 151, wherein the information contained in the digital data received from the first remote information server includes at least one of: a weather forecast; a future sports event; a future culture event; a future entertainment event; a TV station guide; and a radio station guide. 153. The telephone set according to claim 123, wherein said telephone set is further operative to receive and display High Definition (HD) video, and said display component is operative to display High Definition (HD) video. 154. The telephone set according to claim 153, wherein the High Definition (HD) video is High Definition Television (HDTV). 155. The telephone set according to claim 123, wherein said telephone set is further operative to receive and display, on said display component, information relating to a user-selected geographical location region. 156. 
The telephone set according to claim 155, wherein the information includes traffic information, or travel information, or a weather forecast. 157. The telephone set according to claim 123, wherein said telephone set is a mobile telephone set and is operative for making and receiving telephone calls in a cellular telephone network. 158. The telephone set according to claim 123 for use with a cable connected for concurrently carrying a digital data signal and a DC power signal, wherein said telephone set further comprises a rechargeable battery for powering at least part of said telephone set, and said telephone set further comprises a connector for connecting to the cable for communication by said telephone set over the cable and for charging said rechargeable battery by the DC power signal from the cable. 159. The telephone set according to claim 158 wherein the digital data signal is carried as a serial data stream over the cable.
2,600
9,989
9,989
15,195,918
2,694
A computing device is provided, which includes an input device, a display device, and a processor configured to, at a rendering stage of a rendering pipeline, render visual scene data to a frame buffer, and generate a signed distance field of edges of vector graphic data, and, at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene, receive post rendering user input via the input device that updates the user perspective, reproject the rendered visual scene data in the frame buffer based on the updated user perspective, reproject data of the signed distance field based on an updated user perspective, evaluate the signed distance field to generate reprojected vector graphic data, and generate a composite image including the reprojected rendered visual scene data and the reprojected graphic data, and display the composite image on the display device.
1. A computing device, comprising: an input device; a display device; and a processor configured to: at a rendering stage of a rendering pipeline: determine based on data output by an application program a scene from a user perspective, the scene including visual scene data and vector graphic data, the user perspective determined based on user input from the input device; render the visual scene data as two dimensional pixel data to a frame buffer; and generate a signed distance field of edges of the vector graphic data; at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene: receive post rendering user input via the input device that updates the user perspective; reproject the rendered visual scene data in the frame buffer based on the updated user perspective; reproject data of the signed distance field based on the updated user perspective; evaluate reprojected data of the signed distance field to generate reprojected vector graphic data; generate a composite image including the reprojected rendered visual scene data and the reprojected graphic data; and display the composite image on the display device. 2. The computing device of claim 1, wherein a value of each pixel in the signed distance field represents a distance to a nearest edge of a vector graphic in the vector graphic data, or wherein a plurality of values are stored in each pixel in the signed distance field representing distances to each of a plurality of edges in the vicinity of the vector graphic in the vector graphic data. 3. The computing device of claim 2, wherein each pixel in the signed distance field further includes a color or texture value. 4. The computing device of claim 2, wherein each pixel in the signed distance field further includes a depth value for that pixel in the scene. 5. The computing device of claim 1, wherein the vector graphic data is text data. 6. 
The computing device of claim 1, wherein the processor is further configured to: generate a graphical user interface overlay that is locked to a viewport of the display device; and generate the composite image including the reprojected rendered visual scene data, the reprojected vector graphic data, and the graphical user interface overlay. 7. The computing device of claim 1, wherein the reprojection stage of the rendering pipeline is executed on a dedicated processing device separate from the rendering stage of the rendering pipeline. 8. The computing device of claim 1, wherein the computing device is a head mounted display device, and the input device includes sensors configured to detect head movement of a user of the head mounted display device. 9. The computing device of claim 8, wherein the head mounted display includes an inward facing image sensor configured to track a user's gaze direction, and the processor is further configured to: generate the signed distance field to include a higher resolution of signed distance field data for vector graphic data near the user's gaze direction than a resolution of signed distance field data for vector graphic data peripheral to the user's gaze direction. 10. 
A computer-implemented method, comprising: at a rendering stage of a rendering pipeline: determining based on data output by an application program a scene from a user perspective, the scene including visual scene data and vector graphic data, the user perspective determined based on user input from an input device; rendering the visual scene data as two dimensional pixel data to a frame buffer; and generating a signed distance field of edges of the vector graphic data; at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene: receiving post rendering user input via the input device that updates the user perspective; reprojecting the rendered visual scene data in the frame buffer based on the updated perspective; reprojecting data of the signed distance field based on the updated perspective; evaluating reprojected data of the signed distance field to generate reprojected vector graphic data; generating a composite image including the reprojected rendered visual scene data and the reprojected vector graphic data; and displaying the composite image on the display device. 11. The method of claim 10, wherein a value of each pixel in the signed distance field represents a distance to a nearest edge of a vector graphic in the vector graphic data, or wherein a plurality of values are stored in each pixel in the signed distance field representing distances to each of a plurality of edges in the vicinity of the vector graphic in the vector graphic data. 12. The method of claim 11, wherein each pixel in the signed distance field further includes a color or texture value. 13. The method of claim 11, wherein each pixel in the signed distance field further includes a depth value for that pixel in the scene. 14. The method of claim 10, wherein the vector graphic data is text data. 15. 
The method of claim 10, further comprising: generating a graphical user interface overlay that is locked to a viewport of the display device; and generating the composite image including the reprojected rendered visual scene data, the reprojected vector graphic data, and the graphical user interface overlay. 16. The method of claim 10, wherein the reprojection stage of the rendering pipeline is executed on a dedicated processing device separate from the rendering stage of the rendering pipeline. 17. The method of claim 10, wherein the method is implemented on a head mounted display device, and the input device includes sensors configured to detect head movement of a user of the head mounted display device. 18. The method of claim 17, wherein the head mounted display includes an inward facing image sensor configured to track a user's gaze direction, and the method further comprises: generating the signed distance field to include a higher resolution of signed distance field data for vector graphic data near the user's gaze direction than a resolution of signed distance field data for vector graphic data peripheral to the user's gaze direction. 19. 
A computer-implemented method, comprising: in a rendering pipeline determining a user perspective based on input data from an input device at a first moment in time; rendering a composite image for display including a first layer with two dimensional pixel data representing a scene and a second layer with vector graphics data, the second layer being encoded in a signed distance field format, based on the user perspective; prior to displaying the rendered composite image, determining an updated user perspective based on updated user input data from the user input device; reprojecting the rendered pixel data and the vector graphics data encoded in the signed distance field format based on the updated perspective; evaluating reprojected data of the signed distance field to generate reprojected vector graphic data; generating an updated composite image including the reprojected rendered pixel data and the reprojected graphic data; and displaying the updated composite image on a display device. 20. The method of claim 19, wherein the display device is a head mounted display device that includes an at least partially see through display on which the updated composite image is displayed, and the input device includes one or more sensors that sense position and orientation of the head mounted display device.
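The claims above describe re-evaluating a signed distance field after a late perspective update so vector edges stay sharp instead of being blurred by resampling a rasterized bitmap. A minimal Python sketch of that idea, assuming a single circular "glyph" as the vector shape; the `circle_sdf` shape, the smoothing width, and the offset values are illustrative assumptions, not the claimed method:

```python
import math

def circle_sdf(px, py, cx, cy, r):
    # Signed distance to a circle's edge: negative inside, zero on the
    # edge, positive outside.
    return math.hypot(px - cx, py - cy) - r

def evaluate(sdf_value, smoothing=0.75):
    # Map a signed distance to pixel coverage in [0, 1]; the edge stays
    # crisp no matter where the field is sampled.
    t = max(-smoothing, min(smoothing, sdf_value))
    return 0.5 - t / (2 * smoothing)

# "Reprojection" here is just shifting the sample position: rather than
# resampling an already-rasterized image (which softens edges), shift the
# query point by the perspective-update offset and re-evaluate the field.
dx, dy = 3.2, -1.7  # hypothetical post-render perspective offset
coverage = evaluate(circle_sdf(10.0 + dx, 4.0 + dy, 10.0, 4.0, 5.0))
```

Evaluating the analytic field at the shifted position reproduces the edge exactly, which is the property the claims rely on when compositing reprojected vector graphics over reprojected pixel data.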
A computing device is provided, which includes an input device, a display device, and a processor configured to, at a rendering stage of a rendering pipeline, render visual scene data to a frame buffer, and generate a signed distance field of edges of vector graphic data, and, at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene, receive post rendering user input via the input device that updates the user perspective, reproject the rendered visual scene data in the frame buffer based on the updated user perspective, reproject data of the signed distance field based on an updated user perspective, evaluate the signed distance field to generate reprojected vector graphic data, and generate a composite image including the reprojected rendered visual scene data and the reprojected graphic data, and display the composite image on the display device.1. A computing device, comprising: an input device; a display device; and a processor configured to: at a rendering stage of a rendering pipeline: determine based on data output by an application program a scene from a user perspective, the scene including visual scene data and vector graphic data, the user perspective determined based on user input from the input device; render the visual scene data as two dimensional pixel data to a frame buffer; and generate a signed distance field of edges of the vector graphic data; at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene: receive post rendering user input via the input device that updates the user perspective; reproject the rendered visual scene data in the frame buffer based on the updated user perspective; reproject data of the signed distance field based on the updated user perspective; evaluate reprojected data of the signed distance field to generate reprojected vector graphic data; generate a composite image including the reprojected rendered visual scene data and the reprojected 
graphic data; and display the composite image on the display device. 2. The computing device of claim 1, wherein a value of each pixel in the signed distance field represents a distance to a nearest edge of a vector graphic in the vector graphic data, or wherein a plurality of values are stored in each pixel in the signed distance field representing distances to each of a plurality of edges in the vicinity of the vector graphic in the vector graphic data. 3. The computing device of claim 2, wherein each pixel in the signed distance field further includes a color or texture value. 4. The computing device of claim 2, wherein each pixel in the signed distance field further includes a depth value for that pixel in the scene. 5. The computing device of claim 1, wherein the vector graphic data is text data. 6. The computing device of claim 1, wherein the processor is further configured to: generate a graphical user interface overlay that is locked to a viewport of the display device; and generate the composite image including the reprojected rendered visual scene data, the reprojected vector graphic data, and the graphical user interface overlay. 7. The computing device of claim 1, wherein the reprojection stage of the rendering pipeline is executed on a dedicated processing device separate from the rendering stage of the rendering pipeline. 8. The computing device of claim 1, wherein the computing device is a head mounted display device, and the input device includes sensors configured to detect head movement of a user of the head mounted display device. 9. 
The computing device of claim 8, wherein the head mounted display includes an inward facing image sensor configured to track a user's gaze direction, and the processor is further configured to: generate the signed distance field to include a higher resolution of signed distance field data for vector graphic data near the user's gaze direction than a resolution of signed distance field data for vector graphic data peripheral to the user's gaze direction. 10. A computer-implemented method, comprising: at a rendering stage of a rendering pipeline: determining based on data output by an application program a scene from a user perspective, the scene including visual scene data and vector graphic data, the user perspective determined based on user input from an input device; rendering the visual scene data as two dimensional pixel data to a frame buffer; and generating a signed distance field of edges of the vector graphic data; at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene: receiving post rendering user input via the input device that updates the user perspective; reprojecting the rendered visual scene data in the frame buffer based on the updated perspective; reprojecting data of the signed distance field based on the updated perspective; evaluating reprojected data of the signed distance field to generate reprojected vector graphic data; generating a composite image including the reprojected rendered visual scene data and the reprojected vector graphic data; and displaying the composite image on the display device. 11. The method of claim 10, wherein a value of each pixel in the signed distance field represents a distance to a nearest edge of a vector graphic in the vector graphic data, or wherein a plurality of values are stored in each pixel in the signed distance field representing distances to each of a plurality of edges in the vicinity of the vector graphic in the vector graphic data. 12. 
The method of claim 11, wherein each pixel in the signed distance field further includes a color or texture value. 13. The method of claim 11, wherein each pixel in the signed distance field further includes a depth value for that pixel in the scene. 14. The method of claim 10, wherein the vector graphic data is text data. 15. The method of claim 10, further comprising: generating a graphical user interface overlay that is locked to a viewport of the display device; and generating the composite image including the reprojected rendered visual scene data, the reprojected vector graphic data, and the graphical user interface overlay. 16. The method of claim 10, wherein the reprojection stage of the rendering pipeline is executed on a dedicated processing device separate from the rendering stage of the rendering pipeline. 17. The method of claim 10, wherein the method is implemented on a head mounted display device, and the input device includes sensors configured to detect head movement of a user of the head mounted display device. 18. The method of claim 17, wherein the head mounted display includes an inward facing image sensor configured to track a user's gaze direction, and the method further comprises: generating the signed distance field to include a higher resolution of signed distance field data for vector graphic data near the user's gaze direction than a resolution of signed distance field data for vector graphic data peripheral to the user's gaze direction. 19. 
A computer-implemented method, comprising: in a rendering pipeline determining a user perspective based on input data from an input device at a first moment in time; rendering a composite image for display including a first layer with two dimensional pixel data representing a scene and a second layer with vector graphics data, the second layer being encoded in a signed distance field format, based on the user perspective; prior to displaying the rendered composite image, determining an updated user perspective based on updated user input data from the user input device; reprojecting the rendered pixel data and the vector graphics data encoded in the signed distance field format based on the updated perspective; evaluating reprojected data of the signed distance field to generate reprojected vector graphic data; generating an updated composite image including the reprojected rendered pixel data and the reprojected graphic data; and displaying the updated composite image on a display device. 20. The method of claim 19, wherein the display device is a head mounted display device that includes an at least partially see through display on which the updated composite image is displayed, and the input device includes one or more sensors that sense position and orientation of the head mounted display device.
2,600
9,990
9,990
15,966,307
2,683
An example locating assembly includes, among other things, a display within an electrified vehicle, a vehicle symbol presented on the display, and a charge port symbol presented on the display. The charge port symbol is positioned relative to the vehicle symbol to indicate a position of a charge port. An example locating method includes displaying a vehicle symbol and a charge port symbol on a display of an electrified vehicle. The charge port symbol is positioned relative to the vehicle symbol to indicate a position of a charge port.
1. A locating assembly, comprising: a display within an electrified vehicle; a vehicle symbol presented on the display; and a charge port symbol presented on the display, the charge port symbol positioned relative to the vehicle symbol to indicate a position of a charge port. 2. The locating assembly of claim 1, wherein the vehicle symbol is a battery state of charge icon. 3. The locating assembly of claim 2, wherein the vehicle symbol is battery shaped. 4. The locating assembly of claim 1, wherein a vertically upper end of the vehicle symbol represents a front portion of the electrified vehicle, and a vertically lower end of the vehicle symbol represents a rear portion of the electrified vehicle. 5. The locating assembly of claim 1, wherein the vehicle symbol includes a visual indicator representing a state of charge of a traction battery of the electrified vehicle. 6. The locating assembly of claim 1, wherein the vehicle symbol represents an overhead view of the electrified vehicle. 7. The locating assembly of claim 1, wherein the charge port symbol represents a position of the charge port along a longitudinal axis of the electrified vehicle. 8. The locating assembly of claim 1, wherein the charge port symbol is a first charge port symbol and the charge port is a first charge port, and further comprising a second charge port symbol presented on the display, the second charge port symbol positioned relative to the vehicle symbol to indicate the position of a second charge port. 9. The locating assembly of claim 8, wherein the first charge port is an AC charge port, and the second charge port is a DC charge port. 10. The locating assembly of claim 8, wherein a color of the first charge port symbol is different than a color of the second charge port symbol. 11. The locating assembly of claim 8, wherein a shape of the first charge port symbol is different than a shape of the second charge port symbol. 12. 
A locating method, comprising: displaying a vehicle symbol and a charge port symbol on a display of an electrified vehicle, the charge port symbol positioned relative to the vehicle symbol to indicate a position of a charge port. 13. The locating method of claim 12, further comprising altering the vehicle symbol in response to a state of charge of a traction battery of the electrified vehicle. 14. The locating method of claim 12, further comprising representing a front portion of the vehicle with a vertically upper end of the vehicle symbol and a rear portion of the vehicle with a vertically lower end of the vehicle symbol. 15. The locating method of claim 12, wherein the vehicle symbol represents an overhead view of the electrified vehicle. 16. The locating method of claim 12, further comprising displaying the charge port symbol relative to the vehicle symbol such that the charge port symbol represents a position of the charge port along a longitudinal axis of the electrified vehicle. 17. The locating method of claim 12, wherein the charge port symbol is a first charge port symbol and the charge port is a first charge port, and further comprising displaying a second charge port symbol on the display, the second charge port symbol positioned relative to the vehicle symbol to indicate the position of a second charge port. 18. The locating method of claim 17, wherein the first charge port is an AC charge port and the second charge port is a DC charge port. 19. The locating method of claim 17, further comprising displaying the first charge port symbol in a first color and displaying the second charge port symbol in a different, second color. 20. The locating method of claim 17, wherein a shape of the first charge port symbol is different than a shape of the second charge port symbol.
An example locating assembly includes, among other things, a display within an electrified vehicle, a vehicle symbol presented on the display, and a charge port symbol presented on the display. The charge port symbol is positioned relative to the vehicle symbol to indicate a position of a charge port. An example locating method includes displaying a vehicle symbol and a charge port symbol on a display of an electrified vehicle. The charge port symbol is positioned relative to the vehicle symbol to indicate a position of a charge port.1. A locating assembly, comprising: a display within an electrified vehicle; a vehicle symbol presented on the display; and a charge port symbol presented on the display, the charge port symbol positioned relative to the vehicle symbol to indicate a position of a charge port. 2. The locating assembly of claim 1, wherein the vehicle symbol is a battery state of charge icon. 3. The locating assembly of claim 2, wherein the vehicle symbol is battery shaped. 4. The locating assembly of claim 1, wherein a vertically upper end of the vehicle symbol represents a front portion of the electrified vehicle, and a vertically lower end of the vehicle symbol represents a rear portion of the electrified vehicle. 5. The locating assembly of claim 1, wherein the vehicle symbol includes a visual indicator representing a state of charge of a traction battery of the electrified vehicle. 6. The locating assembly of claim 1, wherein the vehicle symbol represents an overhead view of the electrified vehicle. 7. The locating assembly of claim 1, wherein the charge port symbol represents a position of the charge port along a longitudinal axis of the electrified vehicle. 8. 
The locating assembly of claim 1, wherein the charge port symbol is a first charge port symbol and the charge port is a first charge port, and further comprising a second charge port symbol presented on the display, the second charge port symbol positioned relative to the vehicle symbol to indicate the position of a second charge port. 9. The locating assembly of claim 8, wherein the first charge port is an AC charge port, and the second charge port is a DC charge port. 10. The locating assembly of claim 8, wherein a color of the first charge port symbol is different than a color of the second charge port symbol. 11. The locating assembly of claim 8, wherein a shape of the first charge port symbol is different than a shape of the second charge port symbol. 12. A locating method, comprising: displaying a vehicle symbol and a charge port symbol on a display of an electrified vehicle, the charge port symbol positioned relative to the vehicle symbol to indicate a position of a charge port. 13. The locating method of claim 12, further comprising altering the vehicle symbol in response to a state of charge of a traction battery of the electrified vehicle. 14. The locating method of claim 12, further comprising representing a front portion of the vehicle with a vertically upper end of the vehicle symbol and a rear portion of the vehicle with a vertically lower end of the vehicle symbol. 15. The locating method of claim 12, wherein the vehicle symbol represents an overhead view of the electrified vehicle. 16. The locating method of claim 12, further comprising displaying the charge port symbol relative to the vehicle symbol such that the charge port symbol represents a position of the charge port along a longitudinal axis of the electrified vehicle. 17. 
The locating method of claim 12, wherein the charge port symbol is a first charge port symbol and the charge port is a first charge port, and further comprising displaying a second charge port symbol on the display, the second charge port symbol positioned relative to the vehicle symbol to indicate the position of a second charge port. 18. The locating method of claim 17, wherein the first charge port is an AC charge port and the second charge port is a DC charge port. 19. The locating method of claim 17, further comprising displaying the first charge port symbol in a first color and displaying the second charge port symbol in a different, second color. 20. The locating method of claim 17, wherein a shape of the first charge port symbol is different than a shape of the second charge port symbol.
2,600
9,991
9,991
14,311,727
2,622
In one aspect, a first device includes a display, a processor, and a memory accessible to the processor. The memory bears instructions executable by the processor to receive data from an input device pertaining to the orientation of the input device and, at least in part based on the data, determine whether the input device is positioned to provide input to the first device.
1. A first device, comprising: a display; a processor; and a memory accessible to the processor and bearing instructions executable by the processor to: receive first data from an input device pertaining to an orientation of the input device; receive second data representing an orientation of the first device relative to the Earth's gravity; and at least in part based on the first and second data, determine whether the input device is positioned to provide input to the first device. 2. The first device of claim 1, wherein the input device is a stylus and wherein the display is a touch-enabled display. 3. The first device of claim 2, wherein the determination whether the input device is positioned to provide input to the first device comprises a determination whether the stylus is positioned to provide input to the touch-enabled display. 4. The first device of claim 3, wherein the determination whether the stylus is positioned to provide input to the touch-enabled display comprises a determination whether a longitudinal axis established by the stylus is at least substantially perpendicular to a plane established by the touch-enabled display. 5. The first device of claim 4, comprising an orientation sensor, wherein data from the orientation sensor is used for the determination whether the longitudinal axis of the stylus is at least substantially perpendicular to the plane established by the touch-enabled display. 6. The first device of claim 5, wherein substantially perpendicular to the plane is within a threshold number of degrees from perpendicular to the plane. 7. The first device of claim 1, wherein the instructions are further executable to, in response to a determination that the input device is positioned to provide input to the first device, enable execution of one or more functions at the first device based on input from the input device, the execution of one or more functions based on input from the input device being otherwise disabled. 8. 
The first device of claim 1, wherein the instructions are further executable to, in response to a determination that the input device is positioned to provide input to the first device, present a user interface (UI) on the display. 9. The first device of claim 8, wherein the UI is a first UI, and wherein a second UI different from the first UI is removed from the display in response to the determination that the input device is positioned to provide input to the first device. 10. The first device of claim 9, wherein the first UI comprises an area for input from the input device, and wherein the second UI comprises a keyboard. 11. The first device of claim 1, wherein the instructions are further executable to receive data from the input device pertaining to the location of the input device, and wherein the determination of whether the input device is positioned to provide input to the first device comprises a determination based on the data from the input device pertaining to the location of the input device of whether the input device is within a threshold distance to the first device. 12. A method, comprising: enabling execution of one or more functions at a first device based on input from a stylus in response to a determination that the stylus is oriented to provide input to the first device; disabling execution of the one or more functions at the first device based on the input from the stylus in response to a determination that the stylus is not oriented to provide input to the first device; and presenting on the first device at least one of: a selector element selectable to select use of, in addition to using orientation data, touch sensor data received from an input device to determine whether the stylus is oriented to provide input; a selector element selectable to select use of, in addition to using orientation data, distance data representing a distance between the stylus and the first device to determine whether the stylus is oriented to provide input. 
13. The method of claim 12, comprising, in response to the determination that the stylus is not oriented to provide input to the first device, enabling execution of one or more functions at the first device based on touch input from a body part of a person. 14. The method of claim 12, comprising, in response to the determination that the stylus is oriented to provide input to the first device, disabling execution of one or more functions at the first device based on touch input from a body part of a person. 15. The method of claim 12, wherein the determinations are made at least in part based on data received at the first device from the stylus pertaining to whether at least one touch sensor on the stylus detects the presence of a person. 16. The method of claim 12, wherein the determinations are made at least in part based on a determination of whether the distance between the first device and the stylus is at least one of at a distance threshold and within a distance threshold. 17. The method of claim 12, wherein the determinations are made based at least in part on data from the stylus received at the first device pertaining to the orientation of the stylus, and wherein the determinations are made based at least in part on data from an orientation sensor on the first device pertaining to the orientation of the first device, the data from the stylus being compared to the data from the orientation sensor to determine whether a longitudinal axis established by the stylus is at least within a threshold number of degrees from perpendicular to a plane established by a display of the first device. 18. 
The method of claim 12, further comprising: in response to the determination that the stylus is oriented to provide input to the first device, presenting a first user interface (UI) on a touch-enabled display of the first device for receiving input from the stylus; and in response to the determination that the stylus is not oriented to provide input to the first device, presenting a second UI different from the first UI on the touch-enabled display for receiving touch input from the body of a person, the first UI and the second UI not being simultaneously presented. 19. A computer readable storage medium that is not a transient signal, the computer readable storage medium comprising instructions executable by a processor to: receive first data from an input device pertaining to an orientation of the input device; receive second data from an orientation sensor on a first device different from the input device; and at least in part based on the first and second data, determine whether the input device is oriented to provide input to the first device different from the input device. 20. The computer readable storage medium of claim 19, wherein the instructions are executable to: in response to a determination that the input device is oriented to provide input to the first device, present a first application on a display for receiving input from the input device; and in response to a determination that the input device is not oriented to provide input to the first device, present a second application on the display different from the first application for receiving input from the body of a person; wherein the first and second applications are not simultaneously presented.
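Claims 4 through 6 above define "substantially perpendicular" as the stylus's longitudinal axis lying within a threshold number of degrees from perpendicular to the display plane, i.e. nearly parallel to the display's normal vector. A hedged Python sketch of one way such a check could be computed from two orientation vectors; the vector names and the 15-degree threshold are illustrative assumptions, not values from the claims:

```python
import math

def axis_within_threshold(stylus_axis, display_normal, threshold_deg=15.0):
    # Angle between the stylus's longitudinal axis and the display-plane
    # normal; the axis is "substantially perpendicular to the plane" when
    # this angle is within the threshold.
    dot = sum(a * b for a, b in zip(stylus_axis, display_normal))
    na = math.sqrt(sum(a * a for a in stylus_axis))
    nb = math.sqrt(sum(b * b for b in display_normal))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    # Fold angles past 90 degrees so tip-vs-tail orientation is ignored.
    angle = min(angle, 180.0 - angle)
    return angle <= threshold_deg

# A stylus tilted 10 degrees off the display normal qualifies; 30 does not.
normal = (0.0, 0.0, 1.0)
tilted_10 = (math.sin(math.radians(10)), 0.0, math.cos(math.radians(10)))
tilted_30 = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
```

In practice the two vectors would come from the stylus's reported orientation and the first device's own orientation sensor, as claim 17 describes, with both expressed in a common reference frame before comparison.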
In one aspect, a first device includes a display, a processor, and a memory accessible to the processor. The memory bears instructions executable by the processor to receive data from an input device pertaining to the orientation of the input device and, at least in part based on the data, determine whether the input device is positioned to provide input to the first device.1. A first device, comprising: a display; a processor; and a memory accessible to the processor and bearing instructions executable by the processor to: receive first data from an input device pertaining to an orientation of the input device; receive second data representing an orientation of the first device relative to the Earth's gravity; and at least in part based on the first and second data, determine whether the input device is positioned to provide input to the first device. 2. The first device of claim 1, wherein the input device is a stylus and wherein the display is a touch-enabled display. 3. The first device of claim 2, wherein the determination whether the input device is positioned to provide input to the first device comprises a determination whether the stylus is positioned to provide input to the touch-enabled display. 4. The first device of claim 3, wherein the determination whether the stylus is positioned to provide input to the touch-enabled display comprises a determination whether a longitudinal axis established by the stylus is at least substantially perpendicular to a plane established by the touch-enabled display. 5. The first device of claim 4, comprising an orientation sensor, wherein data from the orientation sensor is used for the determination whether the longitudinal axis of the stylus is at least substantially perpendicular to the plane established by the touch-enabled display. 6. The first device of claim 5, wherein substantially perpendicular to the plane is within a threshold number of degrees from perpendicular to the plane. 7. 
The first device of claim 1, wherein the instructions are further executable to, in response to a determination that the input device is positioned to provide input to the first device, enable execution of one or more functions at the first device based on input from the input device, the execution of one or more functions based on input from the input device being otherwise disabled. 8. The first device of claim 1, wherein the instructions are further executable to, in response to a determination that the input device is positioned to provide input to the first device, present a user interface (UI) on the display. 9. The first device of claim 8, wherein the UI is a first UI, and wherein a second UI different from the first UI is removed from the display in response to the determination that the input device is positioned to provide input to the first device. 10. The first device of claim 9, wherein the first UI comprises an area for input from the input device, and wherein the second UI comprises a keyboard. 11. The first device of claim 1, wherein the instructions are further executable to receive data from the input device pertaining to the location of the input device, and wherein the determination of whether the input device is positioned to provide input to the first device comprises a determination based on the data from the input device pertaining to the location of the input device of whether the input device is within a threshold distance to the first device. 12. 
A method, comprising: enabling execution of one or more functions at a first device based on input from a stylus in response to a determination that the stylus is oriented to provide input to the first device; disabling execution of the one or more functions at the first device based on the input from the stylus in response to a determination that the stylus is not oriented to provide input to the first device; and presenting on the first device at least one of: a selector element selectable to select use of, in addition to using orientation data, touch sensor data received from an input device to determine whether the stylus is oriented to provide input; a selector element selectable to select use of, in addition to using orientation data, distance data representing a distance between the stylus and the first device to determine whether the stylus is oriented to provide input. 13. The method of claim 12, comprising, in response to the determination that the stylus is not oriented to provide input to the first device, enabling execution of one or more functions at the first device based on touch input from a body part of a person. 14. The method of claim 12, comprising, in response to the determination that the stylus is oriented to provide input to the first device, disabling execution of one or more functions at the first device based on touch input from a body part of a person. 15. The method of claim 12, wherein the determinations are made at least in part based on data received at the first device from the stylus pertaining to whether at least one touch sensor on the stylus detects the presence of a person. 16. The method of claim 12, wherein the determinations are made at least in part based on a determination of whether the distance between the first device and the stylus is at least one of at a distance threshold and within a distance threshold. 17. 
The method of claim 12, wherein the determinations are made based at least in part on data from the stylus received at the first device pertaining to the orientation of the stylus, and wherein the determinations are made based at least in part on data from an orientation sensor on the first device pertaining to the orientation of the first device, the data from the stylus being compared to the data from the orientation sensor to determine whether a longitudinal axis established by the stylus is at least within a threshold number of degrees from perpendicular to a plane established by a display of the first device. 18. The method of claim 12, further comprising: in response to the determination that the stylus is oriented to provide input to the first device, presenting a first user interface (UI) on a touch-enabled display of the first device for receiving input from the stylus; and in response to the determination that the stylus is not oriented to provide input to the first device, presenting a second UI different from the first UI on the touch-enabled display for receiving touch input from the body of a person, the first UI and the second UI not being simultaneously presented. 19. A computer readable storage medium that is not a transient signal, the computer readable storage medium comprising instructions executable by a processor to: receive first data from an input device pertaining to an orientation of the input device; receive second data from an orientation sensor on a first device different from the input device; and at least in part based on the first and second data, determine whether the input device is oriented to provide input to the first device different from the input device. 20. 
The computer readable storage medium of claim 19, wherein the instructions are executable to: in response to a determination that the input device is oriented to provide input to the first device, present a first application on a display for receiving input from the input device; and in response to a determination that the input device is not oriented to provide input to the first device, present a second application on the display different from the first application for receiving input from the body of a person; wherein the first and second applications are not simultaneously presented.
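The orientation test running through claims 4-6 and 17 reduces to comparing the stylus's longitudinal axis against the normal of the display plane and checking whether the angle between them falls within a threshold number of degrees from perpendicular. A minimal sketch of that geometry; the function name, vector representation, and the 15-degree default are illustrative assumptions, not from the filing:

```python
import math

def stylus_oriented_for_input(stylus_axis, display_normal, threshold_deg=15.0):
    """Return True when the stylus's longitudinal axis is within
    threshold_deg of the display's normal, i.e. roughly perpendicular
    to the plane of the touch-enabled display. Vectors are 3-tuples
    and are assumed non-zero."""
    dot = sum(a * b for a, b in zip(stylus_axis, display_normal))
    norm = math.sqrt(sum(a * a for a in stylus_axis)) * math.sqrt(
        sum(b * b for b in display_normal))
    # abs() makes tip-down and tip-up equivalent; clamp guards acos domain
    angle = math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / norm))))
    return angle <= threshold_deg
```

A stylus held straight at the screen (axis parallel to the display normal) passes the test; one lying flat on the screen does not.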
2,600
9,992
9,992
15,953,597
2,698
Image-sensing devices include odd-symmetry gratings that cast interference patterns over a photodetector array. Grating features offer considerable insensitivity to the wavelength of incident light, and also to the manufactured distance between the grating and the photodetector array. Photographs and other image information can be extracted from interference patterns captured by the photodetector array. Images can be captured without a lens, and cameras can be made smaller than those that are reliant on lenses and ray-optical focusing.
1. (canceled) 2. A transmissive phase grating comprising: a plurality of boundaries extending radially over an area, each boundary defined by adjacent and transparent first and second features symmetrically disposed on either side of that boundary; a first filter covering a first portion of the area; and a second filter covering a second portion of the area. 3. The transmissive phase grating of claim 2, wherein the first filter comprises a first polarization filter of a first orientation and the second filter comprises a second polarization filter of a second orientation different from the first orientation. 4. The transmissive phase grating of claim 3, wherein the first orientation is perpendicular to the second orientation. 5. The transmissive phase grating of claim 2, wherein the area is round. 6. The transmissive phase grating of claim 2, wherein the boundaries are curved. 7. The transmissive phase grating of claim 2, wherein the boundaries are discontinuous. 8. The transmissive phase grating of claim 2, each boundary to induce, at a position underlying that boundary, for light in a wavelength band of interest incident the transmissive phase grating and normal to a transverse plane of the grating, a half-wavelength shift of the light through the transparent first feature with respect to the light passing through the adjacent transparent second feature. 9. The transmissive phase grating of claim 2, wherein the boundaries are discontinuous. 10. The transmissive phase grating of claim 2, wherein the first filter surrounds the second filter. 11. The transmissive phase grating of claim 10, wherein the first filter and the second filter are concentric. 12. The transmissive phase grating of claim 10, further comprising a third filter covering a third portion of the area. 13. The transmissive phase grating of claim 12, wherein the third filter surrounds the first filter and the second filter. 14. 
The transmissive phase grating of claim 12, wherein the third filter surrounds the first filter and the second filter. 15. The transmissive phase grating of claim 14, wherein the first filter is a blue filter, the second filter is a green filter surrounding the first filter, and the third filter is a red filter surrounding the green filter. 16. The transmissive phase grating of claim 2, wherein the area is of a plane. 17. An imaging device comprising: a photodiode array; a transmissive phase grating with diverging boundaries extending over an area of the photodiode array, each boundary defined by adjacent and transparent first and second features symmetrically disposed on either side of that boundary; a first filter covering a first portion of the area; and a second filter covering a second portion of the area. 18. The imaging device of claim 17, wherein the first filter comprises a first polarization filter of a first orientation and the second filter comprises a second polarization filter of a second orientation different from the first orientation. 19. The imaging device of claim 18, wherein the first orientation is perpendicular to the second orientation. 20. The imaging device of claim 17, wherein the first filter surrounds the second filter. 21. The imaging device of claim 20, wherein the first filter and the second filter are concentric.
Image-sensing devices include odd-symmetry gratings that cast interference patterns over a photodetector array. Grating features offer considerable insensitivity to the wavelength of incident light, and also to the manufactured distance between the grating and the photodetector array. Photographs and other image information can be extracted from interference patterns captured by the photodetector array. Images can be captured without a lens, and cameras can be made smaller than those that are reliant on lenses and ray-optical focusing.1. (canceled) 2. A transmissive phase grating comprising: a plurality of boundaries extending radially over an area, each boundary defined by adjacent and transparent first and second features symmetrically disposed on either side of that boundary; a first filter covering a first portion of the area; and a second filter covering a second portion of the area. 3. The transmissive phase grating of claim 2, wherein the first filter comprises a first polarization filter of a first orientation and the second filter comprises a second polarization filter of a second orientation different from the first orientation. 4. The transmissive phase grating of claim 3, wherein the first orientation is perpendicular to the second orientation. 5. The transmissive phase grating of claim 2, wherein the area is round. 6. The transmissive phase grating of claim 2, wherein the boundaries are curved. 7. The transmissive phase grating of claim 2, wherein the boundaries are discontinuous. 8. The transmissive phase grating of claim 2, each boundary to induce, at a position underlying that boundary, for light in a wavelength band of interest incident the transmissive phase grating and normal to a transverse plane of the grating, a half-wavelength shift of the light through the transparent first feature with respect to the light passing through the adjacent transparent second feature. 9. 
The transmissive phase grating of claim 2, wherein the boundaries are discontinuous. 10. The transmissive phase grating of claim 2, wherein the first filter surrounds the second filter. 11. The transmissive phase grating of claim 10, wherein the first filter and the second filter are concentric. 12. The transmissive phase grating of claim 10, further comprising a third filter covering a third portion of the area. 13. The transmissive phase grating of claim 12, wherein the third filter surrounds the first filter and the second filter. 14. The transmissive phase grating of claim 12, wherein the third filter surrounds the first filter and the second filter. 15. The transmissive phase grating of claim 14, wherein the first filter is a blue filter, the second filter is a green filter surrounding the first filter, and the third filter is a red filter surrounding the green filter. 16. The transmissive phase grating of claim 2, wherein the area is of a plane. 17. An imaging device comprising: a photodiode array; a transmissive phase grating with diverging boundaries extending over an area of the photodiode array, each boundary defined by adjacent and transparent first and second features symmetrically disposed on either side of that boundary; a first filter covering a first portion of the area; and a second filter covering a second portion of the area. 18. The imaging device of claim 17, wherein the first filter comprises a first polarization filter of a first orientation and the second filter comprises a second polarization filter of a second orientation different from the first orientation. 19. The imaging device of claim 18, wherein the first orientation is perpendicular to the second orientation. 20. The imaging device of claim 17, wherein the first filter surrounds the second filter. 21. The imaging device of claim 20, wherein the first filter and the second filter are concentric.
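Claims 10-15 describe concentric color filters over the grating area: a blue filter at the center, a green filter ring surrounding it, and a red filter ring surrounding both. A small sketch of mapping a point on the grating plane to the filter that covers it; the ring radii, function name, and the fall-through for points outside the area are illustrative assumptions, not from the filing:

```python
def filter_at(x, y, rings=((1.0, "blue"), (2.0, "green"), (3.0, "red"))):
    """Return the color of the concentric filter covering point (x, y).

    `rings` lists (outer_radius, color) pairs from innermost to
    outermost; a point beyond the outermost ring is uncovered (None).
    """
    r = (x * x + y * y) ** 0.5  # radial distance from the common center
    for outer_radius, color in rings:
        if r <= outer_radius:
            return color
    return None  # outside the grating area
```

Because the rings share one center, a single radial comparison suffices; no ring geometry needs to be intersected explicitly.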
2,600
9,993
9,993
15,426,101
2,689
Systems and methods for detecting a security tag. The methods comprise: detecting motion of the security tag while in use to protect an article from unauthorized removal from a protected area; and emitting a first waveform from a radiating device of the security tag in response to the motion's detection. The first waveform is detectable by an Electronic Article Surveillance (“EAS”) monitoring system. The radiating device comprises a device other than an EAS element, a Radio Frequency Identification (“RFID”) device and a Near Field Communication (“NFC”) enabled device.
1. A method for detecting a security tag, comprising: detecting motion of the security tag while in use to protect an article from unauthorized removal from a protected area; and emitting a first waveform from a radiating device of the security tag in response to the motion's detection, the first waveform detectable by an Electronic Article Surveillance (“EAS”) monitoring system; wherein the radiating device comprises a device other than an EAS element, a Radio Frequency Identification (“RFID”) device and a Near Field Communication (“NFC”) enabled device. 2. The method according to claim 1, wherein the radiating device comprises an audio speaker, a piezo, an antenna, a magnetic loop, or a metallic housing. 3. The method according to claim 1, wherein the EAS element is disposed within the security tag. 4. The method according to claim 3, wherein the EAS element is inoperative. 5. The method according to claim 1, wherein the security tag is exclusive of the EAS element. 6. The method according to claim 1, further comprising: performing operations by the security tag to determine if the security tag is still coupled to the article despite having authorization for the security tag's decoupling from the article; selecting a second waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the second waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and emitting the second waveform from the radiating device. 7. 
The method according to claim 1, further comprising: performing operations by the security tag to select a third waveform from a plurality of waveforms based on a characteristic of the article relative to that of other articles, the third waveform being (a) different from the first waveform and (b) detectable by the EAS monitoring system; and emitting the third waveform from the radiating device. 8. The method according to claim 1, further comprising: determining if the security tag is being removed from a protected area without authorization; selecting a fourth waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the fourth waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and emitting the fourth waveform from the radiating device. 9. The method according to claim 1, further comprising discontinuing emitting the first waveform when the motion of the security tag is no longer detected. 10. The method according to claim 1, further comprising discontinuing emitting the first waveform when authorization has been obtained to decouple the security tag from the article or remove the article from the protected area. 11. A security tag, comprising: a sensor configured to detect motion of the security tag while in use to protect an article from unauthorized removal from a protected area; and a radiating device configured to emit a first waveform in response to the motion's detection, the first waveform detectable by an Electronic Article Surveillance (“EAS”) monitoring system; wherein the radiating device comprises a device other than an EAS element, a Radio Frequency Identification (“RFID”) device and a Near Field Communication (“NFC”) enabled device. 12. 
The security tag according to claim 11, wherein the radiating device comprises an audio speaker, a piezo, an antenna, a magnetic loop, or a metallic housing. 13. The security tag according to claim 11, wherein the EAS element is disposed within the security tag. 14. The security tag according to claim 13, wherein the EAS element is inoperative. 15. The security tag according to claim 11, wherein the security tag is exclusive of the EAS element. 16. The security tag according to claim 11, wherein the security tag further comprises: a processing device configured to: determine if the security tag is still coupled to the article despite having authorization for the security tag's decoupling from the article; select a second waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the second waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and wherein the radiating device emits the second waveform. 17. The security tag according to claim 11, wherein the security tag further comprises: a processing device configured to select a third waveform from a plurality of waveforms based on a characteristic of the article relative to that of other articles, the third waveform being (a) different from the first waveform and (b) detectable by the EAS monitoring system; and wherein the radiating device emits the third waveform. 18. 
The security tag according to claim 11, wherein the security tag further comprises: a processing device configured to determine if the security tag is being removed from a protected area without authorization; select a fourth waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the fourth waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and wherein the radiating device emits the fourth waveform. 19. The security tag according to claim 11, wherein the radiating element discontinues emitting the first waveform when the motion of the security tag is no longer detected. 20. The security tag according to claim 11, wherein the radiating element discontinues emitting the first waveform when authorization has been obtained to decouple the security tag from the article or remove the article from the protected area.
Systems and methods for detecting a security tag. The methods comprise: detecting motion of the security tag while in use to protect an article from unauthorized removal from a protected area; and emitting a first waveform from a radiating device of the security tag in response to the motion's detection. The first waveform is detectable by an Electronic Article Surveillance (“EAS”) monitoring system. The radiating device comprises a device other than an EAS element, a Radio Frequency Identification (“RFID”) device and a Near Field Communication (“NFC”) enabled device.1. A method for detecting a security tag, comprising: detecting motion of the security tag while in use to protect an article from unauthorized removal from a protected area; and emitting a first waveform from a radiating device of the security tag in response to the motion's detection, the first waveform detectable by an Electronic Article Surveillance (“EAS”) monitoring system; wherein the radiating device comprises a device other than an EAS element, a Radio Frequency Identification (“RFID”) device and a Near Field Communication (“NFC”) enabled device. 2. The method according to claim 1, wherein the radiating device comprises an audio speaker, a piezo, an antenna, a magnetic loop, or a metallic housing. 3. The method according to claim 1, wherein the EAS element is disposed within the security tag. 4. The method according to claim 3, wherein the EAS element is inoperative. 5. The method according to claim 1, wherein the security tag is exclusive of the EAS element. 6. 
The method according to claim 1, further comprising: performing operations by the security tag to determine if the security tag is still coupled to the article despite having authorization for the security tag's decoupling from the article; selecting a second waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the second waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and emitting the second waveform from the radiating device. 7. The method according to claim 1, further comprising: performing operations by the security tag to select a third waveform from a plurality of waveforms based on a characteristic of the article relative to that of other articles, the third waveform being (a) different from the first waveform and (b) detectable by the EAS monitoring system; and emitting the third waveform from the radiating device. 8. The method according to claim 1, further comprising: determining if the security tag is being removed from a protected area without authorization; selecting a fourth waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the fourth waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and emitting the fourth waveform from the radiating device. 9. The method according to claim 1, further comprising discontinuing emitting the first waveform when the motion of the security tag is no longer detected. 10. 
The method according to claim 1, further comprising discontinuing emitting the first waveform when authorization has been obtained to decouple the security tag from the article or remove the article from the protected area. 11. A security tag, comprising: a sensor configured to detect motion of the security tag while in use to protect an article from unauthorized removal from a protected area; and a radiating device configured to emit a first waveform in response to the motion's detection, the first waveform detectable by an Electronic Article Surveillance (“EAS”) monitoring system; wherein the radiating device comprises a device other than an EAS element, a Radio Frequency Identification (“RFID”) device and a Near Field Communication (“NFC”) enabled device. 12. The security tag according to claim 11, wherein the radiating device comprises an audio speaker, a piezo, an antenna, a magnetic loop, or a metallic housing. 13. The security tag according to claim 11, wherein the EAS element is disposed within the security tag. 14. The security tag according to claim 13, wherein the EAS element is inoperative. 15. The security tag according to claim 11, wherein the security tag is exclusive of the EAS element. 16. The security tag according to claim 11, wherein the security tag further comprises: a processing device configured to: determine if the security tag is still coupled to the article despite having authorization for the security tag's decoupling from the article; select a second waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the second waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and wherein the radiating device emits the second waveform. 17. 
The security tag according to claim 11, wherein the security tag further comprises: a processing device configured to select a third waveform from a plurality of waveforms based on a characteristic of the article relative to that of other articles, the third waveform being (a) different from the first waveform and (b) detectable by the EAS monitoring system; and wherein the radiating device emits the third waveform. 18. The security tag according to claim 11, wherein the security tag further comprises: a processing device configured to determine if the security tag is being removed from a protected area without authorization; select a fourth waveform from a plurality of waveforms when a determination is made that the security tag is still coupled to the article despite having the authorization, the fourth waveform (a) being different from the first waveform, (b) indicating that the security tag is still coupled to the article and (c) being detectable by the EAS monitoring system; and wherein the radiating device emits the fourth waveform. 19. The security tag according to claim 11, wherein the radiating element discontinues emitting the first waveform when the motion of the security tag is no longer detected. 20. The security tag according to claim 11, wherein the radiating element discontinues emitting the first waveform when authorization has been obtained to decouple the security tag from the article or remove the article from the protected area.
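The core behavior the claims describe is simple: emit a first waveform while the tag is in motion, and stop emitting when motion ceases or removal has been authorized (claims 1, 9-10, 19-20). That can be sketched as a small state holder; the class name, attribute, and waveform label below are illustrative, not from the filing:

```python
class SecurityTag:
    """Minimal sketch of the claimed tag behavior: the radiating
    device emits a waveform in response to detected motion, and
    discontinues when motion stops or removal is authorized."""

    def __init__(self):
        self.emitting = None  # name of the waveform currently emitted

    def on_motion_update(self, moving, authorized=False):
        """Update the emission state from one motion-sensor reading."""
        if moving and not authorized:
            self.emitting = "first_waveform"
        else:
            # motion no longer detected, or decoupling/removal authorized
            self.emitting = None
        return self.emitting
```

The additional second/third/fourth waveforms of claims 6-8 would slot in here as alternative values of `emitting`, selected by the same kind of condition checks.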
2,600
9,994
9,994
14,301,556
2,626
An aspect provides a method, including: receiving, from at least one detector, data input associated with a position of a user with respect to an information handling device; determining, using a processor, the position of the user with respect to the information handling device using the data input; and displaying, using a display, a user position based input modality based on the position that has been determined. Other aspects are described and claimed.
1. A method, comprising: receiving, from at least one detector, data input associated with a position of a user with respect to an information handling device; determining, using a processor, the position of the user with respect to the information handling device using the data input; and displaying, using a display, a user position based input modality based on the position that has been determined. 2. The method of claim 1, wherein the at least one detector is selected from the group consisting of: a camera, a touch sensor, a proximity sensor, an accelerometer, and a gyroscope. 3. The method of claim 1, wherein the determining the position of the user comprises identifying the orientation of the information handling device. 4. The method of claim 1, wherein the determining the position of the user comprises identifying a hand position of the user with respect to the information handling device. 5. The method of claim 4, wherein the displaying of the user position based input modality comprises displaying a user position based input modality located at a position on the display associated with the hand position. 6. The method of claim 1, further comprising determining a currently active application on the information handling device. 7. The method of claim 6, wherein the displaying a user position based input modality further comprises using the currently active application to adjust a displayed position of the user position based input modality. 8. The method of claim 1, further comprising displaying a user interface allowing the user to manually select the user position based input modality. 9. The method of claim 1, wherein the displaying of the user position based input modality comprises displaying the user position based input modality based upon previous user selections. 10. 
The method of claim 1, wherein the user position based input modality is selected from the group consisting of: a right-handed keyboard, a left-handed keyboard, a two-handed keyboard, a swipe keyboard, a track pad, and a numeric keypad. 11. An information handling device, comprising: at least one detector; a display; at least one processor operatively coupled to the display and the at least one detector; and a memory storing instructions that are executable by the processor to: receive, from the at least one detector, data input associated with a position of a user with respect to an information handling device; determine the position of the user with respect to the information handling device using the data input; and display, using the display, a user position based input modality based on the position that has been determined. 12. The information handling device of claim 11, wherein the at least one detector is selected from the group consisting of: a camera, a touch sensor, a proximity sensor, an accelerometer, and a gyroscope. 13. The information handling device of claim 11, wherein to determine the position of the user comprises identifying the orientation of the information handling device. 14. The information handling device of claim 11, wherein to determine the position of the user comprises identifying a hand position of the user with respect to the information handling device. 15. The information handling device of claim 14, wherein to display the user position based input modality comprises displaying a user position based input modality located at a position on the display associated with the hand position. 16. The information handling device of claim 11, wherein the instructions are further executable by the at least one processor to determine a currently active application on the information handling device. 17. 
The information handling device of claim 16, wherein to display a user position based input modality further comprises using the currently active application to adjust a displayed position of the user position based input modality. 18. The information handling device of claim 11, wherein the instructions are further executable by the at least one processor to display a user interface allowing the user to manually select the user position based input modality. 19. The information handling device of claim 11, wherein the user position based input modality is selected from the group consisting of: a right-handed keyboard, a left-handed keyboard, a two-handed keyboard, a SWYPE keyboard, a track pad, and a numeric keypad. 20. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that receives, from at least one detector, data input associated with a position of a user with respect to an information handling device; code that determines, using a processor, the position of the user with respect to the information handling device using the data input; and code that displays, using a display, a user position based input modality based on the position that has been determined.
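The method of claims 1-10 above (detect the user's position, then display a matching input modality) can be sketched as a small decision function. Everything here is an illustrative assumption — the function name, the string-valued inputs, and the simple selection rules are not taken from the patent, which does not disclose source code.

```python
# Hypothetical sketch of the claimed method: map detector-derived inputs
# (hand position, device orientation, active application) to a displayed
# input modality and a screen-side anchor. All rules are assumptions.

def select_input_modality(hand_position, orientation, active_app=None):
    """Return a (modality, screen_side) pair.

    hand_position: "left", "right", or "both" (e.g. from touch/camera data)
    orientation:   "portrait" or "landscape" (e.g. from an accelerometer)
    active_app:    optional app name used to adjust the modality (claim 7)
    """
    if active_app == "calculator":
        # app-specific adjustment: a numeric task gets a numeric keypad
        return ("numeric_keypad", hand_position)
    if hand_position == "both":
        return ("two_handed_keyboard", "center")
    if orientation == "landscape" and hand_position in ("left", "right"):
        # anchor a one-handed keyboard at the detected hand (claims 4-5)
        return (hand_position + "_handed_keyboard", hand_position)
    return ("swipe_keyboard", "center")
```

In this sketch the return value's second element stands in for "a position on the display associated with the hand position" from claim 5; a real implementation would translate it into display coordinates.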
An aspect provides a method, including: receiving, from at least one detector, data input associated with a position of a user with respect to an information handling device; determining, using a processor, the position of the user with respect to the information handling device using the data input; and displaying, using a display, a user position based input modality based on the position that has been determined. Other aspects are described and claimed.
2,600
9,995
9,995
15,780,193
2,621
The invention relates to a vehicle having a detection device for detecting a property of a vehicle occupant and an operator control and display device which is coupled to the detection device, has a display region and is configured to display information in the display region and by means of which a symbol can be displayed in the display region as a function of the sensed property. It is provided here that the property which is detected by means of the detection device is a position of the vehicle occupant, and the operator control and display device is configured to display the symbol in a partial region of the display region which is determined as a function of the detected position, and to assign the symbol to the vehicle occupant in such a way that, when there is a change in position of the vehicle occupant, the symbol follows the vehicle occupant. The invention also relates to a method for operating such a vehicle. The invention enables particularly flexible use of the vehicle.
1-11. (canceled) 12. A vehicle, comprising: a recognition device configured to detect a position of a vehicle occupant; and an operator control and display apparatus, coupled to the recognition device, having a display region configured to present information items in the display region, including a symbol presentable in a portion of the display region determined depending on the position detected by the recognition device, to assign the symbol to the vehicle occupant and to change the portion of the display region where the symbol is displayed when the vehicle occupant changes position. 13. The vehicle as claimed in claim 12, further comprising a windowpane arrangement providing, at least in regions, at least part of the display region. 14. The vehicle as claimed in claim 12, wherein the display region, at least in regions, includes at least one of an active screen and an operator control element operable by touch. 15. The vehicle as claimed in claim 12, wherein the symbol is an operator control element, and wherein the operator control and display apparatus responds to operation of the symbol by presentation of the information items in the display region. 16. The vehicle as claimed in claim 15, wherein the information items presented in the display region following the operation of the symbol are personalized for the vehicle occupant performing the operation. 17. The vehicle as claimed in claim 12, further comprising a windowpane arrangement with a frame part, and wherein the display region for presenting the symbol is arranged at the frame part at least partly encompassing the windowpane arrangement of the vehicle. 18. The vehicle as claimed in claim 12, wherein the recognition device is configured to simultaneously recognize respective positions of a plurality of vehicle occupants, and wherein the operator control and display apparatus simultaneously presents a respective symbol in a respective portion of the display region determined depending on the position. 
19. The vehicle as claimed in claim 18, wherein the operator control and display apparatus is configured to respectively assign at least one respective symbol to each respective vehicle occupant and to move the respective portion of the display region where the respective symbol is displayed to follow change in the position of the respective vehicle occupant. 20. The vehicle as claimed in claim 12, wherein a plurality of vehicle occupants are identifiable by the recognition device. 21. A method for operating a vehicle having a recognition device for recognizing a position of at least one vehicle occupant and an operator control and display apparatus, coupled to the recognition device, with a display region in which information is presentable, comprising: continuously recognizing a respective position of the at least one vehicle occupant by the recognition device as a property of the at least one vehicle occupant; presenting a respective symbol assigned to the vehicle occupant by the operator control and display apparatus in a respective portion of the display region determined depending on the respective position of the at least one vehicle occupant, the respective symbol being assigned to the at least one vehicle occupant; and changing the respective portion of the display region where the respective symbol is displayed when the at least one vehicle occupant changes position. 22. The method as claimed in claim 21, wherein the respective symbol is an operator control element and the information is presented in the display region following an operation of the respective symbol by the at least one vehicle occupant. 23. The method as claimed in claim 21, wherein the information presented in the display region following the operation of the symbol is personalized for the at least one vehicle occupant performing the operation. 24. 
The method as claimed in claim 21, wherein said recognizing simultaneously recognizes respective positions of a plurality of vehicle occupants, and wherein said presenting simultaneously presents the respective symbol in the respective portion of the display region determined depending on the respective position. 25. The method as claimed in claim 24, further comprising: respectively assigning at least one respective symbol to each respective vehicle occupant; and moving the respective portion of the display region where the respective symbol is displayed to follow change in the position of the respective vehicle occupant. 26. The method as claimed in claim 21, further comprising identifying a plurality of vehicle occupants.
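The "symbol follows the occupant" behavior claimed above (claims 12, 18-19, 21) can be sketched as a tracker that maps each recognized occupant to a portion of the display region and moves that portion as the occupant's detected position changes. The class and method names, and the one-dimensional coordinate model, are assumptions made for this sketch only.

```python
# Illustrative sketch: each occupant gets an assigned symbol, and the
# symbol's display portion follows the occupant's detected position.
# A single horizontal coordinate stands in for the display region.

class SymbolTracker:
    def __init__(self, display_width):
        self.display_width = display_width
        self.symbols = {}  # occupant_id -> symbol x-position on the display

    def update(self, occupant_id, occupant_x):
        """Place (or move) the occupant's symbol at the portion of the
        display region nearest the detected position, clamped to the
        display bounds. Handles multiple occupants simultaneously."""
        x = max(0, min(self.display_width, occupant_x))
        self.symbols[occupant_id] = x
        return x
```

Calling `update` repeatedly for the same occupant models the continuous recognition in claim 21: the stored symbol position simply tracks the latest detected position, and independent occupant IDs model the simultaneous multi-occupant case of claims 18 and 24.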
2,600
9,996
9,996
14,507,238
2,652
Agents of a contact center are trained and assessed without the need for a separate testing and assessment task. Work items are provided to agents who are non-primary agents with respect to a particular skill associated with an attribute of a work item. With the controlled routing of the non-primary work items to the non-primary agent, the agent is provided with a chance to practice their non-primary skills, with the intention of improving those skills. A number of successfully completed tasks may indicate that the agent is entitled to a “primary” designation and may then be provided with tasks having the attribute in the normal course of business.
1. An electronic system, comprising: a work item categorization module configured to access a work item of a contact center, determine an attribute of the work item, and determine eligibility for the work item to be a training work item; a routing engine configured to, upon the work item being determined to be eligible to be a training work item: select an agent to be an agent under test from a pool of agents, wherein the selected agent has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level; route the work item to the agent under test for processing by the agent under test; an evaluation module configured to evaluate the performance of the agent under test in processing of the work item, wherein the evaluation is associated with the attribute; and a reporting module to report indicia of the evaluated performance of the agent under test performed by the evaluation module to a report receiving component of the contact center. 2. The system of claim 1, wherein the routing engine is further configured to, upon the work item not being determined eligible to be a training work item, select a primary agent and route the work item to the selected primary agent for processing by the primary agent and not for processing by the agent under test. 3. The system of claim 1, wherein the evaluation module evaluates the performance of the agent under test in processing the work item and wherein the evaluation is further associated with an aspect of the work item other than the attribute. 4. The system of claim 1, wherein the routing engine selects an agent under test that has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level by failing to successfully process a prior work item having the attribute. 5. 
The system of claim 1, wherein the work item categorization module is configured to determine eligibility for the work item to be a training work item upon determining the work item is not associated with a customer that is further associated with a previous work item that was routed to an agent under test within a previously determined time. 6. The system of claim 1, wherein the work item categorization module is configured to determine eligibility for the work item to be a training work item upon determining the work item is not associated with a high-value customer of the contact center. 7. The system of claim 1, wherein the routing engine selects the agent under test in accord with a previously determined test frequency for the agent under test to receive work items that are determined to be training work items. 8. The system of claim 7, wherein the frequency is modified upon the evaluation module determining whether the agent under test successfully or unsuccessfully processed the work item. 9. The system of claim 1, wherein the routing engine selects the agent under test in accord with a previously determined test volume for the agent under test to receive a number of work items that are determined to be training work items until the previously determined test volume has been processed by the agent under test. 10. The system of claim 1, wherein the reporting module is operable to, upon the evaluation module determining that the agent under test has successfully processed the work item, cause the proficiency level of the agent under test associated with the attribute to be incremented. 11. 
A non-transitory computer readable medium with instructions thereon that when read by a computer cause the computer to perform: accessing a work item of a contact center; determining an attribute of the work item; determining eligibility for the work item to be a training work item; upon the work item being determined to be eligible to be a training work item: selecting an agent to be an agent under test from a pool of agents, wherein the selected agent has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level; routing the work item to the agent under test for processing by the agent under test; evaluating the performance of the agent under test in processing of the work item, wherein the evaluation is associated with the attribute; and reporting indicia of the evaluated performance of the agent under test to a report receiving component of the contact center. 12. The non-transitory medium of claim 11, further comprising instructions to, upon the work item not being determined eligible to be a training work item, selecting a primary agent and routing the work item to the selected primary agent for processing by the primary agent and not for processing by the agent under test. 13. The non-transitory medium of claim 11, wherein the instructions to evaluate the performance of the agent under test in processing the work item further comprise instructions to evaluate the agent under test with regard to an aspect of the work item other than the attribute. 14. The non-transitory medium of claim 11, wherein the instructions for selecting the agent under test further comprise instructions to select the agent under test having previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level by failing to successfully process a prior work item having the attribute. 15. 
The non-transitory medium of claim 11, wherein the instructions to determine the eligibility for the work item to be a training work item further comprise instructions to determine the work item is not associated with a customer that is further associated with a previous work item that was routed to an agent under test within a previously determined time. 16. The non-transitory medium of claim 11, wherein the instructions for selecting the agent under test further comprise instructions to select the agent under test in accord with a previously determined test frequency for the agent under test to receive work items that are determined to be training work items and modifying the frequency upon determining whether the agent under test successfully or unsuccessfully processed the work item. 17. A server configured to: access a work item of a contact center; determine an attribute of the work item; determine eligibility for the work item to be a training work item; upon the work item being determined to be eligible to be a training work item: select an agent to be an agent under test from a pool of agents, wherein the selected agent has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level; route the work item to the agent under test for processing by the agent under test; evaluate the performance of the agent under test in processing of the work item, wherein the evaluation is associated with the attribute; and report indicia of the evaluated performance of the agent under test to a report receiving component of the contact center. 18. The server of claim 17, further comprising logic to, upon the work item not being determined eligible to be a training work item, select a primary agent and route the work item to the selected primary agent for processing by the primary agent and not for processing by the agent under test. 19. 
The server of claim 17, wherein the logic to evaluate the performance of the agent under test in processing the work item further comprises logic to evaluate the agent under test with regard to an aspect of the work item other than the attribute. 20. The server of claim 17, further comprising logic to, upon determining that the agent under test has successfully processed the work item, cause the proficiency level of the agent under test associated with the attribute to be incremented.
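The routing flow claimed above (determine eligibility, route an eligible item to a below-primary-proficiency agent, and raise that agent's proficiency on success) can be sketched in a few lines. The threshold value, field names, and the single eligibility rule used here are illustrative assumptions, not the patented implementation, which describes several additional eligibility and frequency conditions.

```python
# Hedged sketch of the claimed training-routing flow. Eligible work items
# go to an agent whose proficiency with the item's attribute is below the
# primary level; success increments that agent's proficiency (claim 10).

PRIMARY_LEVEL = 5  # assumed primary agent status proficiency level

def route_work_item(item, agents):
    """Return (agent, is_training) for a contact-center work item.

    item:   dict with an "attribute" key and an optional
            "high_value_customer" flag (eligibility rule of claim 6)
    agents: list of dicts with "name" and a per-attribute "proficiency" map
    """
    attr = item["attribute"]
    eligible = not item.get("high_value_customer", False)
    if eligible:
        trainees = [a for a in agents
                    if a["proficiency"].get(attr, 0) < PRIMARY_LEVEL]
        if trainees:
            return trainees[0], True
    # ineligible items go to a primary agent instead (claim 2)
    primaries = [a for a in agents
                 if a["proficiency"].get(attr, 0) >= PRIMARY_LEVEL]
    return primaries[0], False

def record_result(agent, attribute, success):
    """On a successful training work item, increment the agent's
    proficiency for the attribute."""
    if success:
        agent["proficiency"][attribute] = (
            agent["proficiency"].get(attribute, 0) + 1)
```

Repeated successful calls to `record_result` eventually push an agent's proficiency past `PRIMARY_LEVEL`, after which `route_work_item` treats the agent as primary — mirroring the "primary designation" progression described in the abstract.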
Agents of a contact center are trained and assessed without the need for a separate testing and assessment task. Work items are provided to agents, who are non-primary agents with respect to a particular skill associated with an attribute of a work item. With the controlled routing of the non-primary work items to the non-primary agent, the agent is provided with a chance to practice their non-primary skills, with the intention of improving said skills A number of successfully completed tasks may indicate the agent is entitled to “primary” designation and be provided with tasks having the attribute in the normal course of business.1. An electronic system, comprising: a work item categorization module configured to access a work item of a contact center, determine an attribute of the work item, and determine eligibility for the work item to be a training work item; a routing engine configured to, upon the work item being determined to be eligible to be a training work item: select an agent to be an agent under test from a pool of agents, wherein the selected agent has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level; route the work item to the agent under test for processing by the non-primary agent; an evaluation module configured to evaluate the performance of the agent under test in processing of the work item, wherein the evaluation is associated with the attribute; and a reporting module to report indicia of the evaluated performance of the agent under test performed by the evaluation module to a report receiving component of the contact center. 2. The system of claim 1, wherein the routing engine is further configured to, upon the work item not being determined eligible to be a training work item, selecting a primary agent and routing the work item to the selected primary agent for processing by the primary agent and not for processing by the agent under test. 3. 
The system of claim 1, wherein the evaluation module evaluates the performance of the agent under test in processing the work item and wherein the evaluation is further associated with an aspect of the work item other than the attribute. 4. The system of claim 1, wherein the routing selects the agent under test has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level by failing to successfully process a prior work item having the attribute. 5. The system of claim 1, wherein the work item categorization module configured determine eligibility for the work item to be a training work item upon determining the work item is not associated with a customer that is further a associated with a previous work item that was routed to an agent under test within a previously determined time. 6. The system of claim 1, wherein the work item categorization module configured determine eligibility for the work item to be a training work item upon determining the work item has is not associated with a high-value customer of the contact center. 7. The system of claim 1, wherein the routing engine selects the agent under test in accord with a previously determined test frequency for the agent under test to receive work items that are determined to be training work items. 8. The system of claim 7, wherein the frequency is modified upon the evaluation module determining whether the agent under test successfully or unsuccessfully processed the work item. 9. The system of claim 1, wherein the routing engine selects the agent under test in accord with a previously determined test volume for the agent under test to receive a number of work items that are determined to be training work items until the previously determined test volume has been processed by the agent under test. 10. 
The system of claim 1, wherein the reporting module is operable to, upon the evaluation module determining that the agent under test has successfully processed the work item, cause the proficiency level of the agent under test associated with the attribute to be incremented. 11. A non-transitory computer readable medium with instructions thereon that when read by a computer cause the computer to perform: accessing a work item of a contact center; determining an attribute of the work item; determine eligibility for the work item to be a training work item; upon the work item being determined to be eligible to be a training work item: selecting an agent to be an agent under test from a pool of agents, wherein the selected agent has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level; routing the work item to the agent under test for processing by the non-primary agent; evaluating the performance of the agent under test in processing of the work item, wherein the evaluation is associated with the attribute; and reporting indicia of the evaluated performance of the agent under test performed by the evaluation module to a report receiving component of the contact center. 12. The non-transitory medium of claim 11, further comprising instructions to, upon the work item not being determined eligible to be a training work item, selecting a primary agent and routing the work item to the selected primary agent for processing by the primary agent and not for processing by the agent under test. 13. The non-transitory medium of claim 11, wherein the instructions to evaluate the performance of the agent under test in processing the work item further comprise instructions to evaluate the agent under test with regard to an aspect of the work item other than the attribute. 14. 
The non-transitory medium of claim 11, wherein the instructions for selecting the agent under test further comprise instructions to select the agent under test having previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level by failing to successfully process a prior work item having the attribute. 15. The non-transitory medium of claim 11, wherein the instructions to determine the eligibility for the work item to be a training work item further comprise instructions to determine the work item is not associated with a customer that is further associated with a previous work item that was routed to an agent under test within a previously determined time. 16. The non-transitory medium of claim 11, wherein the instructions for selecting the agent under test further comprise instructions to select the agent under test in accord with a previously determined test frequency for the agent under test to receive work items that are determined to be training work items and to modify the frequency upon determining whether the agent under test successfully or unsuccessfully processed the work item. 17. 
A server configured to: access a work item of a contact center; determine an attribute of the work item; determine eligibility for the work item to be a training work item; upon the work item being determined to be eligible to be a training work item: select an agent to be an agent under test from a pool of agents, wherein the selected agent has previously been determined to have a proficiency with the attribute that is below a primary agent status proficiency level; route the work item to the agent under test for processing by the agent under test; evaluate the performance of the agent under test in processing of the work item, wherein the evaluation is associated with the attribute; and report indicia of the evaluated performance of the agent under test to a report receiving component of the contact center. 18. The server of claim 17, further comprising logic to, upon the work item not being determined eligible to be a training work item, select a primary agent and route the work item to the selected primary agent for processing by the primary agent and not for processing by the agent under test. 19. The server of claim 17, wherein the logic to evaluate the performance of the agent under test in processing the work item further comprises logic to evaluate the agent under test with regard to an aspect of the work item other than the attribute. 20. The server of claim 17, further comprising logic to, upon determining that the agent under test has successfully processed the work item, cause the proficiency level of the agent under test associated with the attribute to be incremented.
2,600
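The routing and evaluation flow recited in the claims above (eligibility check, selection of an agent under test whose attribute proficiency is below the primary level, and a proficiency increment on successful processing) can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the threshold value, field names, and helper functions are all assumptions.

```python
from dataclasses import dataclass, field

PRIMARY_LEVEL = 5  # hypothetical proficiency threshold for primary-agent status


@dataclass
class Agent:
    name: str
    proficiency: dict = field(default_factory=dict)  # attribute -> level


def eligible(work_item):
    # Per the claims: not a high-value customer, and not a customer whose
    # previous work item was recently routed to an agent under test.
    return not work_item.get("high_value") and not work_item.get("recent_trainee_contact")


def route(work_item, agents):
    attr = work_item["attribute"]
    if eligible(work_item):
        # Select an agent under test: proficiency with the attribute below primary level.
        trainees = [a for a in agents if a.proficiency.get(attr, 0) < PRIMARY_LEVEL]
        if trainees:
            return min(trainees, key=lambda a: a.proficiency.get(attr, 0))
    # Otherwise route to a primary agent, not to an agent under test.
    primaries = [a for a in agents if a.proficiency.get(attr, 0) >= PRIMARY_LEVEL]
    return primaries[0] if primaries else None


def evaluate(agent, attr, success):
    # On successful processing, increment the attribute proficiency (claims 10 and 20).
    if success:
        agent.proficiency[attr] = agent.proficiency.get(attr, 0) + 1
```

A training work item thus reaches the least-proficient eligible trainee, while ineligible items (e.g. high-value customers) bypass training entirely.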
9,997
9,997
14,022,774
2,646
A method and system for transliteration of a textual message are provided. The method includes receiving, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determining if the first character set is supported by the second mobile device; determining a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterating the textual message to the second character set; and sending the transliterated textual message to a second network texting element.
1. A computerized method of transliteration of a textual message, comprising: receiving, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determining if the first character set is supported by the second mobile device; determining a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterating the textual message to the second character set; and sending the transliterated textual message to a second network texting element. 2. The method of claim 1, wherein the first mobile device and the second mobile device are the same device. 3. The method of claim 1, wherein at least a portion of the transliteration takes place on one of: the first mobile device and the second mobile device. 4. The method of claim 1, wherein determining if the first character set is supported by the second mobile device further comprises: checking a database maintaining supported character sets respective of at least one of: a list of mobile device models, a list of mobile subscribers, and a list of phone numbers. 5. The method of claim 1, wherein determining if the first character set is supported by the second mobile device further comprises interrogating the second mobile device respective of at least a supported character set. 6. The method of claim 5, wherein the interrogation comprises: sending a SMS message to the second mobile device and requesting identification of at least a supported character set. 7. The method of claim 1, wherein transliterating the textual message further comprises: performing one-to-one mapping of each character provided in the textual message from the first character set to the second character set; and confirming accuracy of the transliterated textual message. 8. 
The method of claim 1, wherein the network texting element is at least one of: a short message service center (SMSC) and an unstructured supplementary service data (USSD) gateway. 9. The method of claim 1, wherein the first and second network texting elements are the same. 10. The method of claim 1, wherein the first mobile device includes any one of: a SMS server and a SMS gateway. 11. The method of claim 1, wherein the textual message is provided using any one of: short message system (SMS) messaging, unstructured supplementary service data (USSD), and text. 12. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1. 13. A system for performing transliteration of a textual message, comprising: a processing unit; an interface to a network communicatively coupled to the processing unit, for communicating with at least a first device and a second device; and a memory communicatively coupled to the processing unit, the memory containing instructions that when executed by the processing unit configure the system to: receive, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determine if the first character set is supported by the second mobile device; determine a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterate the textual message to the second character set; and send the transliterated textual message to a second network texting element. 14. The system of claim 13, wherein the first device and the second device are the same device. 15. The system of claim 13, wherein at least a portion of the transliteration takes place on one of: the first device and the second device. 16. 
The system of claim 13, wherein the network comprises at least a cellular network for communication between the first device and the second device. 17. The system of claim 13, wherein the textual message is provided using any one of: short message system (SMS) messaging, unstructured supplementary service data (USSD), and text. 18. The system of claim 13, wherein the system is further configured to: check a database maintaining supported character sets respective of at least one of: a list of mobile device models, a list of mobile subscribers, and a list of phone numbers. 19. The system of claim 13, wherein the system is further configured to interrogate the second mobile device about its respective supported character sets by sending an SMS message to the second mobile device and requesting identification of at least a supported character set. 20. The system of claim 13, wherein the system is further configured to: perform one-to-one mapping of each character provided in the textual message from the first character set to the second character set; and confirm at least validity and accuracy of the transliterated textual message. 21. The system of claim 13, wherein the network texting element is at least one of: short message service center (SMSC) and an unstructured supplementary service data (USSD) gateway. 22. The system of claim 21, wherein the system is implemented in the network texting element. 23. The system of claim 22, wherein the first and second network texting elements are the same. 24. The system of claim 22, wherein the first mobile device includes any one of: a SMS server and a SMS gateway.
A method and system for transliteration of a textual message are provided. The method includes receiving, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determining if the first character set is supported by the second mobile device; determining a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterating the textual message to the second character set; and sending the transliterated textual message to a second network texting element. 1. A computerized method of transliteration of a textual message, comprising: receiving, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determining if the first character set is supported by the second mobile device; determining a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterating the textual message to the second character set; and sending the transliterated textual message to a second network texting element. 2. The method of claim 1, wherein the first mobile device and the second mobile device are the same device. 3. The method of claim 1, wherein at least a portion of the transliteration takes place on one of: the first mobile device and the second mobile device. 4. The method of claim 1, wherein determining if the first character set is supported by the second mobile device further comprises: checking a database maintaining supported character sets respective of at least one of: a list of mobile device models, a list of mobile subscribers, and a list of phone numbers. 5. 
The method of claim 1, wherein determining if the first character set is supported by the second mobile device further comprises interrogating the second mobile device respective of at least a supported character set. 6. The method of claim 5, wherein the interrogation comprises: sending a SMS message to the second mobile device and requesting identification of at least a supported character set. 7. The method of claim 1, wherein transliterating the textual message further comprises: performing one-to-one mapping of each character provided in the textual message from the first character set to the second character set; and confirming accuracy of the transliterated textual message. 8. The method of claim 1, wherein the network texting element is at least one of: a short message service center (SMSC) and an unstructured supplementary service data (USSD) gateway. 9. The method of claim 1, wherein the first and second network texting elements are the same. 10. The method of claim 1, wherein the first mobile device includes any one of: a SMS server and a SMS gateway. 11. The method of claim 1, wherein the textual message is provided using any one of: short message system (SMS) messaging, unstructured supplementary service data (USSD), and text. 12. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1. 13. 
A system for performing transliteration of a textual message, comprising: a processing unit; an interface to a network communicatively coupled to the processing unit, for communicating with at least a first device and a second device; and a memory communicatively coupled to the processing unit, the memory containing instructions that when executed by the processing unit configure the system to: receive, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determine if the first character set is supported by the second mobile device; determine a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterate the textual message to the second character set; and send the transliterated textual message to a second network texting element. 14. The system of claim 13, wherein the first device and the second device are the same device. 15. The system of claim 13, wherein at least a portion of the transliteration takes place on one of: the first device and the second device. 16. The system of claim 13, wherein the network comprises at least a cellular network for communication between the first device and the second device. 17. The system of claim 13, wherein the textual message is provided using any one of: short message system (SMS) messaging, unstructured supplementary service data (USSD), and text. 18. The system of claim 13, wherein the system is further configured to: check a database maintaining supported character sets respective of at least one of: a list of mobile device models, a list of mobile subscribers, and a list of phone numbers. 19. 
The system of claim 13, wherein the system is further configured to interrogate the second mobile device about its respective supported character sets by sending an SMS message to the second mobile device and requesting identification of at least a supported character set. 20. The system of claim 13, wherein the system is further configured to: perform one-to-one mapping of each character provided in the textual message from the first character set to the second character set; and confirm at least validity and accuracy of the transliterated textual message. 21. The system of claim 13, wherein the network texting element is at least one of: short message service center (SMSC) and an unstructured supplementary service data (USSD) gateway. 22. The system of claim 21, wherein the system is implemented in the network texting element. 23. The system of claim 22, wherein the first and second network texting elements are the same. 24. The system of claim 22, wherein the first mobile device includes any one of: a SMS server and a SMS gateway.
2,600
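The transliteration flow claimed above (claims 1, 4, and 7: look up the destination device's supported character sets, and fall back to a one-to-one character mapping when the original set is unsupported) can be sketched as follows. The mapping table, device registry, and function names are hypothetical illustrations; a real deployment would use complete mapping tables and an SMSC-side subscriber database.

```python
# Hypothetical one-to-one mapping table; a production system would carry full
# tables per source/target character-set pair.
CYRILLIC_TO_LATIN = {"п": "p", "р": "r", "и": "i", "в": "v", "е": "e", "т": "t"}

# Hypothetical database of supported character sets per device model (claim 4).
DEVICE_CHARSETS = {"handset-a": {"UCS2", "GSM7"}, "handset-b": {"GSM7"}}


def supports(device, charset):
    # Check the database of supported character sets for the destination device.
    return charset in DEVICE_CHARSETS.get(device, set())


def transliterate(message, mapping):
    # One-to-one mapping of each character (claim 7); unmapped characters pass through.
    return "".join(mapping.get(ch, ch) for ch in message)


def deliver(message, charset, device):
    if supports(device, charset):
        return message  # destination supports the original character set
    # Fall back: transliterate to a character set the device does support.
    return transliterate(message, CYRILLIC_TO_LATIN)
```

For example, a Cyrillic message destined to a device that only supports the GSM 7-bit set would be delivered as its Latin transliteration, while a UCS-2-capable device receives it unchanged.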
9,998
9,998
15,286,161
2,618
Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
1.-22. (canceled) 23. A user-carried or user-worn apparatus for tracking objects, the apparatus comprising: one or more optical sensors for capturing images of a scene, wherein the images include a wide field of view image of the scene and a narrow field of view image of the scene; a display for displaying a displayed view of the scene; and a processor coupled to a storage medium, the storage medium storing processor-executable instructions, which when executed by the processor, performs a method comprising: geolocating the one or more optical sensors and a location of one or more objects depicted in the captured images based on a location of the one or more optical sensors; recognizing an object of interest in the wide field of view image; recognizing the object of interest in the narrow field of view image; correlating a location of the recognized object of interest in the wide field of view image and the narrow field of view image; and enhancing the displayed view of the scene on the display based on the geolocating of the one or more objects, wherein the enhanced, displayed view comprises overlay content for the recognized object of interest. 24. The apparatus of claim 23, wherein the method performed by the processor based on the processor-executable instructions further comprises: broadcasting the tracked location of the one or more objects and receiving tracked location information of objects outside a field of view of the one or more optical sensors, the received, tracked location information being received from other apparatuses. 25. 
The apparatus of claim 23, wherein the user-carried or user-worn apparatus is configured to exchange tracked location information of the one or more objects with at least one other user-carried or user-worn apparatus and enhance the displayed view of the scene on the display based on the geolocating of the one or more objects by the apparatus and also based on tracked location information received from the at least one other user-carried or user-worn apparatus. 26. The apparatus of claim 23, wherein the apparatus is in communication with one or more similar apparatuses such that users of the apparatuses utilize the apparatuses in a gaming activity in a physical area. 27. The apparatus of claim 23, wherein when there are multiple instances of the apparatus in remote locations, geolocation is performed with respect to all of the sensors included in the multiple instances of the apparatus. 28. The apparatus of claim 23, wherein the enhancing of the displayed view further comprises augmenting the displayed view to indicate location of the one or more objects outside the field of view of the one or more optical sensors. 29. The apparatus of claim 28, wherein the augmentation of the displayed view includes insertion of one or more markers of objects of interest. 30. The apparatus of claim 28, wherein the augmentation of the displayed view includes insertion of one or more indicators associated with one or more objects of interest. 31. The apparatus of claim 30, wherein the one or more indicators provide direction to the one or more objects of interest. 32. The apparatus of claim 31, wherein the one or more indicators provide direction to the one or more objects of interest outside the current field of view of the one or more optical sensors. 33. 
The apparatus of claim 23, wherein the correlating is performed by tracking the location of the recognized object of interest during transition from the wide field of view image to the narrow field of view image. 34. The apparatus of claim 33, wherein the overlay content for the recognized object of interest is scaled in accordance with the tracking of the location of the recognized object of interest from the wide field of view to the narrow field of view. 35. The apparatus of claim 23, wherein the method performed by the processor based on the processor-executable instructions further comprises: maintaining the enhanced, displayed view consistent in real-time with determined location of the recognized object of interest as a user of the apparatus relocates the apparatus. 36. The apparatus of claim 23, wherein the method performed by the processor based on the processor-executable instructions further comprises: inserting objects into the enhanced, displayed view based on geographic data that indicates that the inserted object is to be occluded by another object in the enhanced, displayed view. 37. The apparatus of claim 23, wherein enhancing the displayed view comprises overlaying geographically located information from external sources onto the displayed view. 38. 
A method for tracking objects by a user-carried or user-worn apparatus, the method comprising: capturing, using one or more optical sensors, images of a scene, wherein the images include a wide field of view image of the scene and a narrow field of view image of the scene; geolocating the one or more optical sensors, and a location of one or more objects depicted in the captured images based on a location of the one or more optical sensors; recognizing an object of interest in the wide field of view image; recognizing the object of interest in the narrow field of view image; correlating a location of the recognized object of interest in the wide field of view image and the narrow field of view image; and enhancing a displayed view of the scene on a display of the user-carried or user-worn apparatus based on the geolocating of the one or more objects, wherein the enhanced, displayed view comprises overlay content for the recognized object of interest. 39. The method of claim 38, further comprising broadcasting the tracked location of the one or more objects and receiving tracked location information of objects outside a field of view of the one or more optical sensors, the received, tracked location information being received from other apparatuses. 40. The method of claim 38, wherein the user-carried or user-worn apparatus is configured to exchange tracked location information of the one or more objects with at least one other user-carried or user-worn apparatus and enhance the displayed view of the scene on the display based on the geolocating of the one or more objects by the apparatus and also based on tracked location information received from the at least one other user-carried or user-worn apparatus. 41. The method of claim 38, wherein when there are multiple instances of the apparatus in remote locations, geolocation is performed with respect to all of the sensors included in the multiple instances of the apparatus. 42. 
The method of claim 38, wherein the enhancing of the displayed view further comprises augmenting the displayed view to indicate location of the one or more objects outside the field of view of the one or more optical sensors. 43. The method of claim 42, wherein the augmentation of the displayed view includes insertion of one or more markers of objects of interest. 44. The method of claim 42, wherein the augmentation of the displayed view includes insertion of one or more indicators associated with one or more objects of interest. 45. The method of claim 44, wherein the one or more indicators provide direction to the one or more objects of interest. 46. The method of claim 44, wherein the one or more indicators provide direction to the one or more objects of interest outside the current field of view of the one or more optical sensors. 47. The method of claim 38, wherein the correlating is performed by tracking the location of the recognized object of interest during transition from the wide field of view image to the narrow field of view image. 48. The method of claim 47, wherein the overlay content for the recognized object of interest is scaled in accordance with the tracking of the location of the recognized object of interest from the wide field of view to the narrow field of view. 49. The method of claim 38, further comprising maintaining the enhanced, displayed view consistent in real-time with determined location of the recognized object of interest as a user of the apparatus relocates the apparatus. 50. The method of claim 38, further comprising inserting objects into the enhanced, displayed view based on geographic data that indicates that the inserted object is to be occluded by another object in the enhanced, displayed view. 51. The method of claim 38, wherein enhancing the displayed view comprises overlaying geographically located information from external sources onto the displayed view. 52. 
The method of claim 38, further comprising: capturing, using a first lens of the one or more optical sensors, the wide field of view image; and capturing, using a second lens of the one or more optical sensors, the narrow field of view image.
Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects. 1.-22. (canceled) 23. A user-carried or user-worn apparatus for tracking objects, the apparatus comprising: one or more optical sensors for capturing images of a scene, wherein the images include a wide field of view image of the scene and a narrow field of view image of the scene; a display for displaying a displayed view of the scene; and a processor coupled to a storage medium, the storage medium storing processor-executable instructions, which when executed by the processor, performs a method comprising: geolocating the one or more optical sensors and a location of one or more objects depicted in the captured images based on a location of the one or more optical sensors; recognizing an object of interest in the wide field of view image; recognizing the object of interest in the narrow field of view image; correlating a location of the recognized object of interest in the wide field of view image and the narrow field of view image; and enhancing the displayed view of the scene on the display based on the geolocating of the one or more objects, wherein the enhanced, displayed view comprises overlay content for the recognized object of interest. 24. 
The apparatus of claim 23, wherein the method performed by the processor based on the processor-executable instructions further comprises: broadcasting the tracked location of the one or more objects and receiving tracked location information of objects outside a field of view of the one or more optical sensors, the received, tracked location information being received from other apparatuses. 25. The apparatus of claim 23, wherein the user-carried or user-worn apparatus is configured to exchange tracked location information of the one or more objects with at least one other user-carried or user-worn apparatus and enhance the displayed view of the scene on the display based on the geolocating of the one or more objects by the apparatus and also based on tracked location information received from the at least one other user-carried or user-worn apparatus. 26. The apparatus of claim 23, wherein the apparatus is in communication with one or more similar apparatuses such that users of the apparatuses utilize the apparatuses in a gaming activity in a physical area. 27. The apparatus of claim 23, wherein when there are multiple instances of the apparatus in remote locations, geolocation is performed with respect to all of the sensors included in the multiple instances of the apparatus. 28. The apparatus of claim 23, wherein the enhancing of the displayed view further comprises augmenting the displayed view to indicate location of the one or more objects outside the field of view of the one or more optical sensors. 29. The apparatus of claim 28, wherein the augmentation of the displayed view includes insertion of one or more markers of objects of interest. 30. The apparatus of claim 28, wherein the augmentation of the displayed view includes insertion of one or more indicators associated with one or more objects of interest. 31. The apparatus of claim 30, wherein the one or more indicators provide direction to the one or more objects of interest. 32. 
The apparatus of claim 31, wherein the one or more indicators provide direction to the one or more objects of interest outside the current field of view of the one or more optical sensors. 33. The apparatus of claim 23, wherein the correlating is performed by tracking the location of the recognized object of interest during transition from the wide field of view image to the narrow field of view image. 34. The apparatus of claim 33, wherein the overlay content for the recognized object of interest is scaled in accordance with the tracking of the location of the recognized object of interest from the wide field of view to the narrow field of view. 35. The apparatus of claim 23, wherein the method performed by the processor based on the processor-executable instructions further comprises: maintaining the enhanced, displayed view consistent in real-time with determined location of the recognized object of interest as a user of the apparatus relocates the apparatus. 36. The apparatus of claim 23, wherein the method performed by the processor based on the processor-executable instructions further comprises: inserting objects into the enhanced, displayed view based on geographic data that indicates that the inserted object is to be occluded by another object in the enhanced, displayed view. 37. The apparatus of claim 23, wherein enhancing the displayed view comprises overlaying geographically located information from external sources onto the displayed view. 38. 
A method for tracking objects by a user-carried or user-worn apparatus, the method comprising: capturing, using one or more optical sensors, images of a scene, wherein the images include a wide field of view image of the scene and a narrow field of view image of the scene; geolocating the one or more optical sensors, and a location of one or more objects depicted in the captured images based on a location of the one or more optical sensors; recognizing an object of interest in the wide field of view image; recognizing the object of interest in the narrow field of view image; correlating a location of the recognized object of interest in the wide field of view image and the narrow field of view image; and enhancing a displayed view of the scene on a display of the user-carried or user-worn apparatus based on the geolocating of the one or more objects, wherein the enhanced, displayed view comprises overlay content for the recognized object of interest. 39. The method of claim 38, further comprising broadcasting the tracked location of the one or more objects and receiving tracked location information of objects outside a field of view of the one or more optical sensors, the received, tracked location information being received from other apparatuses. 40. The method of claim 38, wherein the user-carried or user-worn apparatus is configured to exchange tracked location information of the one or more objects with at least one other user-carried or user-worn apparatus and enhance the displayed view of the scene on the display based on the geolocating of the one or more objects by the apparatus and also based on tracked location information received from the at least one other user-carried or user-worn apparatus. 41. The method of claim 38, wherein when there are multiple instances of the apparatus in remote locations, geolocation is performed with respect to all of the sensors included in the multiple instances of the apparatus. 42. 
The method of claim 38, wherein the enhancing of the displayed view further comprises augmenting the displayed view to indicate location of the one or more objects outside the field of view of the one or more optical sensors. 43. The method of claim 42, wherein the augmentation of the displayed view includes insertion of one or more markers of objects of interest. 44. The method of claim 42, wherein the augmentation of the displayed view includes insertion of one or more indicators associated with one or more objects of interest. 45. The method of claim 44, wherein the one or more indicators provide direction to the one or more objects of interest. 46. The method of claim 44, wherein the one or more indicators provide direction to the one or more objects of interest outside of the current field of view of the one or more optical sensors. 47. The method of claim 38, wherein the correlating is performed by tracking the location of the recognized object of interest during transition from the wide field of view image to the narrow field of view image. 48. The method of claim 47, wherein the overlay content for the recognized object of interest is scaled in accordance with the tracking of the location of the recognized object of interest from the wide field of view to the narrow field of view. 49. The method of claim 38, further comprising maintaining the enhanced, displayed view consistent in real-time with determined location of the recognized object of interest as a user of the apparatus relocates the apparatus. 50. The method of claim 38, further comprising inserting objects into the enhanced, displayed view based on geographic data that indicates that the inserted object is to be occluded by another object in the enhanced, displayed view. 51. The method of claim 38, wherein enhancing the displayed view comprises overlaying geographically located information from external sources onto the displayed view. 52. 
The method of claim 38, further comprising: capturing, using a first lens of the one or more optical sensors, the wide field of view image; and capturing, using a second lens of the one or more optical sensors, the narrow field of view image.
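The wide-to-narrow correlation recited above (apparatus claims 33-34 and the corresponding method claims) can be sketched with a simple pinhole-camera model. This is an illustrative reconstruction, not the patented implementation: it assumes the wide and narrow lenses share one optical axis, maps an object's normalized image coordinates from the wide field of view into the narrow one, and returns the magnification that would scale the overlay content during the transition.

```python
import math

def wide_to_narrow(u_wide, v_wide, fov_wide_deg, fov_narrow_deg):
    """Map an object's normalized center-offset coordinates (-0.5..0.5)
    from the wide field of view image into the narrow field of view image.
    Simplifying assumption: both optical sensors share one optical axis
    (pinhole model), so the mapping is a pure angular rescale."""
    scale = (math.tan(math.radians(fov_wide_deg) / 2.0)
             / math.tan(math.radians(fov_narrow_deg) / 2.0))
    u_narrow, v_narrow = u_wide * scale, v_wide * scale
    # The same factor scales the overlay content across the transition.
    return u_narrow, v_narrow, scale
```

An object 10% of the half-width off center in a 90-degree wide frame lands roughly 37% off center in a 30-degree narrow frame, and its overlay grows by the same factor; objects whose mapped coordinates exceed 0.5 fall outside the narrow field of view and would get an off-screen indicator instead.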
2,600
9,999
9,999
14,585,535
2,698
Techniques describe tagging visual data with wireless and sensor measurement information by a mobile device. In some implementations, additional metadata fields may be used for tagging visual data with wireless and/or sensor measurement information. Embodiments also describe expanding the current format dictated by the standards for image (e.g., Exif) and video formats (mediaObject metadata) to include wireless and sensor measurements in the metadata. Furthermore, sensor information may include barometer, magnetometer and motion sensor (e.g., accelerometer, gyroscope, etc.) information. The mobile device may transmit the tagged visual data to a crowdsourcing server.
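The tagging scheme summarized in the abstract can be sketched as follows. This is a hedged illustration only: the field names (`WifiMeasurements`, `SensorMeasurements`) and the JSON output are stand-ins for the Exif/mediaObject metadata extension the application actually proposes, and the scan/sensor inputs are assumed to come from the platform's Wi-Fi and sensor APIs.

```python
import json
import time

def tag_visual_data(image_path, wifi_scans, sensors):
    """Build a metadata record for an image, extended with wireless and
    sensor measurements. wifi_scans is a list of (mac, rssi_dbm, rtt_ns)
    tuples; sensors is a dict of readings (barometer, magnetometer, ...).
    The record could then be embedded in Exif metadata or uploaded to a
    crowdsourcing server."""
    record = {
        "ImagePath": image_path,
        "Timestamp": time.time(),
        "WifiMeasurements": [
            {"MAC": mac, "RSSI_dBm": rssi, "RTT_ns": rtt}
            for mac, rssi, rtt in wifi_scans
        ],
        "SensorMeasurements": sensors,
    }
    return json.dumps(record)
```

Because the tag carries the access point's MAC address, signal strength, and round-trip time alongside the image, the server can localize the capture even in a GNSS-denied environment, as the claims describe.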
1. A method for tagging visual data, the method comprising: acquiring visual data using a camera coupled to a device; receiving at least one wireless signal from an at least one signal emitting device; deriving information comprising source identifying information associated with the at least one signal emitting device from the at least one wireless signal; and tagging the visual data with the information associated with the at least one signal emitting device. 2. The method of claim 1, further comprising wirelessly transmitting the tagged visual data to a remote server. 3. The method of claim 1, wherein deriving information further comprises deriving signal strength of the at least one wireless signal from the at least one signal emitting device measured at the device. 4. The method of claim 1, wherein deriving information further comprises deriving a round-trip time using the at least one wireless signal from the at least one signal emitting device. 5. The method of claim 1, wherein the at least one signal emitting device is a wireless access point. 6. The method of claim 1, wherein the source identifying information is a media access control (MAC) address. 7. The method of claim 1, wherein the at least one wireless signal is one of a Wi-Fi signal, non-audible sound, ultra-sound signal and non-visible light ray. 8. The method of claim 1, wherein the visual data is acquired by the camera in a GNSS-denied environment. 9. The method of claim 1, wherein the at least one signal emitting device is stationary. 10. The method of claim 1, wherein a location of the signal emitting device is unknown at least at a time the at least one wireless signal is received. 11. The method of claim 1, further comprising tagging the visual data with barometer information. 12. The method of claim 1, further comprising tagging the visual data with magnetometer information. 13. The method of claim 1, further comprising tagging the visual data with motion sensor information. 14. 
The method of claim 1, wherein the visual data is an image. 15. The method of claim 1, wherein the visual data is included in an image file formatted according to Exchangeable Image File format (Exif) and tagging the visual data comprises including the information associated with the at least one signal emitting device as part of metadata for the image file. 16. The method of claim 1, wherein the visual data is a video. 17. A device comprising: a camera configured to acquire visual data; a memory configured to store the visual data; a transceiver configured to receive at least one wireless signal from an at least one signal emitting device; and a processor configured to: derive information comprising identifying information associated with the at least one signal emitting device from the at least one wireless signal; and tag the visual data with the information associated with the at least one signal emitting device. 18. The device of claim 17, wherein the transceiver is further configured to wirelessly transmit the tagged visual data to a remote server. 19. The device of claim 17, wherein the information associated with the at least one signal emitting device further comprises one or more of signal strength of the at least one wireless signal from the at least one signal emitting device and round-trip time using the at least one wireless signal from the at least one signal emitting device. 20. The device of claim 17, wherein the at least one signal emitting device is a wireless access point. 21. The device of claim 17, wherein the at least one wireless signal is one of a Wi-Fi signal, non-audible sound, ultra-sound signal and non-visible light ray. 22. The device of claim 17, wherein the visual data is acquired by the camera in a GNSS-denied environment. 23. The device of claim 17, wherein a location of the signal emitting device is unknown at least at a time the at least one wireless signal is received. 24. 
The device of claim 17, wherein the processor is further configured to tag the visual data with one or more of barometer information, magnetometer information and motion sensor information. 25. The device of claim 17, wherein the visual data is included in an image file formatted according to Exchangeable Image File format (Exif) and tagging the visual data comprises including the information associated with the at least one signal emitting device as part of metadata for the image file. 26. An apparatus comprising: means for acquiring visual data using a camera coupled to the apparatus; means for receiving at least one wireless signal from an at least one signal emitting device; means for deriving information comprising source identifying information associated with the at least one signal emitting device from the at least one wireless signal; and means for tagging the visual data with the information associated with the at least one signal emitting device. 27. The apparatus of claim 26, further comprising means for wirelessly transmitting the tagged visual data to a remote server. 28. The apparatus of claim 26, wherein the visual data is acquired by the camera in a GNSS-denied environment. 29. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises instructions executable by a processor, the instructions comprising instructions to: acquire visual data using a camera coupled to a device; receive at least one wireless signal from an at least one signal emitting device; derive information comprising source identifying information associated with the at least one signal emitting device from the at least one wireless signal; and tag the visual data with the information associated with the at least one signal emitting device. 30. The non-transitory computer-readable storage medium of claim 29, further comprising instructions for wirelessly transmitting the tagged visual data to a remote server.
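Several claims above tag the visual data with a round-trip time derived from the wireless signal. A consumer of that tag (e.g. the crowdsourcing server) could convert it to a range estimate with the standard two-way time-of-flight formula, distance = c x (RTT - responder processing delay) / 2. A minimal sketch; the explicit processing-delay parameter is an assumption about how the responder's turnaround time is reported:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def rtt_to_distance_m(rtt_ns, processing_delay_ns=0.0):
    """Estimate device-to-access-point distance in meters from a measured
    round-trip time in nanoseconds, subtracting the responder's internal
    processing delay before halving the two-way flight time."""
    return SPEED_OF_LIGHT_M_S * (rtt_ns - processing_delay_ns) * 1e-9 / 2.0
```

A 100 ns round trip with no processing delay corresponds to roughly 15 m; with real access points the processing delay dominates the raw RTT, which is why it must be reported and subtracted.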
Techniques describe tagging visual data with wireless and sensor measurement information by a mobile device. In some implementations, additional metadata fields may be used for tagging visual data with wireless and/or sensor measurement information. Embodiments also describe expanding the current format dictated by the standards for image (e.g., Exif) and video formats (mediaObject metadata) to include wireless and sensor measurements in the metadata. Furthermore, sensor information may include barometer, magnetometer and motion sensor (e.g., accelerometer, gyroscope, etc.) information. The mobile device may transmit the tagged visual data to a crowdsourcing server.1. A method for tagging visual data, the method comprising: acquiring visual data using a camera coupled to a device; receiving at least one wireless signal from an at least one signal emitting device; deriving information comprising source identifying information associated with the at least one signal emitting device from the at least one wireless signal; and tagging the visual data with the information associated with the at least one signal emitting device. 2. The method of claim 1, further comprising wirelessly transmitting the tagged visual data to a remote server. 3. The method of claim 1, wherein deriving information further comprises deriving signal strength of the at least one wireless signal from the at least one signal emitting device measured at the device. 4. The method of claim 1, wherein deriving information further comprises deriving a round-trip time using the at least one wireless signal from the at least one signal emitting device. 5. The method of claim 1, wherein the at least one signal emitting device is a wireless access point. 6. The method of claim 1, wherein the source identifying information is a media access control (MAC) address. 7. 
The method of claim 1, wherein the at least one wireless signal is one of a Wi-Fi signal, non-audible sound, ultra-sound signal and non-visible light ray. 8. The method of claim 1, wherein the visual data is acquired by the camera in a GNSS-denied environment. 9. The method of claim 1, wherein the at least one signal emitting device is stationary. 10. The method of claim 1, wherein a location of the signal emitting device is unknown at least at a time the at least one wireless signal is received. 11. The method of claim 1, further comprising tagging the visual data with barometer information. 12. The method of claim 1, further comprising tagging the visual data with magnetometer information. 13. The method of claim 1, further comprising tagging the visual data with motion sensor information. 14. The method of claim 1, wherein the visual data is an image. 15. The method of claim 1, wherein the visual data is included in an image file formatted according to Exchangeable Image File format (Exif) and tagging the visual data comprises including the information associated with the at least one signal emitting device as part of metadata for the image file. 16. The method of claim 1, wherein the visual data is a video. 17. A device comprising: a camera configured to acquire visual data; a memory configured to store the visual data; a transceiver configured to receive at least one wireless signal from an at least one signal emitting device; and a processor configured to: derive information comprising identifying information associated with the at least one signal emitting device from the at least one wireless signal; and tag the visual data with the information associated with the at least one signal emitting device. 18. The device of claim 17, wherein the transceiver is further configured to wirelessly transmit the tagged visual data to a remote server. 19. 
The device of claim 17, wherein the information associated with the at least one signal emitting device further comprises one or more of signal strength of the at least one wireless signal from the at least one signal emitting device and round-trip time using the at least one wireless signal from the at least one signal emitting device. 20. The device of claim 17, wherein the at least one signal emitting device is a wireless access point. 21. The device of claim 17, wherein the at least one wireless signal is one of a Wi-Fi signal, non-audible sound, ultra-sound signal and non-visible light ray. 22. The device of claim 17, wherein the visual data is acquired by the camera in a GNSS-denied environment. 23. The device of claim 17, wherein a location of the signal emitting device is unknown at least at a time the at least one wireless signal is received. 24. The device of claim 17, wherein the processor is further configured to tag the visual data with one or more of barometer information, magnetometer information and motion sensor information. 25. The device of claim 17, wherein the visual data is included in an image file formatted according to Exchangeable Image File format (Exif) and tagging the visual data comprises including the information associated with the at least one signal emitting device as part of metadata for the image file. 26. An apparatus comprising: means for acquiring visual data using a camera coupled to the apparatus; means for receiving at least one wireless signal from an at least one signal emitting device; means for deriving information comprising source identifying information associated with the at least one signal emitting device from the at least one wireless signal; and means for tagging the visual data with the information associated with the at least one signal emitting device. 27. The apparatus of claim 26, further comprising means for wirelessly transmitting the tagged visual data to a remote server. 28. 
The apparatus of claim 26, wherein the visual data is acquired by the camera in a GNSS-denied environment. 29. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises instructions executable by a processor, the instructions comprising instructions to: acquire visual data using a camera coupled to a device; receive at least one wireless signal from an at least one signal emitting device; derive information comprising source identifying information associated with the at least one signal emitting device from the at least one wireless signal; and tag the visual data with the information associated with the at least one signal emitting device. 30. The non-transitory computer-readable storage medium of claim 29, further comprising instructions for wirelessly transmitting the tagged visual data to a remote server.
2,600