Unnamed: 0 (int64, 0–350k) | level_0 (int64, 0–351k) | ApplicationNumber (int64, 9.75M–96.1M) | ArtUnit (int64, 1.6k–3.99k) | Abstract (string, lengths 1–8.37k) | Claims (string, lengths 3–292k) | abstract-claims (string, lengths 68–293k) | TechCenter (int64, 1.6k–3.9k) |
|---|---|---|---|---|---|---|---|
10,700 | 10,700 | 15,635,428 | 2,636 | In some examples, an optical node includes transition logic to: receive an indication of a data channel to be added across an optical medium, the data channel to occupy a portion of an optical spectrum; in response to a receipt of the indication, divide the data channel into a plurality of sub-channels; and sequentially add each of the plurality of sub-channels across the optical medium in a particular order. | 1. An optical node comprising:
transition logic configured to:
receive an indication of a data channel to be added across an optical medium, the data channel to occupy a portion of an optical spectrum;
in response to a receipt of the indication, divide the data channel into a plurality of sub-channels; and
sequentially add each of the plurality of sub-channels across the optical medium in a particular order. 2. The optical node of claim 1, wherein the transition logic is configured to sequentially remove each of the plurality of sub-channels across the optical medium in a second order. 3. The optical node of claim 1, wherein the transition logic is configured to determine a count of the plurality of sub-channels based on an available margin of power perturbation. 4. The optical node of claim 1, wherein the transition logic is configured to determine a count of the plurality of sub-channels based on a bandwidth size of the data channel. 5. The optical node of claim 1, wherein the transition logic is configured to determine the particular order based on an increasing perturbation impact of the plurality of sub-channels. 6. The optical node of claim 1, wherein the transition logic is configured to determine the particular order based on an increasing wavelength of the plurality of sub-channels. 7. The optical node of claim 1, wherein the transition logic is configured to:
determine a ramp rate for at least one sub-channel of the plurality of sub-channels; and ramp transmission of the at least one sub-channel using the determined ramp rate. 8. The optical node of claim 7, wherein the transition logic is configured to determine the ramp rate based on a maximum allowed perturbation level. 9. The optical node of claim 7, further comprising a variable power combiner. 10. The optical node of claim 1, wherein the transition logic is configured to sequentially replace a plurality of dummy channels by the plurality of sub-channels according to the particular order. 11. The optical node of claim 1, wherein the optical node is a reconfigurable optical add-drop multiplexer (ROADM). 12. A method comprising:
receiving, by an optical node, an indication of a data channel to be added across an optical medium; responsive to a receipt of the indication, dividing, by the optical node, the data channel into a plurality of sub-channels; determining, by the optical node, a sequential order in which to add the plurality of sub-channels; and adding, by the optical node, the plurality of sub-channels according to the sequential order. 13. The method of claim 12, further comprising removing the plurality of sub-channels according to a second sequential order. 14. The method of claim 12, wherein determining the sequential order is based on an order of increasing perturbation impact. 15. The method of claim 12, further comprising determining a count of the plurality of sub-channels based on a bandwidth size of the data channel. 16. The method of claim 12, further comprising:
determining a ramp rate for at least one sub-channel of the plurality of sub-channels; and ramping transmission of the at least one sub-channel based on the determined ramp rate. 17. A non-transitory machine-readable storage medium storing instructions that upon execution cause a processor to:
receive an indication of a new data channel to be added by an optical node across an optical medium; in response to a receipt of the indication:
divide the data channel into a plurality of sub-channels;
determine a sequential order in which to add the plurality of sub-channels; and
add the plurality of sub-channels across the optical medium in the determined sequential order. 18. The non-transitory machine-readable storage medium of claim 17, wherein the instructions cause the processor to determine a count of the plurality of sub-channels based on an available margin of power perturbation. 19. The non-transitory machine-readable storage medium of claim 17, wherein the instructions cause the processor to determine a count of the plurality of sub-channels based on a bandwidth size of the data channel. 20. The non-transitory machine-readable storage medium of claim 17, wherein the instructions cause the processor to determine the sequential order based on an increasing perturbation impact of the plurality of sub-channels. | In some examples, an optical node includes transition logic to: receive an indication of a data channel to be added across an optical medium, the data channel to occupy a portion of an optical spectrum; in response to a receipt of the indication, divide the data channel into a plurality of sub-channels; and sequentially add each of the plurality of sub-channels across the optical medium in a particular order. 1. An optical node comprising:
transition logic configured to:
receive an indication of a data channel to be added across an optical medium, the data channel to occupy a portion of an optical spectrum;
in response to a receipt of the indication, divide the data channel into a plurality of sub-channels; and
sequentially add each of the plurality of sub-channels across the optical medium in a particular order. 2. The optical node of claim 1, wherein the transition logic is configured to sequentially remove each of the plurality of sub-channels across the optical medium in a second order. 3. The optical node of claim 1, wherein the transition logic is configured to determine a count of the plurality of sub-channels based on an available margin of power perturbation. 4. The optical node of claim 1, wherein the transition logic is configured to determine a count of the plurality of sub-channels based on a bandwidth size of the data channel. 5. The optical node of claim 1, wherein the transition logic is configured to determine the particular order based on an increasing perturbation impact of the plurality of sub-channels. 6. The optical node of claim 1, wherein the transition logic is configured to determine the particular order based on an increasing wavelength of the plurality of sub-channels. 7. The optical node of claim 1, wherein the transition logic is configured to:
determine a ramp rate for at least one sub-channel of the plurality of sub-channels; and ramp transmission of the at least one sub-channel using the determined ramp rate. 8. The optical node of claim 7, wherein the transition logic is configured to determine the ramp rate based on a maximum allowed perturbation level. 9. The optical node of claim 7, further comprising a variable power combiner. 10. The optical node of claim 1, wherein the transition logic is configured to sequentially replace a plurality of dummy channels by the plurality of sub-channels according to the particular order. 11. The optical node of claim 1, wherein the optical node is a reconfigurable optical add-drop multiplexer (ROADM). 12. A method comprising:
receiving, by an optical node, an indication of a data channel to be added across an optical medium; responsive to a receipt of the indication, dividing, by the optical node, the data channel into a plurality of sub-channels; determining, by the optical node, a sequential order in which to add the plurality of sub-channels; and adding, by the optical node, the plurality of sub-channels according to the sequential order. 13. The method of claim 12, further comprising removing the plurality of sub-channels according to a second sequential order. 14. The method of claim 12, wherein determining the sequential order is based on an order of increasing perturbation impact. 15. The method of claim 12, further comprising determining a count of the plurality of sub-channels based on a bandwidth size of the data channel. 16. The method of claim 12, further comprising:
determining a ramp rate for at least one sub-channel of the plurality of sub-channels; and ramping transmission of the at least one sub-channel based on the determined ramp rate. 17. A non-transitory machine-readable storage medium storing instructions that upon execution cause a processor to:
receive an indication of a new data channel to be added by an optical node across an optical medium; in response to a receipt of the indication:
divide the data channel into a plurality of sub-channels;
determine a sequential order in which to add the plurality of sub-channels; and
add the plurality of sub-channels across the optical medium in the determined sequential order. 18. The non-transitory machine-readable storage medium of claim 17, wherein the instructions cause the processor to determine a count of the plurality of sub-channels based on an available margin of power perturbation. 19. The non-transitory machine-readable storage medium of claim 17, wherein the instructions cause the processor to determine a count of the plurality of sub-channels based on a bandwidth size of the data channel. 20. The non-transitory machine-readable storage medium of claim 17, wherein the instructions cause the processor to determine the sequential order based on an increasing perturbation impact of the plurality of sub-channels. | 2,600
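The optical-node claims in the record above describe dividing a data channel into sub-channels and adding them one at a time: the sub-channel count depends on the channel's bandwidth (claim 4) and the add order follows increasing perturbation impact (claim 5). A minimal Python sketch of that sequencing logic, under assumed parameters (the 6.25 GHz sub-channel grid and the numeric impact scores are illustrative, not taken from the patent):

```python
import math

def plan_subchannels(bandwidth_ghz, max_subchannel_ghz=6.25):
    # Claim 4: derive the sub-channel count from the bandwidth size of the
    # data channel; the 6.25 GHz-per-sub-channel grid is an assumption.
    count = math.ceil(bandwidth_ghz / max_subchannel_ghz)
    return [bandwidth_ghz / count] * count  # equal-width slices

def add_order(perturbation_impact):
    # Claim 5: add sub-channels in order of increasing perturbation impact,
    # i.e. the least-perturbing sub-channel is added first.
    return sorted(range(len(perturbation_impact)),
                  key=perturbation_impact.__getitem__)

widths = plan_subchannels(37.5)                      # 37.5 GHz -> six 6.25 GHz slices
order = add_order([0.8, 0.2, 0.5, 0.1, 0.9, 0.4])    # illustrative impact scores
```

In a real node each add in `order` would also ramp the sub-channel's power at a rate bounded by the maximum allowed perturbation level (claims 7–8), which this sketch omits.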
10,701 | 10,701 | 15,377,083 | 2,683 | A device and system for managing medication delivery devices includes a case with a housing having an opening for receiving a medication delivery device. A cover is configured and arranged to cover the opening. A medication delivery device such as an auto injector is disposed in the housing. A sensor to detect the position of the cover may be provided. An electronic control system is operatively associated with the housing and disposed in wireless communication with a gateway device. The electronic control system is configured and arranged to provide a signal to the gateway device when the case is within proximity of the gateway device. The electronic control system is also configured and arranged to transmit corresponding signals to the gateway device based on the position of the cover, the presence of the medication delivery device in the case, and/or the activation of the delivery device. | 1. A case for use with a medication delivery device, the case comprising:
a housing having an opening bordering a cavity defined therein, the cavity sized to receive the medication delivery device; a cover disposed adjacent to the opening in the housing and capable of moving between an open position and a closed position where it covers the opening in the housing; a sensor for detecting the presence of the medication delivery device in the case; and, an electronic control system disposed in wireless communication with a gateway device for connecting to a communications network, wherein the electronic control system is configured and arranged to provide a signal indicating that the case is in proximity to the gateway device. 2. The case of claim 1, further comprising an environmental sensor selected from the group consisting of temperature, light, vibration, pressure, motion, pollution and humidity. 3. The case of claim 1, further comprising an emergency standby button configured and arranged such that a user holds the button while assessing an emergency situation and in the event that the user releases the button without following a predetermined routine, an alarm signal will automatically be sent to the gateway device to trigger an auditory alarm perceptible to bystanders. 4. The case of claim 1, wherein the gateway device is configured and arranged to send a wireless signal to the housing to provide a paging feature. 5. The case of claim 1, wherein the gateway device automatically receives a signal from the electronic control system when the medication delivery device is removed from the case and the gateway device automatically sends a corresponding alert via a communication network to one or more support groups. 6. The case of claim 5, wherein the corresponding alert includes information regarding the location of the user of the case. 7. 
The case of claim 1, wherein the gateway device automatically receives a communication from the electronic control system when the case is opened and the gateway device automatically sends a corresponding alert via a communication network to one or more support groups. 8. (canceled) 9. (canceled) 10. (canceled) 11. The case of claim 1, wherein the medication delivery device further comprises an electronic tag identifying the medication delivery device contained in the case. 12. (canceled) 13. (canceled) 14. (canceled) 15. A medication delivery device, comprising:
an electronic control system operatively associated with the medication delivery device and disposed in wireless communication with a gateway device for connecting to a communications network, wherein the electronic control system is configured and arranged to provide a signal indicating that the medication delivery device is in proximity to the gateway device; wherein when the medication delivery device is activated, the electronic control system automatically transmits a signal to the gateway device. 16. The medication delivery device of claim 15, further comprising an environmental sensor selected from the group consisting of temperature, light, vibration, pressure, motion, pollution and humidity. 17. (canceled) 18. (canceled) 19. The medication delivery device of claim 15, wherein the gateway device sends a signal that triggers an audible alarm on the medication delivery device to provide an alert to persons in the vicinity of the medication delivery device. 20. (canceled) 21. The medication delivery device of claim 15, further comprising an emergency standby button configured and arranged such that a user holds the button while assessing an emergency situation and in the event that the user releases the button without following a predetermined routine, an alarm signal will automatically be sent to the gateway device to trigger an auditory alarm on the medication delivery device and to send an alert to one or more support groups. 22. The medication delivery device of claim 15, wherein the corresponding alert includes information regarding the location of the user of the case. 23. A medication delivery device management system, comprising:
a medication delivery device; an electronic control system operatively associated with the medication delivery device, the electronic control system disposed in wireless communication with a gateway device for connecting with a communications network, wherein the electronic control system is configured and arranged to provide a signal to the gateway device indicating that the medication delivery device is in proximity to the gateway device; and, wherein the electronic control system automatically sends a signal to the gateway device when the medication delivery device is activated. 24. The medication delivery device management system of claim 23, further comprising an environmental sensor selected from the group consisting of temperature, light, vibration, pressure, motion, pollution and humidity. 25. The medication delivery device management system of claim 23, wherein upon receipt of an alert from the electronic control system, the gateway device automatically transmits a corresponding alert to one or more support groups. 26. The medication delivery device management system of claim 23, wherein the gateway device is configured and arranged to send a wireless signal to the medication delivery device to provide a paging feature. 27. The medication delivery device management system of claim 23, wherein upon receipt of a signal indicating that the medication delivery device has been removed from a case, the gateway device sends a polling signal to the user which requires an active response from the user within a predetermined time. 28. The medication delivery device management system of claim 27, wherein the gateway device triggers an audible alarm on one of the case and the medication delivery device when the user does not respond to the polling signal. 29. 
The medication delivery device management system of claim 23, wherein the gateway device sends a signal that triggers an audible alarm on the medication delivery device to provide an alert to persons in the vicinity of the medication delivery device. 30. (canceled) 31. (canceled) 32. The medication delivery device management system of claim 23, further comprising an alert sent via the gateway device containing information regarding the location of the user of the case. 33. The medication delivery device management system of claim 24, wherein the system provides a notification when the environmental sensor registers a reading that is outside a predetermined range. 34. (canceled) 35. (canceled) 36. (canceled) | A device and system for managing medication delivery devices includes a case with a housing having an opening for receiving a medication delivery device. A cover is configured and arranged to cover the opening. A medication delivery device such as an auto injector is disposed in the housing. A sensor to detect the position of the cover may be provided. An electronic control system is operatively associated with the housing and disposed in wireless communication with a gateway device. The electronic control system is configured and arranged to provide a signal to the gateway device when the case is within proximity of the gateway device. The electronic control system is also configured and arranged to transmit corresponding signals to the gateway device based on the position of the cover, the presence of the medication delivery device in the case, and/or the activation of the delivery device. 1. A case for use with a medication delivery device, the case comprising:
a housing having an opening bordering a cavity defined therein, the cavity sized to receive the medication delivery device; a cover disposed adjacent to the opening in the housing and capable of moving between an open position and a closed position where it covers the opening in the housing; a sensor for detecting the presence of the medication delivery device in the case; and, an electronic control system disposed in wireless communication with a gateway device for connecting to a communications network, wherein the electronic control system is configured and arranged to provide a signal indicating that the case is in proximity to the gateway device. 2. The case of claim 1, further comprising an environmental sensor selected from the group consisting of temperature, light, vibration, pressure, motion, pollution and humidity. 3. The case of claim 1, further comprising an emergency standby button configured and arranged such that a user holds the button while assessing an emergency situation and in the event that the user releases the button without following a predetermined routine, an alarm signal will automatically be sent to the gateway device to trigger an auditory alarm perceptible to bystanders. 4. The case of claim 1, wherein the gateway device is configured and arranged to send a wireless signal to the housing to provide a paging feature. 5. The case of claim 1, wherein the gateway device automatically receives a signal from the electronic control system when the medication delivery device is removed from the case and the gateway device automatically sends a corresponding alert via a communication network to one or more support groups. 6. The case of claim 5, wherein the corresponding alert includes information regarding the location of the user of the case. 7. 
The case of claim 1, wherein the gateway device automatically receives a communication from the electronic control system when the case is opened and the gateway device automatically sends a corresponding alert via a communication network to one or more support groups. 8. (canceled) 9. (canceled) 10. (canceled) 11. The case of claim 1, wherein the medication delivery device further comprises an electronic tag identifying the medication delivery device contained in the case. 12. (canceled) 13. (canceled) 14. (canceled) 15. A medication delivery device, comprising:
an electronic control system operatively associated with the medication delivery device and disposed in wireless communication with a gateway device for connecting to a communications network, wherein the electronic control system is configured and arranged to provide a signal indicating that the medication delivery device is in proximity to the gateway device; wherein when the medication delivery device is activated, the electronic control system automatically transmits a signal to the gateway device. 16. The medication delivery device of claim 15, further comprising an environmental sensor selected from the group consisting of temperature, light, vibration, pressure, motion, pollution and humidity. 17. (canceled) 18. (canceled) 19. The medication delivery device of claim 15, wherein the gateway device sends a signal that triggers an audible alarm on the medication delivery device to provide an alert to persons in the vicinity of the medication delivery device. 20. (canceled) 21. The medication delivery device of claim 15, further comprising an emergency standby button configured and arranged such that a user holds the button while assessing an emergency situation and in the event that the user releases the button without following a predetermined routine, an alarm signal will automatically be sent to the gateway device to trigger an auditory alarm on the medication delivery device and to send an alert to one or more support groups. 22. The medication delivery device of claim 15, wherein the corresponding alert includes information regarding the location of the user of the case. 23. A medication delivery device management system, comprising:
a medication delivery device; an electronic control system operatively associated with the medication delivery device, the electronic control system disposed in wireless communication with a gateway device for connecting with a communications network, wherein the electronic control system is configured and arranged to provide a signal to the gateway device indicating that the medication delivery device is in proximity to the gateway device; and, wherein the electronic control system automatically sends a signal to the gateway device when the medication delivery device is activated. 24. The medication delivery device management system of claim 23, further comprising an environmental sensor selected from the group consisting of temperature, light, vibration, pressure, motion, pollution and humidity. 25. The medication delivery device management system of claim 23, wherein upon receipt of an alert from the electronic control system, the gateway device automatically transmits a corresponding alert to one or more support groups. 26. The medication delivery device management system of claim 23, wherein the gateway device is configured and arranged to send a wireless signal to the medication delivery device to provide a paging feature. 27. The medication delivery device management system of claim 23, wherein upon receipt of a signal indicating that the medication delivery device has been removed from a case, the gateway device sends a polling signal to the user which requires an active response from the user within a predetermined time. 28. The medication delivery device management system of claim 27, wherein the gateway device triggers an audible alarm on one of the case and the medication delivery device when the user does not respond to the polling signal. 29. 
The medication delivery device management system of claim 23, wherein the gateway device sends a signal that triggers an audible alarm on the medication delivery device to provide an alert to persons in the vicinity of the medication delivery device. 30. (canceled) 31. (canceled) 32. The medication delivery device management system of claim 23, further comprising an alert sent via the gateway device containing information regarding the location of the user of the case. 33. The medication delivery device management system of claim 24, wherein the system provides a notification when the environmental sensor registers a reading that is outside a predetermined range. 34. (canceled) 35. (canceled) 36. (canceled) | 2,600 |
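The case and management-system claims in the record above amount to an event-reporting protocol: each state change (cover opened, device removed from the case, delivery device activated) produces a distinct signal to the gateway device, which forwards a corresponding alert over the network. A hypothetical Python sketch of that reporting pattern (class, method, and event names are invented for illustration; the wireless link to the gateway is stubbed out as a list):

```python
from dataclasses import dataclass, field

@dataclass
class CaseController:
    """Sketch of the case's electronic control system (hypothetical names)."""
    gateway_events: list = field(default_factory=list)

    def _notify_gateway(self, event):
        # Stand-in for the wireless link between the case and the gateway device;
        # the gateway would forward alerts to support groups over the network.
        self.gateway_events.append(event)

    def cover_opened(self):
        self._notify_gateway("case_opened")       # claim 7: alert when the case is opened

    def device_removed(self):
        self._notify_gateway("device_removed")    # claim 5: alert on removal from the case

    def device_activated(self):
        self._notify_gateway("device_activated")  # claims 15/23: alert on activation

ctrl = CaseController()
ctrl.cover_opened()
ctrl.device_removed()
ctrl.device_activated()
```

Each event arrives at the gateway as a distinct message, which is what lets the gateway apply per-event policies such as the polling signal of claim 27 or the bystander alarm of claim 28.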
10,702 | 10,702 | 15,726,864 | 2,628 | A driving method for an electro-optic display having a plurality of display pixels includes applying a first set of waveforms to a first display pixel, the first set of waveforms having at least one active portion configured to affect the optical state of the first display pixel and at least one non-active portion configured not to substantially affect the optical state of the first display pixel. The method also includes applying a second set of waveforms to a second display pixel, the second set of waveforms having at least one active portion configured to affect the optical state of the second display pixel and at least one non-active portion configured not to substantially affect the optical state of the second display pixel, where the active portions of the first and second sets of waveforms do not overlap in time. | 1. A method for driving an electro-optic display having a plurality of display pixels, the method comprising:
applying a first set of waveforms to a first display pixel, the first set of waveforms having at least one active portion configured to affect the optical state of the first display pixel and at least one non-active portion configured not to substantially affect the optical state of the first display pixel; and applying a second set of waveforms to a second display pixel, the second set of waveforms having at least one active portion configured to affect the optical state of the second display pixel and at least one non-active portion configured not to substantially affect the optical state of the second display pixel;
wherein the active portions of the first and second sets of waveforms do not overlap in time. 2. The method of claim 1, wherein the first and second display pixels are positioned adjacent to one another. 3. The method of claim 1, wherein the active portions of the first and second sets of waveforms have opposite voltage values. 4. The method of claim 1, wherein the at least one non-active portion of the first set of waveforms is a zero volt segment. 5. The method of claim 1, wherein the at least one non-active portion of the second set of waveforms is a zero volt segment. 6. The method of claim 1, further comprising applying a third set of waveforms to the first and second display pixels, the third set of waveforms having at least one active portion configured to affect the optical state of the first and second display pixels and at least one non-active portion configured not to substantially affect the optical state of the first and second display pixels. 7. The method of claim 6, wherein the active portions of the first, second and third sets of waveforms do not overlap in time. | A driving method for an electro-optic display having a plurality of display pixels includes applying a first set of waveforms to a first display pixel, the first set of waveforms having at least one active portion configured to affect the optical state of the first display pixel and at least one non-active portion configured not to substantially affect the optical state of the first display pixel. The method also includes applying a second set of waveforms to a second display pixel, the second set of waveforms having at least one active portion configured to affect the optical state of the second display pixel and at least one non-active portion configured not to substantially affect the optical state of the second display pixel, where the active portions of the first and second sets of waveforms do not overlap in time. 1.
A method for driving an electro-optic display having a plurality of display pixels, the method comprising:
applying a first set of waveforms to a first display pixel, the first set of waveforms having at least one active portion configured to affect the optical state of the first display pixel and at least one non-active portion configured not to substantially affect the optical state of the first display pixel; and applying a second set of waveforms to a second display pixel, the second set of waveforms having at least one active portion configured to affect the optical state of the second display pixel and at least one non-active portion configured not to substantially affect the optical state of the second display pixel;
wherein the active portions of the first and second sets of waveforms do not overlap in time. 2. The method of claim 1, wherein the first and second display pixels are positioned adjacent to one another. 3. The method of claim 1, wherein the active portions of the first and second sets of waveforms have opposite voltage values. 4. The method of claim 1, wherein the at least one non-active portion of the first set of waveforms is a zero volt segment. 5. The method of claim 1, wherein the at least one non-active portion of the second set of waveforms is a zero volt segment. 6. The method of claim 1, further comprising applying a third set of waveforms to the first and second display pixels, the third set of waveforms having at least one active portion configured to affect the optical state of the first and second display pixels and at least one non-active portion configured not to substantially affect the optical state of the first and second display pixels. 7. The method of claim 6, wherein the active portions of the first, second and third sets of waveforms do not overlap in time. | 2,600
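Claim 1's key constraint in the record above is that the active (non-zero) portions of the two pixels' waveform sets never overlap in time. A small Python sketch of how that constraint could be checked on sampled waveforms (frame-indexed voltage lists; all values are illustrative, not from the patent):

```python
def active_intervals(waveform):
    """Return (start, end) frame intervals where the waveform is non-zero (active)."""
    intervals, start = [], None
    for t, v in enumerate(waveform):
        if v != 0 and start is None:
            start = t
        elif v == 0 and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, len(waveform)))
    return intervals

def overlap(a, b):
    """True if any interval in a overlaps any interval in b (half-open intervals)."""
    return any(s1 < e2 and s2 < e1 for s1, e1 in a for s2, e2 in b)

# Illustrative frame-sampled waveforms (voltage values are made up): the first
# pixel is driven during frames 0-3, the second during frames 4-7 with opposite
# polarity (claim 3), so their active portions do not overlap in time (claim 1).
w1 = [15, 15, 15, 15, 0, 0, 0, 0]
w2 = [0, 0, 0, 0, -15, -15, -15, -15]
```

Here `overlap(active_intervals(w1), active_intervals(w2))` is false, which is exactly the non-overlap condition the claim recites; the zero-volt frames are the non-active portions of claims 4 and 5.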
10,703 | 10,703 | 13,784,177 | 2,642 | The exemplary embodiments described herein relate to systems and methods for identifying and authenticating a mobile platform. One embodiment relates to a method comprising receiving, by a mobile platform, a digital certificate from an integrated circuit card (“ICC”) via close-proximity radio communication, verifying the digital certificate with a digital signature stored on the mobile platform, and booting the mobile platform upon verification of the digital certificate of the ICC. A further embodiment relates to a mobile platform, comprising a non-transitory computer readable storage medium storing a digital signature, and a processor receiving a digital certificate from an integrated circuit card (“ICC”) via close-proximity radio communication between the ICC and the mobile platform, verifying the digital certificate with the digital signature, and booting the mobile platform upon verification of the digital certificate of the ICC. | 1. A method, comprising:
receiving, by a mobile platform, a digital certificate from an integrated circuit card (“ICC”) via close-proximity radio communication; verifying the digital certificate with a digital signature stored on the mobile platform; and booting the mobile platform upon verification of the digital certificate of the ICC. 2. The method of claim 1, further including:
disabling the mobile platform when the digital signature fails to verify the digital certificate. 3. The method of claim 1, further including:
establishing a secure communication channel between the mobile platform and a network. 4. The method of claim 1, further including:
obtaining measured boot values of the mobile platform; and providing the measured boot values to a mobile device management (“MDM”) server. 5. The method of claim 4, further including:
verifying a validity of the measured boot values at the MDM server. 6. The method of claim 1, further including:
receiving a device-based user security credential via a user interface (“UI”) on the mobile platform; and verifying the device-based user security credential. 7. The method of claim 6, wherein the device-based user security credential includes at least one of a personal identification number (“PIN”), a password, a swipe pattern, a motion pattern, voice recognition and facial recognition. 8. A non-transitory computer readable storage medium including a set of instructions executable by a processor, the set of instructions, when executed, resulting in a performance of the following:
receive a digital certificate from an integrated circuit card (“ICC”) via close-proximity radio communication; verify the digital certificate with a digital signature stored on a mobile platform; and boot the mobile platform upon verification of the digital certificate of the ICC. 9. The non-transitory computer readable storage medium of claim 8, wherein the execution of the set of instructions further results in the performance of the following:
disable the mobile platform when the digital signature fails to verify the digital certificate. 10. The non-transitory computer readable storage medium of claim 8, wherein the execution of the set of instructions further results in the performance of the following:
establish a secure communication channel between the mobile platform and a network. 11. The non-transitory computer readable storage medium of claim 8, wherein the execution of the set of instructions further results in the performance of the following:
obtain measured boot values of the mobile platform; and provide the measured boot values to a mobile device management (“MDM”) server. 12. The non-transitory computer readable storage medium of claim 11, wherein the execution of the set of instructions further results in the performance of the following:
verify a validity of the measured boot values at the MDM server. 13. The non-transitory computer readable storage medium of claim 8, wherein the execution of the set of instructions further results in the performance of the following:
receive a device-based user security credential via a user interface (“UI”) on the mobile platform; and verify the device-based user security credential. 14. The non-transitory computer readable storage medium of claim 13, wherein the device-based user security credential includes at least one of a personal identification number (“PIN”), a password, a swipe pattern, a motion pattern, voice recognition and facial recognition. 15. A mobile platform, comprising:
a non-transitory computer readable storage medium storing a digital signature; and a processor receiving a digital certificate from an integrated circuit card (“ICC”) via close-proximity radio communication between the ICC and the mobile platform, verifying the digital certificate with the digital signature, booting the mobile platform upon verification of the digital certificate of the ICC. 16. The system of claim 15, wherein the processor disables the mobile platform when the digital signature fails to verify the digital certificate. 17. The system of claim 15, wherein the processor establishes a secure communication channel between the mobile platform and a network. 18. The system of claim 15, wherein the processor obtains measured boot values of the mobile platform and provides the measured boot values to a mobile device management (“MDM”) server. 19. The system of claim 15, wherein the processor receives a device-based user security credential via a user interface (“UI”) on the mobile platform and verifies the device-based user security credential. 20. The system of claim 19, wherein the device-based user security credential includes at least one of a personal identification number (“PIN”), a password, a swipe pattern, a motion pattern, voice recognition and facial recognition.
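The boot logic of claims 1 and 2 (boot upon verification, disable on failure) can be sketched as below. This is an illustrative stand-in only: the certificate check is modeled with an HMAC comparison, whereas a real platform would verify an X.509 certificate chain received over NFC against a provisioned public key. The key and certificate bytes are invented.

```python
# Illustrative sketch of the claimed boot decision. The "digital
# signature stored on the mobile platform" is stood in by an HMAC key;
# real implementations would use asymmetric certificate verification.
import hmac
import hashlib

STORED_KEY = b"platform-provisioned-key"  # hypothetical stored verification secret

def verify_certificate(cert: bytes, signature: bytes) -> bool:
    expected = hmac.new(STORED_KEY, cert, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def boot_decision(cert: bytes, signature: bytes) -> str:
    # Claim 1: boot upon verification; claim 2: disable when verification fails.
    return "boot" if verify_certificate(cert, signature) else "disable"

cert = b"ICC-certificate-received-over-close-proximity-radio"
good_sig = hmac.new(STORED_KEY, cert, hashlib.sha256).digest()
assert boot_decision(cert, good_sig) == "boot"
assert boot_decision(cert, b"\x00" * 32) == "disable"
```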
Record 10,704 | Application No. 16,098,190 | Art Unit 2672

Examples of an apparatus and method for use with a printing system are described herein. A correction to be applied to a printing system during print calibration is obtained. A distortion is applied to the correction. A relationship between an expected and measured output of the printing system is determined based on a print performed at least in part on the distorted correction. The printing system is calibrated on the basis of the determined relationship.

1. A method for use with a printing system, the method comprising:
obtaining a correction to be applied to a printing system during print calibration, the correction having been determined on the basis of a first expected output and a first measured output of the printing system; applying a distortion to the correction; causing a print on the printing system based at least in part on the distorted correction; determining a relationship between a second expected output and a second measured output of the printing system based on the print performed at least in part on the distorted correction; and calibrating the printing system on the basis of the determined relationship. 2. The method of claim 1 comprising:
obtaining a subsequent correction to be applied on the basis of an expected output and a measured output of the printing system after the calibration. 3. The method according to claim 2, comprising performing successive iterations of the steps of claim 1 until an output of the printing system synchronizes to an expected output. 4. The method according to claim 1, wherein calibrating the printing system comprises instructing a modification of one or more parameters of the printing system. 5. The method according to claim 1 wherein the correction is applied in relation to one or more of color uniformity, solid uniformity and color plane registration. 6. The method according to claim 1 wherein
the printing system is a laser printing system comprising one or more lasers and an encoder, and
the relationship between the second expected output and second measured output is determined from the expected position and measured position of ink deposited on a print substrate. 7. The method according to claim 6 wherein applying the distortion to the correction comprises varying application of the one or more lasers over a sub-region of the print substrate. 8. The method of claim 7, wherein varying application of the one or more lasers comprises varying the power of the one or more lasers to generate a printed output in the sub-region which is detectable using image processing. 9. The method according to claim 6 wherein applying the distortion to the correction comprises varying application of a developer voltage to the encoder on the basis of the encoder position. 10. An apparatus comprising:
a printing device; a measurement device arranged to perform measurements on a printed output of the printing device; a print controller communicatively coupled to the printing device and measurement device arranged to:
receive a first measurement from the measurement device;
determine a correction to be applied to the printing device on the basis of a first expected output of the printing device and the first measurement;
apply a distortion to the correction;
instruct the printing device to perform a print based at least in part on the distorted correction;
calculate a relationship between a second expected output and a second measured output of the printing device and the measurement device based on the print performed at least in part on the distorted correction; and
calibrate the printing device on the basis of the calculated relationship. 11. The apparatus according to claim 10 wherein the print controller is arranged to determine a subsequent correction to be applied on the basis of an expected output and a measured output of the print device after the calibration. 12. The apparatus according to claim 11 wherein the print controller is arranged to perform successive iterations of the calibration process of claim 10 until the output of the print device synchronizes to the expected output of the print device. 13. The apparatus of claim 10 wherein the print device is a laser printing device comprising at least one laser and an encoder and wherein a calibration of the print device is a calibration of the at least one laser and/or the encoder. 14. The apparatus of claim 10 wherein the measurement device comprises one of an electrometer, a spectrophotometer, an image capturing device and an inline scanner. 15. A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to:
obtain a correction to be applied to a printing system during print calibration, the correction having been determined on the basis of a first expected output and a first measured output of the printing system; apply a distortion to the correction; cause a print on the printing system based at least in part on the distorted correction; determine a relationship between an expected position and measured position of print agent deposited on a print substrate from a second expected output and a second measured output of the printing system based on the print performed at least in part on the distorted correction; and calibrate the printing system on the basis of the determined relationship.
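The iterative loop in claims 1-3 (apply a known distortion to the current correction, print, compare expected against measured output, recalibrate, repeat until synchronized) can be sketched numerically. The printer model, gains, and offsets below are invented toy values, not anything specified by the application.

```python
# Minimal numeric sketch of the claimed calibration loop: a known
# distortion is deliberately added to the current correction, a "print"
# is simulated, and the expected/measured relationship from that
# distorted print drives the next correction. All values are invented.

TRUE_OFFSET = 0.8  # unknown printer error the loop must compensate for

def printed_output(target, correction):
    return target + TRUE_OFFSET + correction   # toy printer model

def calibrate(target=10.0, iterations=5):
    correction = 0.0
    for _ in range(iterations):
        distortion = 0.1                       # known, deliberately applied
        measured = printed_output(target, correction + distortion)
        # Relationship between expected and measured, net of the known distortion:
        error = measured - target - distortion
        correction -= error                    # calibrate on that relationship
    return correction

corr = calibrate()
assert abs(printed_output(10.0, corr) - 10.0) < 1e-6  # output synchronized
```

In the claimed system the distortion additionally makes the printed sub-region detectable by image processing (claim 8); this sketch only shows the feedback arithmetic.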
Record 10,705 | Application No. 13,841,443 | Art Unit 2613

Methods, apparatuses, computer program products, devices and systems are described that carry out presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query;
receiving response data relating to the location history query from the data source; and presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. | 1. A system comprising:
circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query; circuitry for receiving response data relating to the location history query from the data source; and circuitry for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. 2. The system of claim 1 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query comprises:
circuitry for presenting a location history query that includes at least one of a current geographic location of an augmented reality device, a current geographic location of a user of the augmented reality device, a geographic location history of the augmented reality device, or a geographic location history of the user of the augmented reality device. 3. The system of claim 1 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query comprises:
circuitry for presenting a location history query to a data source, wherein the location history query relates at least in part to at least one of an augmented reality device or a user of the augmented reality device. 4. The system of claim 1 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query comprises:
circuitry for presenting a location history query to a data source, wherein the data source includes field-of-view data about one or more video cameras. 5. The system of claim 1 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query comprises:
circuitry for presenting a location history query to a data source, wherein the data source includes time-of-use data about one or more video cameras. 6. The system of claim 1 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query comprises:
circuitry for presenting a location history query to a data source, wherein the data source includes eye-tracking data relating to one or more individuals. 7. The system of claim 6 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes eye-tracking data relating to one or more individuals comprises:
circuitry for presenting a location history query to a data source, wherein the data source includes eye-tracking data relating to one or more individuals, including at least one of dwell times for at least one object or location, saccade times, or closed eyelid times relating to the one or more individuals. 8. The system of claim 1 wherein the circuitry for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query comprises:
circuitry for presenting a location history query to a data source, wherein the circuitry for presenting a location history query and the data source are present on a single augmented reality device. 9. The system of claim 1 wherein the circuitry for receiving response data relating to the location history query from the data source comprises:
circuitry for receiving response data including data relating to at least one fixed recording device having a specified field of view within a twenty-five meter radius of an augmented reality device of the location history query at a first time period, a first mobile recording device having a variable field of view within a five meter radius of a user of the augmented reality device during a second time period, and a second mobile recording device having a variable field of view within a five meter radius of a user of the augmented reality device during the second time period. 10. The system of claim 1 wherein the circuitry for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device comprises:
circuitry for presenting an auditory or visual augmented reality representation on an augmented reality device of a user, wherein the representation indicates that at least one individual or camera of the scene is presently looking at the user of the augmented reality device. 11. The system of claim 1 wherein the circuitry for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device comprises:
circuitry for presenting an auditory or visual augmented reality representation on an augmented reality device of a user, wherein the representation indicates that the user of the augmented reality device is currently visible to one or more recording devices or individuals. 12. The system of claim 1 wherein the circuitry for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device comprises:
circuitry for presenting an auditory or visual augmented reality representation on an augmented reality device of a user, wherein the representation indicates that the user of the augmented reality device was visible to one or more recording devices or individuals during a previous time period. 13. The system of claim 1 wherein the circuitry for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device comprises:
circuitry for presenting an auditory or visual augmented reality representation on an augmented reality device of a user, wherein the representation indicates that the user of the augmented reality device may be visible to one or more recording devices or individuals during a future time period. 14. The system of claim 1 wherein the circuitry for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device comprises:
circuitry for presenting an augmented reality representation in association with at least one affordance by which a user may filter the response data. 15. The system of claim 14 wherein the circuitry for presenting an augmented reality representation in association with at least one affordance by which a user may filter the response data comprises:
circuitry for presenting an augmented reality representation in association with at least one slider bar by which a user may filter the response data according to a number of minutes of direct observation by an individual of the user or the user's augmented reality device based on eye tracking data or other image data. 16. A computer-implemented method comprising:
presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query; receiving response data relating to the location history query from the data source; and presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. 17.-30. (canceled) 31. A computer program product comprising:
an article of manufacture including a signal-bearing medium bearing: (1) one or more instructions for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query; (2) one or more instructions for receiving response data relating to the location history query from the data source; and (3) one or more instructions for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. 32.-34. (canceled) 35. A system comprising:
a computing device; and instructions that when executed on the computing device cause the computing device to (1) present a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query; (2) receive response data relating to the location history query from the data source; and (3) present an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. 36. (canceled) 37. A system comprising:
means for presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query; means for receiving response data relating to the location history query from the data source; and means for presenting an augmented reality representation of a scene at least partly based on the response data relating to the location history query, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. 38. A system comprising:
accepting location history data relating to a location history query, wherein the location history data include at least one of data from a fixed recording device within a defined radius of a component of the location history query, data from a mobile recording device within a defined radius of a component of the location history query, or data relating to an individual present within a defined radius of a component of the location history query; and presenting an augmented reality representation of a scene at least partly based on the location history data, wherein the augmented reality representation includes at least one of observation information about at least one element of the scene, or visibility information about at least one of an augmented reality device or a user of a device. | 2,600 |
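The method of claim 16 and the systems of claims 35-38 above can be sketched in a few lines: present a location history query (here reduced to a point and a radius), receive response data (recording devices in range), and present visibility information for an AR overlay. Everything in this sketch — the `RecordingDevice` type, the 10-meter radius, and the message strings — is an illustrative assumption, not language from the claims.

```python
import math
from dataclasses import dataclass

@dataclass
class RecordingDevice:
    device_id: str
    x: float          # position in meters
    y: float
    fixed: bool       # True for a fixed camera, False for a mobile device

def location_history_query(devices, qx, qy, radius):
    """Response data: devices within `radius` meters of the query point."""
    return [d for d in devices if math.hypot(d.x - qx, d.y - qy) <= radius]

def visibility_info(hits):
    """Visibility text an AR representation could present to the user."""
    if not hits:
        return "not currently visible to any known recording device"
    fixed = sum(d.fixed for d in hits)
    return f"visible to {fixed} fixed and {len(hits) - fixed} mobile recording device(s)"

devices = [
    RecordingDevice("cam-1", 3.0, 4.0, True),      # 5 m from the origin
    RecordingDevice("phone-1", 20.0, 0.0, False),  # 20 m from the origin
]
hits = location_history_query(devices, 0.0, 0.0, 10.0)
print(visibility_info(hits))  # -> visible to 1 fixed and 0 mobile recording device(s)
```

A fuller implementation would also carry field-of-view, time-of-use, and eye-tracking data per device, as claims 4-7 recite, but the query-then-present shape stays the same.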
10,706 | 10,706 | 15,964,687 | 2,613 | A mechanism is provided for implementing an augmented reality display via a head mounted display (HMD) system that indicates areas of a patient's body corresponding to a medical condition and/or treatment of the patient overlaid on the actual view of the patient. A real-time image of an area of a patient's body being viewed by a medical professional is captured via the HMD system. One or more body parts of the patient are identified within the real-time image. The one or more identified body parts are correlated with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient. An augmented reality display is then generated in the HMD system of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body. | 1. A method, in a data processing system comprising at least one processor and at least one memory, the at least one memory comprising instructions executed by the at least one processor to cause the at least one processor to implement a cognitive healthcare system, wherein the cognitive healthcare system operates to:
capturing, by a capturing mechanism of the cognitive healthcare system, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identifying, by the cognitive healthcare system, one or more body parts of the patient within the real-time image; correlating, by the cognitive healthcare system, the one or more identified body parts with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient; and generating, by the cognitive healthcare system, an augmented reality display, in the HMD system, of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the cognitive healthcare system:
accesses a schedule of the medical professional through a medical professional corpus or corpora of data;
determines an amount of time the medical professional has to spend with the patient; and
displays the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 2. The method of claim 1, wherein the patient's electronic medical records (EMRs) are correlated to the patient by the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient. 3. The method of claim 1, wherein the patient's electronic medical records (EMRs) are correlated to the patient by the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient. 4. The method of claim 1, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 5. The method of claim 1, wherein the augmented reality display further highlights portions of the patient's body affecting or needing to be further investigated with regard to the medical condition. 6. The method of claim 1, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 7. The method of claim 1, wherein the cognitive healthcare system further:
captures a facial expression of the patient; captures one or more audible utterances of the patient; identifies a mood of the patient using the captured facial expression and the one or more audible utterances; and displays via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the identified mood. 8. The method of claim 1, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 9. The method of claim 1, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. 10. (canceled) 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:
capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient; and generate an augmented reality display, in the HMD system, of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the computer readable program causes the computing device to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 12. The computer program product of claim 11, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 13. The computer program product of claim 11, wherein the augmented reality display further highlights portions of the patient's body affecting or needing to be further investigated with regard to the medical condition. 14. The computer program product of claim 11, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 15. The computer program product of claim 11, wherein the computer readable program further causes the computing device to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the identified mood. 16. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient; and generate an augmented reality display, in the HMD system, of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the instructions cause the processor to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 17. The apparatus of claim 16, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 18. The apparatus of claim 16, wherein the augmented reality display further highlights portions of the patient's body affecting or needing to be further investigated with regard to the medical condition. 19. The apparatus of claim 16, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 20. The apparatus of claim 16, wherein the instructions further cause the processor to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the identified mood.
captures a facial expression of the patient; captures one or more audible utterances of the patient; identifies a mood of the patient using the captured facial expression and the one or more audible utterances; and displays via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the identified mood. 8. The method of claim 1, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 9. The method of claim 1, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. 10. (canceled) 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:
capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient; and generate an augmented reality display, in the HMD system, of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the computer readable program causes the computing device to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 12. The computer program product of claim 11, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 13. The computer program product of claim 11, wherein the augmented reality display further highlights portions of the patients' body affecting or needing to be further investigated with regard to the medical condition. 14. The computer program product of claim 11, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 15. The computer program product of claim 11, wherein the computer readable program further causes the computing device to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the identified mood. 16. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient; and generate an augmented reality display, in the HMD system, of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the instructions cause the processor to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 17. The apparatus of claim 16, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 18. The apparatus of claim 16, wherein the augmented reality display further highlights portions of the patients' body affecting or needing to be further investigated with regard to the medical condition. 19. The apparatus of claim 16, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 20. The apparatus of claim 16, wherein the instructions further cause the processor to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the identified mood. | 2,600 |
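The schedule-driven overlay logic recited in claims 1, 11, and 16 (access the medical professional's schedule, determine the time available with the patient, and display a commensurate level of information) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function names, appointment structure, and time thresholds are all assumptions.

```python
# Sketch of the schedule-based level-of-detail selection described in the
# claims: the amount of detail in the augmented reality overlay is chosen
# from the time remaining in the medical professional's current appointment.
# All names and thresholds below are illustrative assumptions.

from datetime import datetime, timedelta

def minutes_with_patient(appointments, patient_id, now):
    """Return minutes remaining in the current appointment for patient_id."""
    for appt in appointments:
        if appt["patient_id"] == patient_id and appt["start"] <= now < appt["end"]:
            return (appt["end"] - now) / timedelta(minutes=1)
    return 0.0

def display_level(minutes):
    """Map available time to a coarse level of overlay detail."""
    if minutes >= 30:
        return "full"      # dissection models, prior scans, research studies
    if minutes >= 10:
        return "standard"  # current scans plus highlighted regions
    return "summary"       # basic organ model and key lab results only
```

A usage example: with a 9:00-9:45 appointment and the current time 9:05, forty minutes remain and the "full" level would be selected.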
10,707 | 10,707 | 15,189,897 | 2,625 | A display system of an aircraft, able to display a localization marking of a zone of location of an approach light ramp and related method are provided. The display system includes a display unit and an assembly generating a display on the display unit. The display generator is able to display, on approach to a landing strip, a localization marking of a presence zone of an approach light ramp toward the landing strip. | 1. A display system of an aircraft, comprising:
a display unit; and a display generator for generating a display on the display unit, the display generator being configured to display, on approach to a landing strip, a localization marking of a presence zone of an approach light ramp toward the landing strip. 2. The system according to claim 1 wherein the localization marking includes at least one lateral localization symbol of the presence zone of the approach light ramp. 3. The system according to claim 2 wherein the localization marking includes at least two opposite lateral localization symbols of the presence zone of the approach light ramp, delimiting the presence zone of the approach light ramp to the left and right. 4. The system according to claim 2 wherein the localization marking includes at least one series of lateral localization symbols on one side of the presence zone of the approach light ramp, the series of lateral localization symbols converging toward a longitudinal axis of the runway. 5. The system according to claim 4 wherein the localization marking includes two series of lateral localization symbols respectively delimiting the left side and right side of the presence zone of the approach light ramp. 6. The system according to claim 1 wherein the localization marking includes at least one symbol identifying, on the display unit, a position corresponding to a predetermined distance on the ground from the runway threshold at which the approach light ramp comprises a transverse row of lights. 7. The system according to claim 6 wherein the predetermined distance on the ground is comprised between 275 meters and 335 meters. 8. 
The system according to claim 6 wherein the localization marking comprises two opposite symbols situated in the position corresponding to a predetermined distance on the ground from the runway threshold at which the approach light ramp comprises a transverse row of lights, the lateral separation between the opposite symbols corresponding to a distance on the ground strictly greater than a width of the transverse row. 9. The system according to claim 1 wherein the display generator is configured to display a position marking of the landing strip, situated above the localization marking of the presence zone of the approach light ramp toward the landing strip. 10. The system according to claim 1, wherein the display generator is configured to display a runway axis symbol, situated below the localization marking of the presence zone of an approach light ramp toward the landing strip. 11. The system according to claim 1 further comprising a database of approach ramps, containing at least one piece of information characteristic of each approach ramp associated with a landing strip targeted by the aircraft, the display generator being configured to create and display the localization marking by using the at least one piece of information characteristic of the approach ramp approached by the aircraft contained in the database of approach ramps. 12. The system according to claim 1 wherein the display generator is configured to dynamically display, on the display unit, at least one horizon line and a slope scale relative to the horizon line. 13. The system according to claim 1 wherein the display generator is configured to display, before the display of the localization marking of the presence zone of an approach light ramp toward the landing strip, an identification symbol of the landing strip, pointing to a position of the landing strip on the display unit. 14. 
The system according to claim 1 wherein the display unit is an at least partially transparent display unit, a projector for projecting images on the windshield of the cockpit, a semitransparent sunshade, a helmet visor, or a semitransparent glass close to the eye. 15. The system according to claim 14 wherein the at least partially transparent display unit is a semitransparent screen placed in front of a windshield of the cockpit. 16. A display method in an aircraft comprising:
providing the system according to claim 1; upon approaching the landing strip, displaying, via the display generator, the localization marking of the presence zone of the approach light ramp toward the landing strip. | A display system of an aircraft, able to display a localization marking of a zone of location of an approach light ramp and related method are provided. The display system includes a display unit and an assembly generating a display on the display unit. The display generator is able to display, on approach to a landing strip, a localization marking of a presence zone of an approach light ramp toward the landing strip.1. A display system of an aircraft, comprising:
a display unit; and a display generator for generating a display on the display unit, the display generator being configured to display, on approach to a landing strip, a localization marking of a presence zone of an approach light ramp toward the landing strip. 2. The system according to claim 1 wherein the localization marking includes at least one lateral localization symbol of the presence zone of the approach light ramp. 3. The system according to claim 2 wherein the localization marking includes at least two opposite lateral localization symbols of the presence zone of the approach light ramp, delimiting the presence zone of the approach light ramp to the left and right. 4. The system according to claim 2 wherein the localization marking includes at least one series of lateral localization symbols on one side of the presence zone of the approach light ramp, the series of lateral localization symbols converging toward a longitudinal axis of the runway. 5. The system according to claim 4 wherein the localization marking includes two series of lateral localization symbols respectively delimiting the left side and right side of the presence zone of the approach light ramp. 6. The system according to claim 1 wherein the localization marking includes at least one symbol identifying, on the display unit, a position corresponding to a predetermined distance on the ground from the runway threshold at which the approach light ramp comprises a transverse row of lights. 7. The system according to claim 6 wherein the predetermined distance on the ground is comprised between 275 meters and 335 meters. 8. 
The system according to claim 6 wherein the localization marking comprises two opposite symbols situated in the position corresponding to a predetermined distance on the ground from the runway threshold at which the approach light ramp comprises a transverse row of lights, the lateral separation between the opposite symbols corresponding to a distance on the ground strictly greater than a width of the transverse row. 9. The system according to claim 1 wherein the display generator is configured to display a position marking of the landing strip, situated above the localization marking of the presence zone of the approach light ramp toward the landing strip. 10. The system according to claim 1, wherein the display generator is configured to display a runway axis symbol, situated below the localization marking of the presence zone of an approach light ramp toward the landing strip. 11. The system according to claim 1 further comprising a database of approach ramps, containing at least one piece of information characteristic of each approach ramp associated with a landing strip targeted by the aircraft, the display generator being configured to create and display the localization marking by using the at least one piece of information characteristic of the approach ramp approached by the aircraft contained in the database of approach ramps. 12. The system according to claim 1 wherein the display generator is configured to dynamically display, on the display unit, at least one horizon line and a slope scale relative to the horizon line. 13. The system according to claim 1 wherein the display generator is configured to display, before the display of the localization marking of the presence zone of an approach light ramp toward the landing strip, an identification symbol of the landing strip, pointing to a position of the landing strip on the display unit. 14. 
The system according to claim 1 wherein the display unit is an at least partially transparent display unit, a projector for projecting images on the windshield of the cockpit, a semitransparent sunshade, a helmet visor, or a semitransparent glass close to the eye. 15. The system according to claim 14 wherein the at least partially transparent display unit is a semitransparent screen placed in front of a windshield of the cockpit. 16. A display method in an aircraft comprising:
providing the system according to claim 1; upon approaching the landing strip, displaying, via the display generator, the localization marking of the presence zone of the approach light ramp toward the landing strip. | 2,600 |
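Claims 6-8 place two opposite symbols at a predetermined ground distance from the runway threshold (between 275 m and 335 m per claim 7), with a lateral separation strictly greater than the width of the ramp's transverse row of lights. A minimal sketch of how those symbol positions might be parameterized, with all names and the margin value being illustrative assumptions rather than the patent's method:

```python
# Hypothetical parameterization of the transverse-row localization symbols
# of claims 6-8. Positions are (along_track_m, cross_track_m) relative to
# the runway threshold, with the runway's longitudinal axis at cross-track 0.

def transverse_row_symbols(distance_m, row_width_m, margin_m=5.0):
    """Return (left, right) ground positions of the two opposite symbols.

    The lateral separation between the symbols is strictly greater than
    row_width_m, as claim 8 requires.
    """
    if not 275.0 <= distance_m <= 335.0:
        raise ValueError("distance must lie between 275 m and 335 m")
    half = row_width_m / 2.0 + margin_m  # strictly wider than the row
    return (distance_m, -half), (distance_m, half)
```

For a 30 m wide transverse row placed 300 m from the threshold, the symbols land 40 m apart, satisfying the strictly-greater condition.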
10,708 | 10,708 | 14,061,410 | 2,653 | A hearing prosthesis system, including a sound capture device configured to capture a sound and generate a signal based on the captured sound, and a vibratory portion configured to vibrate in response to the signal to evoke a hearing percept via bone conduction, wherein the system is configured to capture the sound on a first side of a recipient where the sound capture device is located and transfer the signal to a second side of the recipient where the vibratory portion is located. | 1. A hearing prosthesis system, comprising:
a sound capture device configured to capture a sound and generate a signal based on the captured sound; and a vibratory portion configured to vibrate in response to the signal to evoke a hearing percept via bone conduction, wherein the system is configured to capture the sound on a first side of a recipient where the sound capture device is located and transfer the signal to a second side of the recipient where the vibratory portion is located. 2. The hearing prosthesis system of claim 1, wherein:
the first side is one of a right side or a left side of a recipient's head; and the second side is the other of a right side or a left side of a recipient's head. 3. The hearing prosthesis system of claim 1, wherein:
the system includes a behind-the-ear device; and the vibratory portion is part of the behind-the-ear device. 4. The hearing prosthesis system of claim 1, wherein:
the behind-the-ear device includes an adhesive configured to adhere to skin of a recipient. 5. The hearing prosthesis system of claim 1, wherein:
the system is a totally external hearing prosthesis system. 6. The hearing prosthesis system of claim 1, wherein:
the system includes a percutaneous bone conduction device; and the vibratory portion is part of the percutaneous bone conduction device. 7. The hearing prosthesis system of claim 1, wherein:
the system includes a passive transcutaneous bone conduction device; and the vibratory portion is part of the passive transcutaneous bone conduction device. 8. The hearing prosthesis system of claim 1, wherein:
the system includes an active transcutaneous bone conduction device; and the vibratory portion is part of the active transcutaneous bone conduction device. 9. The hearing prosthesis system of claim 1, further comprising:
a speaker portion configured to evoke a hearing percept via an acoustic pressure wave based on the signal. 10. The hearing prosthesis system of claim 9, wherein:
a speaker portion is located in a non-in-the-ear component of the hearing prosthesis. 12. A method, comprising:
capturing sound at a first side of a recipient; and evoking a hearing percept via bone conduction with energy originating on an opposite side of the recipient based on the captured sound. 13. The method of claim 12, further comprising:
evoking a hearing percept via acoustic conduction on the opposite side of the recipient based on the captured sound. 14. The method of claim 12, further comprising:
evoking a hearing percept via vibrations travelling through at least one of skin and cartilage to the tympanic membrane on the opposite side of the recipient based on the captured sound. 15. The method of claim 12, wherein:
the evoked hearing percept via bone conduction is evoked utilizing a vibrator; and the evoked hearing percept via acoustic conduction results from the vibrator. 16. The method of claim 12, wherein:
the hearing percept evoked via bone conduction is a high-frequency hearing percept. 17. The method of claim 13, wherein:
the hearing percept evoked via acoustic conduction is evoked without any prosthetic component in the outer ear canal of the opposite side of the recipient. 18. The method of claim 12, further comprising:
evoking a hearing percept by vibrating the tympanic membrane on the side of the recipient having an at least partially functioning cochlea. 26. A behind-the-ear device, comprising:
a vibratory portion configured to vibrate in response to an audio signal to evoke a hearing percept via bone conduction; and a speaker portion configured to evoke a hearing percept via an acoustic pressure wave, wherein the behind-the-ear device is a totally external device. 27. The behind-the-ear device of claim 26, wherein:
the speaker portion is located on a non-in-the-ear component of the device. 28. The behind-the-ear device of claim 26, wherein:
the speaker portion is located on a temple mount of the behind-the-ear device. | A hearing prosthesis system, including a sound capture device configured to capture a sound and generate a signal based on the captured sound, and a vibratory portion configured to vibrate in response to the signal to evoke a hearing percept via bone conduction, wherein the system is configured to capture the sound on a first side of a recipient where the sound capture device is located and transfer the signal to a second side of the recipient where the vibratory portion is located.1. A hearing prosthesis system, comprising:
a sound capture device configured to capture a sound and generate a signal based on the captured sound; and a vibratory portion configured to vibrate in response to the signal to evoke a hearing percept via bone conduction, wherein the system is configured to capture the sound on a first side of a recipient where the sound capture device is located and transfer the signal to a second side of the recipient where the vibratory portion is located. 2. The hearing prosthesis system of claim 1, wherein:
the first side is one of a right side or a left side of a recipient's head; and the second side is the other of a right side or a left side of a recipient's head. 3. The hearing prosthesis system of claim 1, wherein:
the system includes a behind-the-ear device; and the vibratory portion is part of the behind-the-ear device. 4. The hearing prosthesis system of claim 1, wherein:
the behind-the-ear device includes an adhesive configured to adhere to skin of a recipient. 5. The hearing prosthesis system of claim 1, wherein:
the system is a totally external hearing prosthesis system. 6. The hearing prosthesis system of claim 1, wherein:
the system includes a percutaneous bone conduction device; and the vibratory portion is part of the percutaneous bone conduction device. 7. The hearing prosthesis system of claim 1, wherein:
the system includes a passive transcutaneous bone conduction device; and the vibratory portion is part of the passive transcutaneous bone conduction device. 8. The hearing prosthesis system of claim 1, wherein:
the system includes an active transcutaneous bone conduction device; and the vibratory portion is part of the active transcutaneous bone conduction device. 9. The hearing prosthesis system of claim 1, further comprising:
a speaker portion configured to evoke a hearing percept via an acoustic pressure wave based on the signal. 10. The hearing prosthesis system of claim 9, wherein:
a speaker portion is located in a non-in-the-ear component of the hearing prosthesis. 12. A method, comprising:
capturing sound at a first side of a recipient; and evoking a hearing percept via bone conduction with energy originating on an opposite side of the recipient based on the captured sound. 13. The method of claim 12, further comprising:
evoking a hearing percept via acoustic conduction on the opposite side of the recipient based on the captured sound. 14. The method of claim 12, further comprising:
evoking a hearing percept via vibrations travelling through at least one of skin and cartilage to the tympanic membrane on the opposite side of the recipient based on the captured sound. 15. The method of claim 12, wherein:
the evoked hearing percept via bone conduction is evoked utilizing a vibrator; and the evoked hearing percept via acoustic conduction results from the vibrator. 16. The method of claim 12, wherein:
the hearing percept evoked via bone conduction is a high-frequency hearing percept. 17. The method of claim 13, wherein:
the hearing percept evoked via acoustic conduction is evoked without any prosthetic component in the outer ear canal of the opposite side of the recipient. 18. The method of claim 12, further comprising:
evoking a hearing percept by vibrating the tympanic membrane on the side of the recipient having an at least partially functioning cochlea. 26. A behind-the-ear device, comprising:
a vibratory portion configured to vibrate in response to an audio signal to evoke a hearing percept via bone conduction; and a speaker portion configured to evoke a hearing percept via an acoustic pressure wave, wherein the behind-the-ear device is a totally external device. 27. The behind-the-ear device of claim 26, wherein:
the speaker portion is located on a non-in-the-ear component of the device. 28. The behind-the-ear device of claim 26, wherein:
the speaker portion is located on a temple mount of the behind-the-ear device. | 2,600 |
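The contralateral routing in claims 1 and 12 (sound captured on one side of the head, bone conduction percept evoked on the opposite side) can be sketched in signal-flow terms. This is a sketch under assumptions, not the patent's signal chain; the first-difference filter is only a placeholder for the processing that would emphasize the high-frequency percept of claim 16.

```python
# Illustrative sketch of contralateral routing: samples captured on one
# side drive the vibratory portion on the opposite side. The first-order
# difference acts as a crude high-pass placeholder, not the real chain.

def route_contralateral(samples, capture_side):
    """Return (vibrator_side, processed_samples) for the opposite side."""
    vibrator_side = "left" if capture_side == "right" else "right"
    # First-order difference: passes rapid changes, attenuates slow ones.
    processed = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    return vibrator_side, processed
```

For example, sound captured on the right side is routed to a left-side vibrator, with each output sample (after the first) being the difference of consecutive inputs.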
10,709 | 10,709 | 15,620,462 | 2,647 | The present disclosure is directed toward systems and methods for managing a digital survey over voice-capable devices. In particular, the systems and methods described herein create a digital survey question from a verbal input. Additionally, the systems and methods described herein provide the digital survey question to respondents by way of voice-capable devices. The systems and methods also receive verbal survey responses, generate survey results from the verbal responses, and provide the survey results to a survey administrator. | 1. A method comprising:
receiving, from a client device associated with a survey administrator, a digital survey question corresponding to a digital survey to be administered to at least one respondent; analyzing, by at least one processor, the digital survey question to generate a text-based natural language survey question corresponding to the digital survey question; distributing the text-based natural language survey question to a respondent by sending the text-based natural language survey question to a voice transcription service, wherein sending the text-based natural language survey question to the voice transcription service causes the voice transcription service to convert the text-based natural language survey question into an audio survey question and provide the audio survey question to a voice-capable device associated with the respondent; receiving, from the voice transcription service, a transcription of a verbal response of the respondent; and analyzing, by the at least one processor, the transcription of the verbal response to generate a survey result for the digital survey question. 2. The method of claim 1, wherein receiving the transcription of the verbal response of the respondent from the voice transcription service is based on the voice transcription service transcribing the verbal response received from the voice-capable device associated with the respondent. 3. The method of claim 1, further comprising identifying one or more survey question attributes associated with the digital survey question, wherein the one or more survey question attributes comprise one or more of a target audience, a question type, a question identifier, an answer format, or a survey identifier. 4. The method of claim 3, further comprising analyzing the one or more survey question attributes to identify, from a plurality of natural language phrases, at least one natural language phrase that corresponds to the one or more survey question attributes. 5. 
The method of claim 3, further comprising identifying a target audience based on the one or more survey question attributes, wherein the respondent is a member of the target audience. 6. The method of claim 1, wherein the voice transcription service is a third-party voice transcription service. 7. The method of claim 1, wherein analyzing the transcription of the verbal response comprises implementing a natural language processing technique to identify a key phrase. 8. The method of claim 7, further comprising correlating the key phrase with a particular survey result from among a plurality of possible survey results corresponding to the digital survey question, wherein each of the plurality of possible survey results correspond to different key phrases. 9. A system comprising:
at least one processor; and a non-transitory storage medium comprising instructions thereon that, when executed by the at least one processor, cause the server device to: access a digital survey question corresponding to a digital survey to be administered to at least one respondent; analyze the digital survey question to generate a text-based natural language survey question corresponding to the digital survey question; send the text-based natural language survey question to a voice transcription service to cause the voice transcription service to convert the text-based natural language survey question into an audio survey question and provide the audio survey question to a voice-capable device associated with a respondent; receive, from the voice transcription service, a transcription of a verbal response of the respondent; and analyze the transcription of the verbal response to generate a survey result for the digital survey question. 10. The system of claim 9, further comprising instructions that, when executed by the at least one processor, cause the server device to:
store the survey result in a digital survey database; and upon receiving a request from a client device associated with a survey administrator, provide the survey result to the client device for presentation to the survey administrator. 11. The system of claim 10, wherein analyzing the transcription of the verbal response to generate a survey result comprises:
comparing a key word within the transcription of the verbal response to a set of potential survey results; and identifying that the key word matches a potential survey result within the set of potential survey results to generate the survey result for the digital survey question. 12. A method comprising:
receiving, from a voice transcription service, a transcription of a verbal survey question captured by a voice-capable device associated with a survey administrator; analyzing, by at least one processor, the transcription of the verbal survey question to identify speech elements within the transcription of the verbal survey question; based on the speech elements, generating a digital survey question corresponding to the verbal survey question; and providing the digital survey question to a respondent. 13. The method of claim 12, wherein analyzing the transcription of the verbal survey question to identify speech elements comprises utilizing a natural language processing algorithm to identify the speech elements within the transcription of the verbal survey question. 14. The method of claim 12, further comprising identifying the respondent for the digital survey question by determining that a portion of the transcription of the verbal survey question indicates a target audience comprising a plurality of respondents, wherein the respondent is from the plurality of respondents. 15. The method of claim 14, further comprising administering the digital survey question to each respondent of the plurality of respondents within the target audience by causing a voice transcription service to generate and send an audio survey question to a voice-enabled smart device associated with each respondent of the plurality of respondents. 16. The method of claim 12, further comprising:
identifying a question type of the digital survey question; and based on the question type, determining a result format for the digital survey question. 17. The method of claim 16, further comprising:
receiving, from a voice transcription service, a transcription of a verbal response from the respondent; analyzing the transcription of the verbal response to identify response elements corresponding to the result format; and based on the response elements, generating survey results according to the result format. 18. The method of claim 17, wherein the response elements comprise a key phrase that corresponds to a potential survey result of the result format. 19. The method of claim 18, further comprising:
comparing the key phrase to the potential survey result; and wherein generating the survey results comprises determining the key phrase or a derivation of the key phrase matches the potential survey result. 20. The method of claim 12, further comprising providing the survey results to the voice-capable device associated with the survey administrator. | The present disclosure is directed toward systems and methods for managing a digital survey over voice-capable devices. In particular, the systems and methods described herein create a digital survey question from a verbal input. Additionally, the systems and methods described herein provide the digital survey question to respondents by way of voice-capable devices. The systems and methods also receive verbal survey responses, generate survey results from the verbal responses, and provide the survey results to a survey administrator.1. A method comprising:
receiving, from a client device associated with a survey administrator, a digital survey question corresponding to a digital survey to be administered to at least one respondent; analyzing, by at least one processor, the digital survey question to generate a text-based natural language survey question corresponding to the digital survey question; distributing the text-based natural language survey question to a respondent by sending the text-based natural language survey question to a voice transcription service, wherein sending the text-based natural language survey question to the voice transcription service causes the voice transcription service to convert the text-based natural language survey question into an audio survey question and provide the audio survey question to a voice-capable device associated with the respondent; receiving, from the voice transcription service, a transcription of a verbal response of the respondent; and analyzing, by the at least one processor, the transcription of the verbal response to generate a survey result for the digital survey question. 2. The method of claim 1, wherein receiving the transcription of the verbal response of the respondent from the voice transcription service is based on the voice transcription service transcribing the verbal response received from the voice-capable device associated with the respondent. 3. The method of claim 1, further comprising identifying one or more survey question attributes associated with the digital survey question, wherein the one or more survey question attributes comprise one or more of a target audience, a question type, a question identifier, an answer format, or a survey identifier. 4. The method of claim 3, further comprising analyzing the one or more survey question attributes to identify, from a plurality of natural language phrases, at least one natural language phrase that corresponds to the one or more survey question attributes. 5. 
The method of claim 3, further comprising identifying a target audience based on the one or more survey question attributes, wherein the respondent is a member of the target audience. 6. The method of claim 1, wherein the voice transcription service is a third-party voice transcription service. 7. The method of claim 1, wherein analyzing the transcription of the verbal response comprises implementing a natural language processing technique to identify a key phrase. 8. The method of claim 7, further comprising correlating the key phrase with a particular survey result from among a plurality of possible survey results corresponding to the digital survey question, wherein each of the plurality of possible survey results corresponds to different key phrases. 9. A system comprising:
at least one processor; and a non-transitory storage medium comprising instructions thereon that, when executed by the at least one processor, cause the system to: access a digital survey question corresponding to a digital survey to be administered to at least one respondent; analyze the digital survey question to generate a text-based natural language survey question corresponding to the digital survey question; send the text-based natural language survey question to a voice transcription service to cause the voice transcription service to convert the text-based natural language survey question into an audio survey question and provide the audio survey question to a voice-capable device associated with a respondent; receive, from the voice transcription service, a transcription of a verbal response of the respondent; and analyze the transcription of the verbal response to generate a survey result for the digital survey question. 10. The system of claim 9, further comprising instructions that, when executed by the at least one processor, cause the system to:
store the survey result in a digital survey database; and upon receiving a request from a client device associated with a survey administrator, provide the survey result to the client device for presentation to the survey administrator. 11. The system of claim 10, wherein analyzing the transcription of the verbal response to generate a survey result comprises:
comparing a key word within the transcription of the verbal response to a set of potential survey results; and identifying that the key word matches a potential survey result within the set of potential survey results to generate the survey result for the digital survey question. 12. A method comprising:
receiving, from a voice transcription service, a transcription of a verbal survey question captured by a voice-capable device associated with a survey administrator; analyzing, by at least one processor, the transcription of the verbal survey question to identify speech elements within the transcription of the verbal survey question; based on the speech elements, generating a digital survey question corresponding to the verbal survey question; and providing the digital survey question to a respondent. 13. The method of claim 12, wherein analyzing the transcription of the verbal survey question to identify speech elements comprises utilizing a natural language processing algorithm to identify the speech elements within the transcription of the verbal survey question. 14. The method of claim 12, further comprising identifying the respondent for the digital survey question by determining that a portion of the transcription of the verbal survey question indicates a target audience comprising a plurality of respondents, wherein the respondent is from the plurality of respondents. 15. The method of claim 14, further comprising administering the digital survey question to each respondent of the plurality of respondents within the target audience by causing a voice transcription service to generate and send an audio survey question to a voice-enabled smart device associated with each respondent of the plurality of respondents. 16. The method of claim 12, further comprising:
identifying a question type of the digital survey question; and based on the question type, determining a result format for the digital survey question. 17. The method of claim 16, further comprising:
receiving, from a voice transcription service, a transcription of a verbal response from the respondent; analyzing the transcription of the verbal response to identify response elements corresponding to the result format; and based on the response elements, generating survey results according to the result format. 18. The method of claim 17, wherein the response elements comprise a key phrase that corresponds to a potential survey result of the result format. 19. The method of claim 18, further comprising:
comparing the key phrase to the potential survey result; and wherein generating the survey results comprises determining the key phrase or a derivation of the key phrase matches the potential survey result. 20. The method of claim 12, further comprising providing the survey results to the voice-capable device associated with the survey administrator. | 2,600 |
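The survey claims above (claims 7-8, 11, and 19) describe generating a survey result by matching a key phrase found in the transcribed verbal response against the key phrases tied to each potential survey result. A minimal illustrative sketch of that matching step follows; it is not the patent's implementation, and the function name and example phrase table are hypothetical (single-word phrases, simple word matching rather than the claimed natural language processing).

```python
import re

# Hypothetical sketch of key-phrase matching (claims 7-8 and 11): scan the
# transcription's words for any key phrase associated with a potential survey
# result, and return the first result whose phrase appears.

def match_survey_result(transcription, result_key_phrases):
    """Return the potential survey result whose key phrase appears in the
    transcription, or None when nothing matches."""
    words = set(re.findall(r"[a-z']+", transcription.lower()))
    for result, phrases in result_key_phrases.items():
        if any(phrase in words for phrase in phrases):
            return result
    return None

# Example phrase table for a satisfaction question (assumed values).
PHRASES = {
    "satisfied": ["satisfied", "happy", "great"],
    "neutral": ["okay", "fine", "average"],
    "dissatisfied": ["dissatisfied", "unhappy", "bad"],
}
```

Matching against whole words (rather than substrings) keeps "dissatisfied" from accidentally matching the "satisfied" key phrase; a production system would instead use the natural language processing technique the claims describe.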
10,710 | 10,710 | 15,415,823 | 2,674 | Improvements in the graphics processing pipeline are disclosed. More specifically, a new primitive shader stage performs tasks of the vertex shader stage or a domain shader stage if tessellation is enabled, a geometry shader if enabled, and a fixed function primitive assembler. The primitive shader stage is compiled by a driver from user-provided vertex or domain shader code, geometry shader code, and from code that performs functions of the primitive assembler. Moving tasks of the fixed function primitive assembler to a primitive shader that executes in programmable hardware provides many benefits, such as removal of a fixed function crossbar, removal of dedicated parameter and position buffers that are unusable in general compute mode, and other benefits. | 1. A method for performing three-dimensional graphics rendering, the method comprising:
performing per-vertex operations on a set of vertices with a primitive shader program executing in parallel processing units; performing culling operations on a set of primitives associated with the set of vertices, to generate a set of culled primitives, the culling operations being performed with the primitive shader; identifying one or more screen subdivisions for the set of culled primitives, with the primitive shader; and transmitting the set of culled primitives to a set of screen-space pipelines based on the identified screen subdivisions of the set of culled primitives. 2. The method of claim 1, wherein:
tessellation is enabled and the per-vertex operations comprise domain shader operations for evaluating barycentric coordinates produced by a tessellator stage of a graphics processing pipeline. 3. The method of claim 1, wherein:
tessellation is disabled and the per-vertex operations comprise vertex shader operations for transforming vertex positions for a vertex shader stage of a graphics processing pipeline. 4. The method of claim 1, further comprising:
performing operations for determining non-position attributes for vertices associated with the set of culled primitives, the operations for determining the non-position attributes being derived from vertex shader code for a vertex shader stage of a graphics processing pipeline. 5. The method of claim 1, wherein:
geometry shading is enabled and the method further comprises performing geometry shading operations on the set of primitives associated with the set of vertices, the geometry shading operations being derived from geometry shader code for a geometry shader stage of a graphics processing pipeline. 6. The method of claim 1, wherein:
transmitting the set of culled primitives to the set of screen-space pipelines is performed via a general purpose local data store memory and not via a fixed function crossbar or via a dedicated position buffer and parameter buffer. 7. The method of claim 6, wherein transmitting the set of culled primitives to the set of screen-space pipelines comprises:
transmitting the set of culled primitives to the local data store memory; and transmitting the set of culled primitives from the local data store memory to the set of screen-space pipelines. 8. The method of claim 1, wherein identifying one or more screen subdivisions comprises:
for each primitive in the set of culled primitives, identifying one or more screen subdivisions covered by that primitive. 9. The method of claim 8, wherein transmitting the set of culled primitives to the set of screen-space pipelines based on the identified screen subdivisions comprises:
for each primitive in the set of culled primitives, identifying one or more screen-space pipelines associated with the screen subdivisions covered by that primitive; and transmitting the primitive to the identified one or more screen-space pipelines. 10. An accelerated processing device (APD), comprising:
a graphics processing pipeline; and a plurality of parallel processing units, wherein the graphics processing pipeline includes a primitive shader stage configured to execute a primitive shader program on the plurality of parallel processing units, the primitive shader program configured to:
perform per-vertex operations on a set of vertices;
perform culling operations on a set of primitives associated with the set of vertices, to generate a set of culled primitives;
identify one or more screen subdivisions for the set of culled primitives, with the primitive shader; and
transmit the set of culled primitives to a set of screen-space pipelines of the graphics processing pipeline based on the identified screen subdivisions of the set of culled primitives. 11. The APD of claim 10, wherein the graphics processing pipeline is in a state where tessellation is enabled and the per-vertex operations comprise:
domain shader operations for evaluating barycentric coordinates produced by a tessellator stage of a graphics processing pipeline. 12. The APD of claim 10, wherein the graphics processing pipeline is in a state where tessellation is disabled and the per-vertex operations comprise:
vertex shader operations for transforming vertex positions for a vertex shader stage of a graphics processing pipeline. 13. The APD of claim 10, wherein the primitive shader program is further configured to:
perform operations for determining non-position attributes for vertices associated with the set of culled primitives, the operations for determining the non-position attributes being derived from vertex shader code for a vertex shader stage of the graphics processing pipeline. 14. The APD of claim 10, wherein the graphics processing pipeline is in a state where geometry shading is enabled and the primitive shader program is further configured to perform geometry shading operations on the set of primitives associated with the set of vertices, the geometry shading operations being derived from geometry shader code for a geometry shader stage of the graphics processing pipeline. 15. The APD of claim 10, further comprising:
a general purpose local data store, wherein the primitive shader program is configured to transmit the set of culled primitives to the set of screen-space pipelines via the general purpose local data store and not via a fixed function crossbar or via a dedicated position buffer and parameter buffer. 16. The APD of claim 10, wherein the primitive shader program is configured to identify one or more screen subdivisions by:
for each primitive in the set of culled primitives, identifying one or more screen subdivisions covered by that primitive. 17. The APD of claim 16, wherein the primitive shader program is configured to transmit the set of culled primitives to the set of screen-space pipelines based on the identified screen subdivisions by:
for each primitive in the set of culled primitives, identifying one or more screen-space pipelines associated with the screen subdivisions covered by that primitive; and transmitting the primitive to the identified one or more screen-space pipelines. 18. A computing device, comprising:
a central processing unit, and an accelerated processing device (APD), the APD comprising:
a graphics processing pipeline; and
a plurality of parallel processing units,
wherein the graphics processing pipeline includes a primitive shader stage configured to execute a primitive shader program on the plurality of parallel processing units, the primitive shader program configured to:
perform per-vertex operations on a set of vertices received from the central processing unit;
perform culling operations on a set of primitives associated with the set of vertices, to generate a set of culled primitives;
identify one or more screen subdivisions for the set of culled primitives, with the primitive shader; and
transmit the set of culled primitives to a set of screen-space pipelines of the graphics processing pipeline based on the identified screen subdivisions of the set of culled primitives. 19. The computing device of claim 18, wherein the graphics processing pipeline is in a state where tessellation is enabled and the per-vertex operations comprise:
domain shader operations for evaluating barycentric coordinates produced by a tessellator stage of a graphics processing pipeline, the domain shader operations being derived from a domain shader program provided by the central processing unit. 20. The computing device of claim 18, wherein the graphics processing pipeline is in a state where tessellation is disabled and the per-vertex operations comprise:
vertex shader operations for transforming vertex positions for a vertex shader stage of a graphics processing pipeline, the vertex shader operations being derived from a vertex shader program provided by the central processing unit. | Improvements in the graphics processing pipeline are disclosed. More specifically, a new primitive shader stage performs tasks of the vertex shader stage or a domain shader stage if tessellation is enabled, a geometry shader if enabled, and a fixed function primitive assembler. The primitive shader stage is compiled by a driver from user-provided vertex or domain shader code, geometry shader code, and from code that performs functions of the primitive assembler. Moving tasks of the fixed function primitive assembler to a primitive shader that executes in programmable hardware provides many benefits, such as removal of a fixed function crossbar, removal of dedicated parameter and position buffers that are unusable in general compute mode, and other benefits.1. A method for performing three-dimensional graphics rendering, the method comprising:
performing per-vertex operations on a set of vertices with a primitive shader program executing in parallel processing units; performing culling operations on a set of primitives associated with the set of vertices, to generate a set of culled primitives, the culling operations being performed with the primitive shader; identifying one or more screen subdivisions for the set of culled primitives, with the primitive shader; and transmitting the set of culled primitives to a set of screen-space pipelines based on the identified screen subdivisions of the set of culled primitives. 2. The method of claim 1, wherein:
tessellation is enabled and the per-vertex operations comprise domain shader operations for evaluating barycentric coordinates produced by a tessellator stage of a graphics processing pipeline. 3. The method of claim 1, wherein:
tessellation is disabled and the per-vertex operations comprise vertex shader operations for transforming vertex positions for a vertex shader stage of a graphics processing pipeline. 4. The method of claim 1, further comprising:
performing operations for determining non-position attributes for vertices associated with the set of culled primitives, the operations for determining the non-position attributes being derived from vertex shader code for a vertex shader stage of a graphics processing pipeline. 5. The method of claim 1, wherein:
geometry shading is enabled and the method further comprises performing geometry shading operations on the set of primitives associated with the set of vertices, the geometry shading operations being derived from geometry shader code for a geometry shader stage of a graphics processing pipeline. 6. The method of claim 1, wherein:
transmitting the set of culled primitives to the set of screen-space pipelines is performed via a general purpose local data store memory and not via a fixed function crossbar or via a dedicated position buffer and parameter buffer. 7. The method of claim 6, wherein transmitting the set of culled primitives to the set of screen-space pipelines comprises:
transmitting the set of culled primitives to the local data store memory; and transmitting the set of culled primitives from the local data store memory to the set of screen-space pipelines. 8. The method of claim 1, wherein identifying one or more screen subdivisions comprises:
for each primitive in the set of culled primitives, identifying one or more screen subdivisions covered by that primitive. 9. The method of claim 8, wherein transmitting the set of culled primitives to the set of screen-space pipelines based on the identified screen subdivisions comprises:
for each primitive in the set of culled primitives, identifying one or more screen-space pipelines associated with the screen subdivisions covered by that primitive; and transmitting the primitive to the identified one or more screen-space pipelines. 10. An accelerated processing device (APD), comprising:
a graphics processing pipeline; and a plurality of parallel processing units, wherein the graphics processing pipeline includes a primitive shader stage configured to execute a primitive shader program on the plurality of parallel processing units, the primitive shader program configured to:
perform per-vertex operations on a set of vertices;
perform culling operations on a set of primitives associated with the set of vertices, to generate a set of culled primitives;
identify one or more screen subdivisions for the set of culled primitives, with the primitive shader; and
transmit the set of culled primitives to a set of screen-space pipelines of the graphics processing pipeline based on the identified screen subdivisions of the set of culled primitives. 11. The APD of claim 10, wherein the graphics processing pipeline is in a state where tessellation is enabled and the per-vertex operations comprise:
domain shader operations for evaluating barycentric coordinates produced by a tessellator stage of a graphics processing pipeline. 12. The APD of claim 10, wherein the graphics processing pipeline is in a state where tessellation is disabled and the per-vertex operations comprise:
vertex shader operations for transforming vertex positions for a vertex shader stage of a graphics processing pipeline. 13. The APD of claim 10, wherein the primitive shader program is further configured to:
perform operations for determining non-position attributes for vertices associated with the set of culled primitives, the operations for determining the non-position attributes being derived from vertex shader code for a vertex shader stage of the graphics processing pipeline. 14. The APD of claim 10, wherein the graphics processing pipeline is in a state where geometry shading is enabled and the primitive shader program is further configured to perform geometry shading operations on the set of primitives associated with the set of vertices, the geometry shading operations being derived from geometry shader code for a geometry shader stage of the graphics processing pipeline. 15. The APD of claim 10, further comprising:
a general purpose local data store, wherein the primitive shader program is configured to transmit the set of culled primitives to the set of screen-space pipelines via the general purpose local data store and not via a fixed function crossbar or via a dedicated position buffer and parameter buffer. 16. The APD of claim 10, wherein the primitive shader program is configured to identify one or more screen subdivisions by:
for each primitive in the set of culled primitives, identifying one or more screen subdivisions covered by that primitive. 17. The APD of claim 16, wherein the primitive shader program is configured to transmit the set of culled primitives to the set of screen-space pipelines based on the identified screen subdivisions by:
for each primitive in the set of culled primitives, identifying one or more screen-space pipelines associated with the screen subdivisions covered by that primitive; and transmitting the primitive to the identified one or more screen-space pipelines. 18. A computing device, comprising:
a central processing unit, and an accelerated processing device (APD), the APD comprising:
a graphics processing pipeline; and
a plurality of parallel processing units,
wherein the graphics processing pipeline includes a primitive shader stage configured to execute a primitive shader program on the plurality of parallel processing units, the primitive shader program configured to:
perform per-vertex operations on a set of vertices received from the central processing unit;
perform culling operations on a set of primitives associated with the set of vertices, to generate a set of culled primitives;
identify one or more screen subdivisions for the set of culled primitives, with the primitive shader; and
transmit the set of culled primitives to a set of screen-space pipelines of the graphics processing pipeline based on the identified screen subdivisions of the set of culled primitives. 19. The computing device of claim 18, wherein the graphics processing pipeline is in a state where tessellation is disabled and the per-vertex operations comprise:
domain shader operations for evaluating barycentric coordinates produced by a tessellator stage of a graphics processing pipeline, the domain shader operations being derived from a domain shader program provided by the central processing unit. 20. The computing device of claim 18, wherein the graphics processing pipeline is in a state where tessellation is disabled and the per-vertex operations comprise:
vertex shader operations for transforming vertex positions for a vertex shader stage of a graphics processing pipeline, the vertex shader operations being derived from a vertex shader program provided by the central processing unit. | 2,600 |
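The primitive-shader claims above (claims 1 and 8-9) describe culling primitives, identifying which screen subdivisions each surviving primitive covers, and routing each primitive to the screen-space pipelines that own those subdivisions. The following is a hypothetical sketch of that flow; the tile size and the tile-to-pipeline mapping are assumptions, and real hardware does this per-primitive in parallel, not in a Python loop.

```python
# Hypothetical sketch of primitive culling and screen-subdivision routing:
# cull degenerate (zero-area) triangles, compute the tiles each survivor's
# screen-space bounding box overlaps, and bin it to the pipeline owning each
# tile. Integer pixel coordinates are assumed for simplicity.

TILE = 32  # assumed screen-subdivision size in pixels

def covered_tiles(tri):
    """Tiles overlapped by the triangle's screen-space bounding box."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    return {(tx, ty)
            for tx in range(min(xs) // TILE, max(xs) // TILE + 1)
            for ty in range(min(ys) // TILE, max(ys) // TILE + 1)}

def route_primitives(primitives, num_pipelines):
    """Cull zero-area triangles, then bin survivors to pipelines by tile."""
    bins = {p: [] for p in range(num_pipelines)}
    for tri in primitives:
        (ax, ay), (bx, by), (cx, cy) = tri
        if (bx - ax) * (cy - ay) - (cx - ax) * (by - ay) == 0:
            continue  # degenerate primitive: culled
        for tile in covered_tiles(tri):
            pipe = (tile[0] + tile[1]) % num_pipelines  # assumed checkerboard mapping
            bins[pipe].append((tile, tri))
    return bins
```

Bounding-box coverage overestimates (a triangle may not actually touch every tile in its box), which is a common conservative choice; the patent's benefit is that this routing runs in the programmable primitive shader instead of a fixed-function crossbar.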
10,711 | 10,711 | 15,648,752 | 2,664 | In a switching circuit, an inductance of an inductor of a shunt circuit is such that off capacitance of a second switching device that is in the off state when a first switching device is in the on state is used to define, in the shunt circuit, a series resonance circuit with a desired resonant frequency. Therefore, the frequency of an unnecessary signal to be attenuated is set to the resonant frequency of the series resonance circuit. Thus, the switching circuit achieves improved isolation characteristics with other circuits by attenuating the unnecessary signal. | 1. A switching circuit comprising:
a first terminal; a plurality of second terminals; first switching devices, each of the first switching devices being connected in series to a corresponding one of signal paths coupling the first terminal to a corresponding one of the plurality of second terminals; and shunt circuits, each of the shunt circuits being disposed between a corresponding one of the plurality of second terminals and a ground terminal; wherein each of the shunt circuits includes a second switching device and an inductor, the second switching device and the inductor being coupled to each other in series. 2. The switching circuit according to claim 1, wherein the inductors of the shunt circuits have inductances different from one another. 3. The switching circuit according to claim 2, further comprising:
a third terminal; and third switching devices, each of the third switching devices being connected in series to a corresponding one of signal paths coupling the third terminal to a corresponding one of the plurality of second terminals. 4. The switching circuit according to claim 1, wherein each of the shunt circuits defines a series resonance circuit including a capacitance and the inductor, the capacitance being produced when the second switching device is in an off state, and the inductor of the series resonance circuit has an inductance value such that a resonant frequency of the series resonance circuit is equal or substantially equal to a frequency of a signal passing through one of the signal paths, the one of the signal paths being coupled to a different shunt circuit of the shunt circuits, the different shunt circuit being a shunt circuit in which the second switching device is in an on state. 5. The switching circuit according to claim 1, wherein each of the first switching devices and the second switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 6. The switching circuit according to claim 3, wherein each of the third switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 7. The switching circuit according to claim 1, wherein the inductor is a chip component. 8. A high frequency module comprising:
the switching circuit according to claim 1; and a multi-layer substrate including a first principal surface on which the first, second, and third switching devices are mounted. 9. The high frequency module according to claim 8, wherein the inductor is a chip component mounted on the first principal surface of the multi-layer substrate or a wiring electrode in the multi-layer substrate. 10. The high frequency module according to claim 8, wherein the inductors of the shunt circuits have inductances different from one another. 11. The high frequency module according to claim 10, further comprising:
a third terminal; and third switching devices, each of the third switching devices being connected in series to a corresponding one of signal paths coupling the third terminal to a corresponding one of the plurality of second terminals. 12. The high frequency module according to claim 8, wherein each of the shunt circuits defines a series resonance circuit including a capacitance and the inductor, the capacitance being produced when the second switching device is in an off state, and the inductor of the series resonance circuit has an inductance value such that a resonant frequency of the series resonance circuit is equal or substantially equal to a frequency of a signal passing through one of the signal paths, the one of the signal paths being coupled to a different shunt circuit of the shunt circuits, the different shunt circuit being a shunt circuit in which the second switching device is in an on state. 13. The high frequency module according to claim 8, wherein each of the first switching devices and the second switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 14. The high frequency module according to claim 11, wherein each of the third switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 15. The high frequency module according to claim 8, wherein the switching circuit is a switch IC. 16. The high frequency module according to claim 15, further comprising an antenna connected to the switch IC. 17. The high frequency module according to claim 16, wherein the antenna includes multi-band antennas or multiple single-band antennas. 18. A communication device comprising the high frequency module according to claim 8. 19.
The communication device according to claim 18, wherein the communication device performs communication in multiple frequency bands and supports multiple communication systems. | In a switching circuit, an inductance of an inductor of a shunt circuit is such that off capacitance of a second switching device that is in the off state when a first switching device is in the on state is used to define, in the shunt circuit, a series resonance circuit with a desired resonant frequency. Therefore, the frequency of an unnecessary signal to be attenuated is set to the resonant frequency of the series resonance circuit. Thus, the switching circuit achieves improved isolation characteristics with other circuits by attenuating the unnecessary signal.1. A switching circuit comprising:
a first terminal; a plurality of second terminals; first switching devices, each of the first switching devices being connected in series to a corresponding one of signal paths coupling the first terminal to a corresponding one of the plurality of second terminals; and shunt circuits, each of the shunt circuits being disposed between a corresponding one of the plurality of second terminals and a ground terminal; wherein each of the shunt circuits includes a second switching device and an inductor, the second switching device and the inductor being coupled to each other in series. 2. The switching circuit according to claim 1, wherein the inductors of the shunt circuits have inductances different from one another. 3. The switching circuit according to claim 2, further comprising:
a third terminal; and third switching devices, each of the third switching devices being connected in series to a corresponding one of signal paths coupling the third terminal to a corresponding one of the plurality of second terminals. 4. The switching circuit according to claim 1, wherein each of the shunt circuits defines a series resonance circuit including a capacitance and the inductor, the capacitance being produced when the second switching device is in an off state, and the inductor of the series resonance circuit has an inductance value such that a resonant frequency of the series resonance circuit is equal or substantially equal to a frequency of a signal passing through one of the signal paths, the one of the signal paths being coupled to a different shunt circuit of the shunt circuits, the different shunt circuit being a shunt circuit in which the second switching device is in an on state. 5. The switching circuit according to claim 1, wherein each of the first switching devices and the second switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 6. The switching circuit according to claim 3, wherein each of the third switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 7. The switching circuit according to claim 1, wherein the inductor is a chip component. 8. A high frequency module comprising:
the switching circuit according to claim 1; and a multi-layer substrate including a first principal surface on which the first, second, and third switching devices are mounted. 9. The high frequency module according to claim 8, wherein the inductor is a chip component mounted on the first principal surface of the multi-layer substrate or a wiring electrode in the multi-layer substrate. 10. The high frequency module according to claim 8, wherein the inductors of the shunt circuits have inductances different from one another. 11. The high frequency module according to claim 10, further comprising:
a third terminal; and third switching devices, each of the third switching devices being connected in series to a corresponding one of signal paths coupling the third terminal to a corresponding one of the plurality of second terminals. 12. The high frequency module according to claim 8, wherein each of the shunt circuits defines a series resonance circuit including a capacitance and the inductor, the capacitance being produced when the second switching device is in an off state, and the inductor of the series resonance circuit has an inductance value such that a resonant frequency of the series resonance circuit is equal or substantially equal to a frequency of a signal passing through one of the signal paths, the one of the signal paths being coupled to a different shunt circuit of the shunt circuits, the different shunt circuit being a shunt circuit in which the second switching device is in an on state. 13. The high frequency module according to claim 8, wherein each of the first switching devices and the second switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 14. The high frequency module according to claim 11, wherein each of the third switching devices is one of a field-effect transistor, a circuit including a PIN diode, a bipolar transistor, and an electrostatic induction transistor. 15. The high frequency module according to claim 8, wherein the switching circuit is a switch IC. 16. The high frequency module according to claim 15, further comprising an antenna connected to the switch IC. 17. The high frequency module according to claim 16, wherein the antenna includes multi-band antennas or multiple single-band antennas. 18. A communication device comprising the high frequency module according to claim 8. 19. 
The communication device according to claim 18, wherein the communication device performs communication in multiple frequency bands and supports multiple communication systems. | 2,600 |
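Claims 4 and 12 above rest on the series-resonance identity f = 1/(2π√(LC)): the off-capacitance of a second switching device in the off state combines with its shunt inductor to form a notch at the frequency of the unwanted signal. A minimal sizing sketch of that relationship; the 0.5 pF off-capacitance and 2.4 GHz target are hypothetical illustration values, not figures from the claims:

```python
import math

def shunt_inductance_for_notch(f_resonant_hz: float, c_off_farads: float) -> float:
    """Return the series inductance L that places the LC series resonance,
    formed with the switch's off-capacitance, at f_resonant_hz.

    Series resonance: f = 1 / (2*pi*sqrt(L*C))  =>  L = 1 / ((2*pi*f)**2 * C)
    """
    return 1.0 / ((2.0 * math.pi * f_resonant_hz) ** 2 * c_off_farads)

# Hypothetical values: 0.5 pF off-capacitance, notch at 2.4 GHz.
L = shunt_inductance_for_notch(2.4e9, 0.5e-12)  # roughly 8.8 nH
```

Per claims 2 and 10, each shunt circuit would repeat this calculation with a different target frequency, yielding inductances that differ from one another.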
10,712 | 10,712 | 15,938,526 | 2,612 | A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, a first camera position in the 3D virtual environment and a first viewing direction in the 3D virtual environment; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first camera position in the 3D virtual environment; defining, with the processor, a bounding geometry at a first position that is a first distance from the first camera position in the first viewing direction, the bounding geometry being dimensioned so as to cover a field of view from the first camera position in the first viewing direction; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the bounding geometry, the 3D particle system having features depending on the first precipitation information. | 1. A method for generating graphics of a three-dimensional (3D) virtual environment comprising:
receiving, with a processor, a first value for a camera position of a virtual camera in the 3D virtual environment and a first value for a viewing direction of the virtual camera; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first value for the camera position of the virtual camera; defining, with the processor, a closed 3D bounding geometry having a position that is defined relative to the camera position of the virtual camera and the viewing direction of the virtual camera such that the closed 3D bounding geometry moves with the virtual camera, the position of the closed 3D bounding geometry being defined at a first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the closed 3D bounding geometry being dimensioned so as to cover a field of view of the virtual camera in the viewing direction of the virtual camera; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the closed 3D bounding geometry, the 3D particle system having features depending on the first precipitation information. 2. The method according to claim 1 further comprising:
receiving, with the processor, a second value for the camera position of the virtual camera and a second value for the viewing direction of the virtual camera;
moving, with the processor, the closed 3D bounding geometry from a first position to a second position such that the position of the closed 3D bounding geometry remains at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera; and
updating, with the processor, the rendering of the 3D particle system depicting the precipitation such that particles of the 3D particle system are only rendered within the moved closed 3D bounding geometry at the second position. 3. The method according to claim 2, the updating the rendering of the 3D particle system comprising:
removing particles of the 3D particle system that were within the closed 3D bounding geometry at the first position but are outside the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; continuing to render particles of the 3D particle system that were within the closed 3D bounding geometry at the first position and remain within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; and spawning new particles of the 3D particle system at positions that are within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position but were outside the closed 3D bounding geometry at the first position. 4. The method according to claim 1, the rendering the 3D particle system further comprising:
rendering at least one of (i) a shape, (ii) a color, and (iii) opacity of particles of the 3D particle system based on a type of precipitation indicated by the first precipitation information. 5. The method according to claim 1, the rendering the 3D particle system further comprising:
rendering at least one of (i) a size of particles of the 3D particle system and (ii) a particle density of the 3D particle system based on a precipitation intensity indicated by the first precipitation information. 6. The method according to claim 1, the defining the closed 3D bounding geometry further comprising:
defining the closed 3D bounding geometry as a sphere centered at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the sphere having a diameter configured to cover the field of view from the camera position of the virtual camera in the viewing direction of the virtual camera. 7. The method according to claim 1 further comprising:
receiving, with the processor, at least one of a viewing range of the virtual camera and a viewing angle of the virtual camera; and
adjusting, with the processor, at least one of the position of the closed 3D bounding geometry and a dimension of the closed 3D bounding geometry based on the at least one of the viewing range of the virtual camera and the viewing angle of the virtual camera. 8. The method according to claim 1, wherein the weather data includes wind information, the rendering the 3D particle system further comprising:
rendering a motion of particles of the 3D particle system based on at least one of a wind speed and a wind direction indicated by the wind information. 9. The method according to claim 1, wherein the particles of the 3D particle system are configured to depict snowflakes. 10. A system for generating graphics of a three-dimensional (3D) virtual environment comprising:
a display device configured to display the graphics of the 3D virtual environment; a networking device; a memory configured to store programmed instructions; and a processor operatively connected to the display device, the networking device, and the memory, the processor being configured to execute the programmed instructions to:
receive a first value for a camera position of a virtual camera in the 3D virtual environment and a first value for a viewing direction of the virtual camera;
receive, via the networking device, weather data including first precipitation information corresponding to a first geographic region corresponding to the first value for the camera position of the virtual camera;
define a closed 3D bounding geometry having a position that is defined relative to the camera position of the virtual camera and the viewing direction of the virtual camera such that the closed 3D bounding geometry moves with the virtual camera, the position of the closed 3D bounding geometry being defined at a first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the closed 3D bounding geometry being dimensioned so as to cover a field of view of the virtual camera in the viewing direction of the virtual camera; and
render a 3D particle system in the 3D virtual environment depicting precipitation only within the closed 3D bounding geometry, the 3D particle system having features depending on the first precipitation information. 11. The system of claim 10, the processor being further configured to execute the programmed instructions to:
receive a second value for the camera position of the virtual camera and a second value for the viewing direction of the virtual camera; move the closed 3D bounding geometry from a first position to a second position such that the position of the closed 3D bounding geometry remains at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera; and update the rendering of the 3D particle system depicting the precipitation such that particles of the 3D particle system are only rendered within the moved closed 3D bounding geometry at the second position. 12. The system of claim 11, the processor being further configured to execute the programmed instructions to:
remove particles of the 3D particle system that were within the closed 3D bounding geometry at the first position but are outside the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; continue to render particles of the 3D particle system that were within the closed 3D bounding geometry at the first position and remain within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; and spawn new particles of the 3D particle system at positions that are within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position but were outside the closed 3D bounding geometry at the first position. 13. The system of claim 10, the processor being further configured to execute the programmed instructions to:
render at least one of (i) a shape, (ii) a color, and (iii) opacity of particles of the 3D particle system based on a type of precipitation indicated by the first precipitation information. 14. The system of claim 10, the processor being further configured to execute the programmed instructions to:
render at least one of (i) a size of particles of the 3D particle system and (ii) a particle density of the 3D particle system based on a precipitation intensity indicated by the first precipitation information. 15. The system of claim 10, the processor being further configured to execute the programmed instructions to:
define the closed 3D bounding geometry as a sphere centered at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the sphere having a diameter configured to cover the field of view from the camera position of the virtual camera in the viewing direction of the virtual camera. 16. The system of claim 10, the processor being further configured to execute the programmed instructions to:
receive at least one of a viewing range of the virtual camera and a viewing angle of the virtual camera; and adjust at least one of the position of the closed 3D bounding geometry and a dimension of the closed 3D bounding geometry based on the at least one of the viewing range of the virtual camera and the viewing angle of the virtual camera. 17. The system of claim 10, the processor being further configured to execute the programmed instructions to:
render a motion of particles of the 3D particle system based on at least one of a wind speed and a wind direction indicated by the wind information. 18. The system of claim 10, wherein the particles of the 3D particle system are configured to depict snowflakes. | A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, a first camera position in the 3D virtual environment and a first viewing direction in the 3D virtual environment; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first camera position in the 3D virtual environment; defining, with the processor, a bounding geometry at a first position that is a first distance from the first camera position in the first viewing direction, the bounding geometry being dimensioned so as to cover a field of view from the first camera position in the first viewing direction; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the bounding geometry, the 3D particle system having features depending on the first precipitation information. 1. A method for generating graphics of a three-dimensional (3D) virtual environment comprising:
receiving, with a processor, a first value for a camera position of a virtual camera in the 3D virtual environment and a first value for a viewing direction of the virtual camera; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first value for the camera position of the virtual camera; defining, with the processor, a closed 3D bounding geometry having a position that is defined relative to the camera position of the virtual camera and the viewing direction of the virtual camera such that the closed 3D bounding geometry moves with the virtual camera, the position of the closed 3D bounding geometry being defined at a first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the closed 3D bounding geometry being dimensioned so as to cover a field of view of the virtual camera in the viewing direction of the virtual camera; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the closed 3D bounding geometry, the 3D particle system having features depending on the first precipitation information. 2. The method according to claim 1 further comprising:
receiving, with the processor, a second value for the camera position of the virtual camera and a second value for the viewing direction of the virtual camera;
moving, with the processor, the closed 3D bounding geometry from a first position to a second position such that the position of the closed 3D bounding geometry remains at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera; and
updating, with the processor, the rendering of the 3D particle system depicting the precipitation such that particles of the 3D particle system are only rendered within the moved closed 3D bounding geometry at the second position. 3. The method according to claim 2, the updating the rendering of the 3D particle system comprising:
removing particles of the 3D particle system that were within the closed 3D bounding geometry at the first position but are outside the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; continuing to render particles of the 3D particle system that were within the closed 3D bounding geometry at the first position and remain within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; and spawning new particles of the 3D particle system at positions that are within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position but were outside the closed 3D bounding geometry at the first position. 4. The method according to claim 1, the rendering the 3D particle system further comprising:
rendering at least one of (i) a shape, (ii) a color, and (iii) opacity of particles of the 3D particle system based on a type of precipitation indicated by the first precipitation information. 5. The method according to claim 1, the rendering the 3D particle system further comprising:
rendering at least one of (i) a size of particles of the 3D particle system and (ii) a particle density of the 3D particle system based on a precipitation intensity indicated by the first precipitation information. 6. The method according to claim 1, the defining the closed 3D bounding geometry further comprising:
defining the closed 3D bounding geometry as a sphere centered at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the sphere having a diameter configured to cover the field of view from the camera position of the virtual camera in the viewing direction of the virtual camera. 7. The method according to claim 1 further comprising:
receiving, with the processor, at least one of a viewing range of the virtual camera and a viewing angle of the virtual camera; and
adjusting, with the processor, at least one of the position of the closed 3D bounding geometry and a dimension of the closed 3D bounding geometry based on the at least one of the viewing range of the virtual camera and the viewing angle of the virtual camera. 8. The method according to claim 1, wherein the weather data includes wind information, the rendering the 3D particle system further comprising:
rendering a motion of particles of the 3D particle system based on at least one of a wind speed and a wind direction indicated by the wind information. 9. The method according to claim 1, wherein the particles of the 3D particle system are configured to depict snowflakes. 10. A system for generating graphics of a three-dimensional (3D) virtual environment comprising:
a display device configured to display the graphics of the 3D virtual environment; a networking device; a memory configured to store programmed instructions; and a processor operatively connected to the display device, the networking device, and the memory, the processor being configured to execute the programmed instructions to:
receive a first value for a camera position of a virtual camera in the 3D virtual environment and a first value for a viewing direction of the virtual camera;
receive, via the networking device, weather data including first precipitation information corresponding to a first geographic region corresponding to the first value for the camera position of the virtual camera;
define a closed 3D bounding geometry having a position that is defined relative to the camera position of the virtual camera and the viewing direction of the virtual camera such that the closed 3D bounding geometry moves with the virtual camera, the position of the closed 3D bounding geometry being defined at a first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the closed 3D bounding geometry being dimensioned so as to cover a field of view of the virtual camera in the viewing direction of the virtual camera; and
render a 3D particle system in the 3D virtual environment depicting precipitation only within the closed 3D bounding geometry, the 3D particle system having features depending on the first precipitation information. 11. The system of claim 10, the processor being further configured to execute the programmed instructions to:
receive a second value for the camera position of the virtual camera and a second value for the viewing direction of the virtual camera; move the closed 3D bounding geometry from a first position to a second position such that the position of the closed 3D bounding geometry remains at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera; and update the rendering of the 3D particle system depicting the precipitation such that particles of the 3D particle system are only rendered within the moved closed 3D bounding geometry at the second position. 12. The system of claim 11, the processor being further configured to execute the programmed instructions to:
remove particles of the 3D particle system that were within the closed 3D bounding geometry at the first position but are outside the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; continue to render particles of the 3D particle system that were within the closed 3D bounding geometry at the first position and remain within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position; and spawn new particles of the 3D particle system at positions that are within the closed 3D bounding geometry after the closed 3D bounding geometry is moved to the second position but were outside the closed 3D bounding geometry at the first position. 13. The system of claim 10, the processor being further configured to execute the programmed instructions to:
render at least one of (i) a shape, (ii) a color, and (iii) opacity of particles of the 3D particle system based on a type of precipitation indicated by the first precipitation information. 14. The system of claim 10, the processor being further configured to execute the programmed instructions to:
render at least one of (i) a size of particles of the 3D particle system and (ii) a particle density of the 3D particle system based on a precipitation intensity indicated by the first precipitation information. 15. The system of claim 10, the processor being further configured to execute the programmed instructions to:
define the closed 3D bounding geometry as a sphere centered at the first distance from the camera position of the virtual camera in the viewing direction of the virtual camera, the sphere having a diameter configured to cover the field of view from the camera position of the virtual camera in the viewing direction of the virtual camera. 16. The system of claim 10, the processor being further configured to execute the programmed instructions to:
receive at least one of a viewing range of the virtual camera and a viewing angle of the virtual camera; and adjust at least one of the position of the closed 3D bounding geometry and a dimension of the closed 3D bounding geometry based on the at least one of the viewing range of the virtual camera and the viewing angle of the virtual camera. 17. The system of claim 10, the processor being further configured to execute the programmed instructions to:
render a motion of particles of the 3D particle system based on at least one of a wind speed and a wind direction indicated by the wind information. 18. The system of claim 10, wherein the particles of the 3D particle system are configured to depict snowflakes. | 2,600 |
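Claims 1-3 and 10-12 above describe a closed bounding sphere held a fixed distance ahead of the camera along the viewing direction; as the camera moves, out-of-sphere particles are removed, in-sphere particles are kept, and new particles are spawned inside the moved sphere. A minimal sketch of that update loop; the rejection sampling and all numeric values are illustrative assumptions, not the patented implementation:

```python
import math
import random

def sphere_center(cam_pos, view_dir, distance):
    """Place the sphere's center `distance` along the (unit) viewing
    direction from the camera, so the sphere moves with the camera."""
    return tuple(p + distance * d for p, d in zip(cam_pos, view_dir))

def inside(point, center, radius):
    return math.dist(point, center) <= radius

def spawn_in_sphere(center, radius):
    # Rejection-sample a uniform point inside the sphere (a hypothetical
    # stand-in for precipitation-driven particle placement).
    while True:
        offs = tuple(2.0 * random.random() - 1.0 for _ in range(3))
        if sum(o * o for o in offs) <= 1.0:
            return tuple(c + radius * o for c, o in zip(center, offs))

def update_particles(particles, new_center, radius):
    """Keep particles still inside the moved sphere, drop the rest, and
    spawn replacements so particles are only rendered inside the sphere."""
    kept = [p for p in particles if inside(p, new_center, radius)]
    while len(kept) < len(particles):
        kept.append(spawn_in_sphere(new_center, radius))
    return kept

random.seed(0)
distance, radius = 50.0, 30.0
c0 = sphere_center((0, 0, 0), (1, 0, 0), distance)   # sphere ahead of camera
c1 = sphere_center((10, 0, 0), (1, 0, 0), distance)  # camera moved forward
particles = [spawn_in_sphere(c0, radius) for _ in range(100)]
particles = update_particles(particles, c1, radius)
```

Features the claims tie to precipitation data (shape, color, opacity, size, density, wind-driven motion) would parameterize the spawn and render steps; only the geometric bookkeeping is sketched here.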
10,713 | 10,713 | 15,720,597 | 2,684 | Methods of assessing driver behavior include monitoring vehicle systems and driver monitoring systems to accommodate for a driver's slow reaction time, attention lapse and/or alertness. When it is determined that a driver is drowsy, for example, the response system may modify the operation of one or more vehicle systems. The systems that may be modified include: visual devices, audio devices, tactile devices, antilock brake systems, automatic brake prefill systems, brake assist systems, auto cruise control systems, electronic stability control systems, collision warning systems, lane keep assist systems, blind spot indicator systems, electronic pretensioning systems and climate control systems. | 1.-25. (canceled) 26. A method of controlling a vehicle system in a vehicle, comprising:
receiving monitoring information about a state of a driver from a monitoring system of the vehicle; receiving vehicle operating information about the vehicle system from the vehicle system; calculating a driver state index of the driver using the monitoring information about the state of the driver and the vehicle operating information about the vehicle system; changing a control parameter of the vehicle system based on the driver state index, wherein the control parameter is a system status of the vehicle system; and modifying control of the vehicle system using the control parameter. 27. The method of claim 26, wherein changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the vehicle system to increase alertness of the driver. 28. The method of claim 26, wherein the vehicle system provides driving assistance to the driver and changing the control parameter of the vehicle system based on the driver state index includes changing a level of driving assistance to increase alertness of the driver. 29. The method of claim 26, including determining if the driver is distracted based on the driver state index and changing the control parameter of the vehicle system based on the driver state index to increase alertness of the driver when the driver is distracted. 30. The method of claim 29, wherein changing the control parameter of the vehicle system based on the driver state index includes changing the system status from ON to OFF when the driver is distracted. 31. The method of claim 30, wherein changing the system status of the vehicle system and modifying control of the vehicle system using the system status causes an increase in effort into driving the vehicle by the driver. 32. 
The method of claim 26, including determining an initial system status of the vehicle system based on the vehicle operating information, and wherein changing the control parameter of the vehicle system based on the driver state index includes changing the system status from the initial system status based on the driver state index. 33. The method of claim 32, wherein the initial system status of the vehicle system is ON and changing the system status from the initial system status based on the driver state index includes setting the system status to OFF. 34. The method of claim 26, wherein the vehicle system is an electronic power steering system, and changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the electronic power steering system to a low system status, and modifying the vehicle system based on the control parameter includes modifying the electronic power steering system according to the low system status thereby decreasing an amount of power assistance. 35. The method of claim 26, wherein the vehicle system is a low speed follow system, and changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the low speed follow system to OFF, and modifying the vehicle system based on the control parameter includes deactivating the low speed follow system. 36. The method of claim 26, wherein the vehicle system is auto cruise control system, and changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the auto cruise control system to OFF, and modifying the vehicle system based on the control parameter includes switching the auto cruise control system to OFF. 37. 
The method of claim 26, wherein the monitoring system includes a portable sensor that is worn by the driver that monitors information about the state of the driver, and calculating the driver state index of the driver includes calculating the driver state index of the driver using information about the state of the driver from the portable sensor. 38. A response system for controlling a vehicle system in a vehicle, comprising:
a monitoring system including one or more sensors that detect monitoring information, wherein the monitoring information is information about a state of a driver; and an electronic control unit that receives the monitoring information from the monitoring system, determines a driver state index of the driver using the monitoring information, and modifies a control parameter of the vehicle system based on the driver state index, wherein the control parameter is a value that defines an operational state of the vehicle system, and wherein the electronic control unit controls the vehicle system using the control parameter. 39. The response system of claim 38, wherein the electronic control unit further determines if the driver is distracted based on the driver state index and controls the vehicle system using the control parameter to increase alertness of the driver when the driver is distracted. 40. The response system of claim 38, wherein the vehicle system provides driving assistance to the driver, and the electronic control unit further determines if the driver is distracted based on the driver state index and controls the vehicle system using the control parameter to reduce driving assistance when the driver is distracted. 41. The response system of claim 38, wherein the value that defines the operational state of the vehicle system is an amount of driving assistance provided by the vehicle system, and the electronic control unit modifies the control parameter of the vehicle system based on the driver state index by decreasing the amount of driving assistance provided by the vehicle system according to the driver state index. 42. 
The response system of claim 38, wherein the electronic control unit further determines an initial control parameter of the vehicle system, wherein the initial control parameter is an ON operational state or an OFF operational state of the vehicle system, and the electronic control unit modifies the control parameter of the vehicle system based on the driver state index by setting the control parameter to an operational state opposite of the initial control parameter. 43. The response system of claim 38, including a portable sensor that is worn by the driver that detects monitoring information, wherein the monitoring information detected by the portable sensor is information about the state of the driver, and the electronic control unit further receives the monitoring information from the portable sensor and further determines the driver state index of the driver using the monitoring information from the portable sensor. 44. The response system of claim 38, wherein the vehicle system is an electronic power steering system and the control parameter is an amount of steering assistance applied by the electronic power steering system, and wherein the electronic control unit modifies the control parameter of the electronic power steering system by decreasing the amount of steering assistance applied by the electronic power steering system when the driver is distracted. 45. The response system of claim 38, wherein the vehicle system is a low speed follow system, and wherein the electronic control unit modifies the control parameter of the low speed follow system to OFF when the driver is distracted. | Methods of assessing driver behavior include monitoring vehicle systems and driver monitoring systems to accommodate for a driver's slow reaction time, attention lapse and/or alertness. When it is determined that a driver is drowsy, for example, the response system may modify the operation of one or more vehicle systems. 
The systems that may be modified include: visual devices, audio devices, tactile devices, antilock brake systems, automatic brake prefill systems, brake assist systems, auto cruise control systems, electronic stability control systems, collision warning systems, lane keep assist systems, blind spot indicator systems, electronic pretensioning systems and climate control systems.1.-25. (canceled) 26. A method of controlling a vehicle system in a vehicle, comprising:
receiving monitoring information about a state of a driver from a monitoring system of the vehicle; receiving vehicle operating information about the vehicle system from the vehicle system; calculating a driver state index of the driver using the monitoring information about the state of the driver and the vehicle operating information about the vehicle system; changing a control parameter of the vehicle system based on the driver state index, wherein the control parameter is a system status of the vehicle system; and modifying control of the vehicle system using the control parameter. 27. The method of claim 26, wherein changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the vehicle system to increase alertness of the driver. 28. The method of claim 26, wherein the vehicle system provides driving assistance to the driver and changing the control parameter of the vehicle system based on the driver state index includes changing a level of driving assistance to increase alertness of the driver. 29. The method of claim 26, including determining if the driver is distracted based on the driver state index and changing the control parameter of the vehicle system based on the driver state index to increase alertness of the driver when the driver is distracted. 30. The method of claim 29, wherein changing the control parameter of the vehicle system based on the driver state index includes changing the system status from ON to OFF when the driver is distracted. 31. The method of claim 30, wherein changing the system status of the vehicle system and modifying control of the vehicle system using the system status causes an increase in effort into driving the vehicle by the driver. 32. 
The method of claim 26, including determining an initial system status of the vehicle system based on the vehicle operating information, and wherein changing the control parameter of the vehicle system based on the driver state index includes changing the system status from the initial system status based on the driver state index. 33. The method of claim 32, wherein the initial system status of the vehicle system is ON and changing the system status from the initial system status based on the driver state index includes setting the system status to OFF. 34. The method of claim 26, wherein the vehicle system is an electronic power steering system, and changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the electronic power steering system to a low system status, and modifying the vehicle system based on the control parameter includes modifying the electronic power steering system according to the low system status thereby decreasing an amount of power assistance. 35. The method of claim 26, wherein the vehicle system is a low speed follow system, and changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the low speed follow system to OFF, and modifying the vehicle system based on the control parameter includes deactivating the low speed follow system. 36. The method of claim 26, wherein the vehicle system is an auto cruise control system, and changing the control parameter of the vehicle system based on the driver state index includes changing the system status of the auto cruise control system to OFF, and modifying the vehicle system based on the control parameter includes switching the auto cruise control system to OFF. 37. 
The method of claim 26, wherein the monitoring system includes a portable sensor that is worn by the driver and that monitors information about the state of the driver, and calculating the driver state index of the driver includes calculating the driver state index of the driver using information about the state of the driver from the portable sensor. 38. A response system for controlling a vehicle system in a vehicle, comprising:
a monitoring system including one or more sensors that detect monitoring information, wherein the monitoring information is information about a state of a driver; and an electronic control unit that receives the monitoring information from the monitoring system, determines a driver state index of the driver using the monitoring information, and modifies a control parameter of the vehicle system based on the driver state index, wherein the control parameter is a value that defines an operational state of the vehicle system, and wherein the electronic control unit controls the vehicle system using the control parameter. 39. The response system of claim 38, wherein the electronic control unit further determines if the driver is distracted based on the driver state index and controls the vehicle system using the control parameter to increase alertness of the driver when the driver is distracted. 40. The response system of claim 38, wherein the vehicle system provides driving assistance to the driver, and the electronic control unit further determines if the driver is distracted based on the driver state index and controls the vehicle system using the control parameter to reduce driving assistance when the driver is distracted. 41. The response system of claim 38, wherein the value that defines the operational state of the vehicle system is an amount of driving assistance provided by the vehicle system, and the electronic control unit modifies the control parameter of the vehicle system based on the driver state index by decreasing the amount of driving assistance provided by the vehicle system according to the driver state index. 42. 
The response system of claim 38, wherein the electronic control unit further determines an initial control parameter of the vehicle system, wherein the initial control parameter is an ON operational state or an OFF operational state of the vehicle system, and the electronic control unit modifies the control parameter of the vehicle system based on the driver state index by setting the control parameter to an operational state opposite of the initial control parameter. 43. The response system of claim 38, including a portable sensor that is worn by the driver that detects monitoring information, wherein the monitoring information detected by the portable sensor is information about the state of the driver, and the electronic control unit further receives the monitoring information from the portable sensor and further determines the driver state index of the driver using the monitoring information from the portable sensor. 44. The response system of claim 38, wherein the vehicle system is an electronic power steering system and the control parameter is an amount of steering assistance applied by the electronic power steering system, and wherein the electronic control unit modifies the control parameter of the electronic power steering system by decreasing the amount of steering assistance applied by the electronic power steering system when the driver is distracted. 45. The response system of claim 38, wherein the vehicle system is a low speed follow system, and wherein the electronic control unit modifies the control parameter of the low speed follow system to OFF when the driver is distracted. | 2,600 |
10,714 | 10,714 | 15,073,411 | 2,683 | Devices having an air bearing surface (ABS), the device including a write pole; a near field transducer (NFT) that includes a peg and a disc, wherein the peg is at the ABS of the device; an overcoat that includes a low thermal conductivity layer, the low thermal conductivity layer including a material that has a thermal conductivity of not greater than 5 W/mK. | 1. A device having an air bearing surface (ABS), the device comprising:
a write pole; a near field transducer (NFT) comprising a peg and a disc, wherein the peg is at the ABS of the device; an overcoat, the overcoat comprising: a low thermal conductivity layer, the low thermal conductivity layer comprising a material that has a thermal conductivity of not greater than 5 W/mK. 2. The device according to claim 1, wherein the low thermal conductivity layer comprises a material that has a thermal conductivity of not greater than 2 W/mK. 3. The device according to claim 1, wherein the low thermal conductivity layer comprises fused silica (SiO2), yttria stabilized zirconia (YSZ), cerium oxide (CeO2), nickel oxide (NiO), thorium oxide (ThO2), tantalum oxide (TaO), tantalum silicate (TaSiO), zirconium oxide (ZrO2), or combinations thereof. 4. The device according to claim 1, wherein the low thermal conductivity layer comprises tantalum silicate (TaSiO). 5. The device according to claim 1, wherein the low thermal conductivity layer comprises SiO2, YSZ, CeO2, NiO, ThO2, TaSiO, ZrO2, MgAl2O4, Mullite, Gd2Zr2O7, LaMgAl11O19, Monazite, Sm2Zr2O7, La2Zr2O7, Nd2Zr2O7, Zr3Y4O12, 0.1WO3-0.9 Nb2O5, WNb12O33, W4Nb26O77, W3Nb14O44, (3.5Eu-3.5Tm-7Y)SZ, (3.5Eu-3.5Yb-7Y)SZ, (Zr, Hf)3Y4O12, Bi3Ti3O12, Sr2Nb2O7, La5/6Yb1/6Zr2O7., TaZrO, NbZrO, or combinations thereof. 6. The device according to claim 1, wherein the low thermal conductivity layer comprises LaPO4, Dy2SrAl2O7, SrZrO3, 7YSZ, Yb2Sn2O7, La(Mg1/4Al1/2Ta1/4)O3, Gd2Zr2O7, Ba2ErAlO5, BaNd2Ti3O10, (Eu,Tm,Y)ZrO2, W3Nb14O44, (Zr,Hf)3Y4O12, (Zr0.5Hf0.5)0.87Y0.13O2, Yb0.2Ta0.2Zr0.6O2, (La5/6Yb1/6)Zr2O7, Sr2Nb2O7, Bi4Ti3O12, Gd6Ca4(SiO4)6O, La2Mo2O9, 7YSZ+3.5EuO1.5+3.5TmO1.5, 7YSZ+3.5EuO0.15+3.5YbO1.5, 8YSZ, Zr3Y4O12, W3Nb14O44, WNb12O33, W4Nb26O77, tri-doped YSZ (Zr,Hf)0.87Y0.13O1.93, YPO4, WSe2, or combinations thereof. 7. The device according to claim 1, wherein the low thermal conductivity layer has a refractive index of not less than 1.5, an extinction coefficient of not greater than 0.5, or both. 8. 
The device according to claim 1 further comprising a diamond like carbon (DLC) layer disposed on at least a portion of the low thermal conductivity layer. 9. The device according to claim 1 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof. 10. The device according to claim 1, wherein the low thermal conductivity layer is in contact with at least the peg of the NFT. 11. The device according to claim 10 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof in contact with the low thermal conductivity layer on the side of the low thermal conductivity layer opposite the peg, and an overcoat layer in contact with the gas barrier layer, an adhesion layer, or any combination thereof. 12. The device according to claim 1, wherein the low thermal conductivity layer comprises a multilayer structure comprising at least two layers of low thermal conductivity material. 13. A device having an air bearing surface (ABS), the device comprising:
a write pole; a near field transducer (NFT) comprising a peg and a disc, wherein the peg is at the ABS of the device; an overcoat, the overcoat comprising: a low thermal conductivity layer in contact with at least the peg of the NFT, the low thermal conductivity layer comprising a material that has a thermal conductivity of not greater than 5 W/mK. 14. The device according to claim 13, wherein the low thermal conductivity layer comprises:
fused silica (SiO2), yttria stabilized zirconia (YSZ), cerium oxide (CeO2), nickel oxide (NiO), thorium oxide (ThO2), tantalum oxide (TaO), tantalum silicate (TaSiO), zirconium oxide (ZrO2), or combinations thereof; YSZ, CeO2, NiO, ThO2, TaSiO, MgAl2O4, Mullite, Gd2Zr2O7, LaMgAl11O19, Monazite, Sm2Zr2O7, La2Zr2O7, Nd2Zr2O7, Zr3Y4O12, 0.1WO3-0.9 Nb2O5, WNb12O33, W4Nb26O77, W3Nb14O44, (3.5Eu-3.5Tm-7Y)SZ, (3.5Eu-3.5Yb-7Y)SZ, (Zr, Hf)3Y4O12, Bi3Ti3O12, Sr2Nb2O7, La5/6Yb1/6Zr2O7., TaZrO, NbZrO, or combinations thereof; LaPO4, Dy2SrAl2O7, SrZrO3, 7YSZ, Yb2Sn2O7, La(Mg1/4Al1/2Ta1/4)O3, Gd2Zr2O7, Ba2ErAlO5, BaNd2Ti3O10, (Eu,Tm,Y)ZrO2, W3Nb14O44, (Zr,Hf)3Y4O12, (Zr0.5Hf0.5)0.87Y0.13O2, Yb0.2Ta0.2Zr0.6O2, (La5/6Yb1/6)Zr2O7, Sr2Nb2O7, Bi4Ti3O12, Gd6Ca4(SiO4)6O, La2Mo2O9, 7YSZ+3.5EuO1.5+3.5TmO1.5, 7YSZ+3.5EuO0.15+3.5YbO1.5, 8YSZ, Zr3Y4O12, W3Nb14O44, WNb12O33, W4Nb26O77, tri-doped YSZ (Zr,Hf)0.87Y0.13O1.93, YPO4, WSe2, or combinations thereof; or combinations thereof. 15. The device according to claim 13 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof in contact with the low thermal conductivity layer on the side of the low thermal conductivity layer opposite the peg, and an overcoat layer in contact with the gas barrier layer, an adhesion layer, or any combination thereof. 16. The device according to claim 15, wherein the protective layer comprises diamond like carbon (DLC). 17. The device according to claim 13, wherein the low thermal conductivity layer comprises a multilayer structure comprising at least two layers of low thermal conductivity material. 18. A device having an air bearing surface (ABS), the device comprising:
a write pole; a near field transducer (NFT) comprising a peg and a disc, wherein the peg is at the ABS of the device; an overcoat, the overcoat comprising: a low thermal conductivity layer in contact with at least the peg of the NFT, the low thermal conductivity layer comprising a material that has a thermal conductivity of not greater than 5 W/mK; and a protective layer. 19. The device according to claim 18 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof positioned between the low thermal conductivity layer and the protective layer. 20. The device according to claim 18, wherein the low thermal conductivity layer comprises:
fused silica (SiO2), yttria stabilized zirconia (YSZ), cerium oxide (CeO2), nickel oxide (NiO), thorium oxide (ThO2), tantalum oxide (TaO), tantalum silicate (TaSiO), zirconium oxide (ZrO2), or combinations thereof; YSZ, CeO2, NiO, ThO2, TaSiO, MgAl2O4, Mullite, Gd2Zr2O7, LaMgAl11O19, Monazite, Sm2Zr2O7, La2Zr2O7, Nd2Zr2O7, Zr3Y4O12, 0.1WO3-0.9 Nb2O5, WNb12O33, W4Nb26O77, W3Nb14O44, (3.5Eu-3.5Tm-7Y)SZ, (3.5Eu-3.5Yb-7Y)SZ, (Zr, Hf)3Y4O12, Bi3Ti3O12, Sr2Nb2O7, La5/6Yb1/6Zr2O7, TaZrO, NbZrO, or combinations thereof; LaPO4, Dy2SrAl2O7, SrZrO3, 7YSZ, Yb2Sn2O7, La(Mg1/4Al1/2Ta1/4)O3, Gd2Zr2O7, Ba2ErAlO5, BaNd2Ti3O10, (Eu,Tm,Y)ZrO2, W3Nb14O44, (Zr,Hf)3Y4O12, (Zr0.5Hf0.5)0.87Y0.13O2, Yb0.2Ta0.2Zr0.6O2, (La5/6Yb1/6)Zr2O7, Sr2Nb2O7, Bi4Ti3O12, Gd6Ca4(SiO4)6O, La2Mo2O9, 7YSZ+3.5EuO1.5+3.5TmO1.5, 7YSZ+3.5EuO0.15+3.5YbO1.5, 8YSZ, Zr3Y4O12, W3Nb14O44, WNb12O33, W4Nb26O77, tri-doped YSZ (Zr,Hf)0.87Y0.13O1.93, YPO4, WSe2, or combinations thereof; or combinations thereof. | Devices having an air bearing surface (ABS), the device including a write pole; a near field transducer (NFT) that includes a peg and a disc, wherein the peg is at the ABS of the device; an overcoat that includes a low thermal conductivity layer, the low thermal conductivity layer including a material that has a thermal conductivity of not greater than 5 W/mK.1. A device having an air bearing surface (ABS), the device comprising:
a write pole; a near field transducer (NFT) comprising a peg and a disc, wherein the peg is at the ABS of the device; an overcoat, the overcoat comprising: a low thermal conductivity layer, the low thermal conductivity layer comprising a material that has a thermal conductivity of not greater than 5 W/mK. 2. The device according to claim 1, wherein the low thermal conductivity layer comprises a material that has a thermal conductivity of not greater than 2 W/mK. 3. The device according to claim 1, wherein the low thermal conductivity layer comprises fused silica (SiO2), yttria stabilized zirconia (YSZ), cerium oxide (CeO2), nickel oxide (NiO), thorium oxide (ThO2), tantalum oxide (TaO), tantalum silicate (TaSiO), zirconium oxide (ZrO2), or combinations thereof. 4. The device according to claim 1, wherein the low thermal conductivity layer comprises tantalum silicate (TaSiO). 5. The device according to claim 1, wherein the low thermal conductivity layer comprises SiO2, YSZ, CeO2, NiO, ThO2, TaSiO, ZrO2, MgAl2O4, Mullite, Gd2Zr2O7, LaMgAl11O19, Monazite, Sm2Zr2O7, La2Zr2O7, Nd2Zr2O7, Zr3Y4O12, 0.1WO3-0.9 Nb2O5, WNb12O33, W4Nb26O77, W3Nb14O44, (3.5Eu-3.5Tm-7Y)SZ, (3.5Eu-3.5Yb-7Y)SZ, (Zr, Hf)3Y4O12, Bi3Ti3O12, Sr2Nb2O7, La5/6Yb1/6Zr2O7., TaZrO, NbZrO, or combinations thereof. 6. The device according to claim 1, wherein the low thermal conductivity layer comprises LaPO4, Dy2SrAl2O7, SrZrO3, 7YSZ, Yb2Sn2O7, La(Mg1/4Al1/2Ta1/4)O3, Gd2Zr2O7, Ba2ErAlO5, BaNd2Ti3O10, (Eu,Tm,Y)ZrO2, W3Nb14O44, (Zr,Hf)3Y4O12, (Zr0.5Hf0.5)0.87Y0.13O2, Yb0.2Ta0.2Zr0.6O2, (La5/6Yb1/6)Zr2O7, Sr2Nb2O7, Bi4Ti3O12, Gd6Ca4(SiO4)6O, La2Mo2O9, 7YSZ+3.5EuO1.5+3.5TmO1.5, 7YSZ+3.5EuO0.15+3.5YbO1.5, 8YSZ, Zr3Y4O12, W3Nb14O44, WNb12O33, W4Nb26O77, tri-doped YSZ (Zr,Hf)0.87Y0.13O1.93, YPO4, WSe2, or combinations thereof. 7. The device according to claim 1, wherein the low thermal conductivity layer has a refractive index of not less than 1.5, an extinction coefficient of not greater than 0.5, or both. 8. 
The device according to claim 1 further comprising a diamond like carbon (DLC) layer disposed on at least a portion of the low thermal conductivity layer. 9. The device according to claim 1 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof. 10. The device according to claim 1, wherein the low thermal conductivity layer is in contact with at least the peg of the NFT. 11. The device according to claim 10 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof in contact with the low thermal conductivity layer on the side of the low thermal conductivity layer opposite the peg, and an overcoat layer in contact with the gas barrier layer, an adhesion layer, or any combination thereof. 12. The device according to claim 1, wherein the low thermal conductivity layer comprises a multilayer structure comprising at least two layers of low thermal conductivity material. 13. A device having an air bearing surface (ABS), the device comprising:
a write pole; a near field transducer (NFT) comprising a peg and a disc, wherein the peg is at the ABS of the device; an overcoat, the overcoat comprising: a low thermal conductivity layer in contact with at least the peg of the NFT, the low thermal conductivity layer comprising a material that has a thermal conductivity of not greater than 5 W/mK. 14. The device according to claim 13, wherein the low thermal conductivity layer comprises:
fused silica (SiO2), yttria stabilized zirconia (YSZ), cerium oxide (CeO2), nickel oxide (NiO), thorium oxide (ThO2), tantalum oxide (TaO), tantalum silicate (TaSiO), zirconium oxide (ZrO2), or combinations thereof; YSZ, CeO2, NiO, ThO2, TaSiO, MgAl2O4, Mullite, Gd2Zr2O7, LaMgAl11O19, Monazite, Sm2Zr2O7, La2Zr2O7, Nd2Zr2O7, Zr3Y4O12, 0.1WO3-0.9 Nb2O5, WNb12O33, W4Nb26O77, W3Nb14O44, (3.5Eu-3.5Tm-7Y)SZ, (3.5Eu-3.5Yb-7Y)SZ, (Zr, Hf)3Y4O12, Bi3Ti3O12, Sr2Nb2O7, La5/6Yb1/6Zr2O7., TaZrO, NbZrO, or combinations thereof; LaPO4, Dy2SrAl2O7, SrZrO3, 7YSZ, Yb2Sn2O7, La(Mg1/4Al1/2Ta1/4)O3, Gd2Zr2O7, Ba2ErAlO5, BaNd2Ti3O10, (Eu,Tm,Y)ZrO2, W3Nb14O44, (Zr,Hf)3Y4O12, (Zr0.5Hf0.5)0.87Y0.13O2, Yb0.2Ta0.2Zr0.6O2, (La5/6Yb1/6)Zr2O7, Sr2Nb2O7, Bi4Ti3O12, Gd6Ca4(SiO4)6O, La2Mo2O9, 7YSZ+3.5EuO1.5+3.5TmO1.5, 7YSZ+3.5EuO0.15+3.5YbO1.5, 8YSZ, Zr3Y4O12, W3Nb14O44, WNb12O33, W4Nb26O77, tri-doped YSZ (Zr,Hf)0.87Y0.13O1.93, YPO4, WSe2, or combinations thereof; or combinations thereof. 15. The device according to claim 13 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof in contact with the low thermal conductivity layer on the side of the low thermal conductivity layer opposite the peg, and an overcoat layer in contact with the gas barrier layer, an adhesion layer, or any combination thereof. 16. The device according to claim 15, wherein the protective layer comprises diamond like carbon (DLC). 17. The device according to claim 13, wherein the low thermal conductivity layer comprises a multilayer structure comprising at least two layers of low thermal conductivity material. 18. A device having an air bearing surface (ABS), the device comprising:
a write pole; a near field transducer (NFT) comprising a peg and a disc, wherein the peg is at the ABS of the device; an overcoat, the overcoat comprising: a low thermal conductivity layer in contact with at least the peg of the NFT, the low thermal conductivity layer comprising a material that has a thermal conductivity of not greater than 5 W/mK; and a protective layer. 19. The device according to claim 18 further comprising a corrosion resistant layer, a gas barrier layer, an adhesion layer, or any combination thereof positioned between the low thermal conductivity layer and the protective layer. 20. The device according to claim 18, wherein the low thermal conductivity layer comprises:
fused silica (SiO2), yttria stabilized zirconia (YSZ), cerium oxide (CeO2), nickel oxide (NiO), thorium oxide (ThO2), tantalum oxide (TaO), tantalum silicate (TaSiO), zirconium oxide (ZrO2), or combinations thereof; YSZ, CeO2, NiO, ThO2, TaSiO, MgAl2O4, Mullite, Gd2Zr2O7, LaMgAl11O19, Monazite, Sm2Zr2O7, La2Zr2O7, Nd2Zr2O7, Zr3Y4O12, 0.1WO3-0.9 Nb2O5, WNb12O33, W4Nb26O77, W3Nb14O44, (3.5Eu-3.5Tm-7Y)SZ, (3.5Eu-3.5Yb-7Y)SZ, (Zr, Hf)3Y4O12, Bi3Ti3O12, Sr2Nb2O7, La5/6Yb1/6Zr2O7, TaZrO, NbZrO, or combinations thereof; LaPO4, Dy2SrAl2O7, SrZrO3, 7YSZ, Yb2Sn2O7, La(Mg1/4Al1/2Ta1/4)O3, Gd2Zr2O7, Ba2ErAlO5, BaNd2Ti3O10, (Eu,Tm,Y)ZrO2, W3Nb14O44, (Zr,Hf)3Y4O12, (Zr0.5Hf0.5)0.87Y0.13O2, Yb0.2Ta0.2Zr0.6O2, (La5/6Yb1/6)Zr2O7, Sr2Nb2O7, Bi4Ti3O12, Gd6Ca4(SiO4)6O, La2Mo2O9, 7YSZ+3.5EuO1.5+3.5TmO1.5, 7YSZ+3.5EuO0.15+3.5YbO1.5, 8YSZ, Zr3Y4O12, W3Nb14O44, WNb12O33, W4Nb26O77, tri-doped YSZ (Zr,Hf)0.87Y0.13O1.93, YPO4, WSe2, or combinations thereof; or combinations thereof. | 2,600 |
10,715 | 10,715 | 16,003,055 | 2,626 | A display substrate includes an insulating substrate, a first gate line, a first lower electrode, a second lower electrode, a first upper electrode, and a second upper electrode. The insulating substrate includes a first pixel region and a second pixel region located at a first direction from the first pixel region. The first gate line extends in a second direction crossing the first direction on the insulating substrate. The first and the second lower electrodes are in the first and the second pixel regions, respectively. The first upper electrode overlaps the first lower electrode in the first pixel region and includes a first slit pattern extending in a third direction different from the first and the second directions. The second upper electrode overlaps the second lower electrode in the second pixel region and includes a second slit pattern extending in a fourth direction different from the first to the third directions. | 1. A display substrate comprising:
an insulating substrate comprising a first pixel region and a second pixel region located in a first direction from the first pixel region; a first gate line extending in a second direction crossing the first direction on the insulating substrate; a first lower electrode in the first pixel region; a second lower electrode in the second pixel region; a first upper electrode overlapping the first lower electrode in the first pixel region and comprising a first slit pattern comprising a plurality of first slits extending in a third direction different from the first and the second directions; and a second upper electrode overlapping the second lower electrode in the second pixel region and comprising a second slit pattern comprising a plurality of second slits extending in a fourth direction different from the first to the third directions, wherein the first pixel region forms a first pixel in a first row and the second pixel region forms a second pixel in a second row adjacent to the first row, wherein the first row comprising the first pixel and the second row comprising the second pixel have different domains from each other, wherein a first gamma reference voltage group of gamma voltages according to a first gamma curve for compensating for a variation of pixels having a same first domain in the first row is applied to the first pixel in the first row and a first group of rows of pixels, and a second gamma reference voltage group of gamma voltages according to a second gamma curve for compensating for a variation of pixels having a same second domain in the second row is applied to the second pixel in the second row and a second group of rows of pixels alternating with the first group of rows of pixels, wherein the second gamma reference voltage group is different from the first gamma reference voltage group and the second gamma reference voltage group is not applied to pixels in the first group of rows of pixels, and wherein a luminance difference occurring 
between the adjacent first and second rows having the different domains is compensated by the different first and second gamma reference voltage groups. 2. The display substrate of claim 1, wherein a first end of each of the first slits has a curved shape, and a second end of each of the first slits has a flat shape, and
wherein a third end of each of the second slits has a flat shape, and a fourth end of each of the second slits has a curved shape. 3. The display substrate of claim 2, wherein the first end of each of the first slits and the fourth end of each of the second slits are distal ends with respect to an area between the first pixel region and the second pixel region, and
wherein the second end of each of the first slits and the third end of each of the second slits are proximal ends with respect to the area between the first pixel region and the second pixel region. 4. The display substrate of claim 2, wherein the first end of each of the first slits has an inclined angle smaller than an angle between the second direction and the third direction, and
wherein the fourth end of each of the second slits has an inclined angle smaller than an angle between the second direction and the fourth direction. 5. The display substrate of claim 1, further comprising:
an alignment layer on the insulating substrate on which the first and the second upper electrodes are located, wherein an alignment direction of the alignment layer in the first pixel region is same as that of the alignment layer in the second pixel region. 6. The display substrate of claim 5, wherein the alignment direction of the alignment layer is the first direction or the second direction. 7. The display substrate of claim 1, wherein the second direction is perpendicular to the first direction, and
wherein the third direction and the fourth direction are symmetric to each other with respect to the second direction. 8. The display substrate of claim 1, further comprising:
a second gate line in parallel with the first gate line; a first data line crossing the first and second gate lines; a first switching element in the first pixel region and electrically coupled to the first gate line and the first data line; and a second switching element in the second pixel region and electrically coupled to the second gate line and the first data line, wherein the first lower electrode or the first upper electrode is electrically coupled to the first switching element, and wherein the second lower electrode or the second upper electrode is electrically coupled to the second switching element. 9. The display substrate of claim 8, wherein the first gate line is between the first and the second pixel regions, and
wherein the second pixel region is between the first and the second gate lines. 10. The display substrate of claim 8, wherein the first and the second pixel regions are between the first and the second gate lines. 11. The display substrate of claim 8, wherein the first data line extends parallel to the plurality of first slits in the third direction at the first upper electrode and extends parallel to the plurality of second slits in the fourth direction at the second upper electrode. 12. The display substrate of claim 8, wherein the first data line extends in the first direction. 13. The display substrate of claim 1, further comprising:
a second gate line in parallel with the first gate line; a first data line crossing the first and the second gate lines; a second data line in parallel with the first data line and crossing the first and the second gate lines; a first switching element in the first pixel region and electrically coupled to the first gate line and the second data line; and a second switching element in the second pixel region and electrically coupled to the second gate line and the first data line, wherein the first lower electrode or the first upper electrode is electrically coupled to the first switching element, and wherein the second lower electrode or the second upper electrode is electrically coupled to the second switching element. 14. The display substrate of claim 13, wherein the first and the second pixel regions are between the first and the second data lines, and
wherein the first switching element is adjacent to the second data line, and the second switching element is adjacent to the first data line. 15. The display substrate of claim 1, further comprising:
a first data line crossing the first gate line, wherein both ends of each of the first slits have different shapes from each other, wherein a first slit among the first slits that is parallel to the first data line is shorter than a second slit among the first slits that is parallel to the first data line, with the both ends of the first slit among the first slits having a same shape as the both ends of the second slit among the first slits, respectively, wherein both ends of each of the second slits have different shapes from each other, and wherein the shapes of the both ends of each of the second slits are symmetric to the shapes of the both ends of each of the first slits. 16. A display substrate comprising:
an insulating substrate comprising a first pixel region and a second pixel region located in a first direction from the first pixel region; a first gate line extending in a second direction crossing the first direction on the insulating substrate; a first lower electrode in the first pixel region; a second lower electrode in the second pixel region; a first upper electrode overlapping the first lower electrode in the first pixel region and comprising a first slit pattern comprising a plurality of first slits each sequentially extending in a third direction and in a fourth direction, the third and the fourth directions being different from each other, each of the third and the fourth directions being different from the first and the second directions; and a second upper electrode overlapping the second lower electrode in the second pixel region and comprising a second slit pattern comprising a plurality of second slits each sequentially extending in the fourth direction and the third direction, wherein the first pixel region forms a first pixel in a first row and the second pixel region forms a second pixel in a second row adjacent to the first row, wherein the first row comprising the first pixel and the second row comprising the second pixel have different domains from each other, wherein a first gamma reference voltage group of gamma voltages according to a first gamma curve for compensating for a variation of pixels having a same first domain in the first row is applied to the first pixel in the first row and a first group of rows of pixels, and a second gamma reference voltage group of gamma voltages according to a second gamma curve for compensating for a variation of pixels having a same second domain in the second row is applied to the second pixel in the second row and a second group of rows of pixels alternating with the first group of rows of pixels, wherein the second gamma reference voltage group is different from the first gamma reference voltage group 
and the second gamma reference voltage group is not applied to pixels in the first group of rows of pixels, and wherein a luminance difference occurring between the adjacent first and second rows having the different domains is compensated by the different first and second gamma reference voltage groups. 17. The display substrate of claim 16, wherein a first end of each of the first slits and a fourth end of each of the second slits are distal ends with respect to an area between the first pixel region and the second pixel region, and
wherein a second end of each of the first slits and a third end of each of the second slits are proximal ends with respect to the area between the first pixel region and the second pixel region. 18. The display substrate of claim 16, wherein a first end of each of the first slits has an inclined angle smaller than an angle between the second direction and the third direction, and
wherein a fourth end of each of the second slits has an inclined angle smaller than an angle between the second direction and the third direction. 19. The display substrate of claim 16, further comprising:
a first data line crossing the first gate line, wherein both ends of each of the first slits have different shapes from each other, wherein a first slit among the first slits that is parallel to the first data line is shorter than a second slit among the first slits that is parallel to the first data line, with the both ends of the first slit among the first slits having a same shape as the both ends of the second slit among the first slits, respectively, and wherein the shapes of the both ends of each of the second slits are symmetric to the shapes of the both ends of each of the first slits. 20. A display device comprising:
a display panel comprising a first pixel, a second pixel and a first gate line, the first pixel comprising a first lower electrode and a first upper electrode overlapping the first lower electrode, the first upper electrode having a first slit pattern, the second pixel located in a first direction from the first pixel, the second pixel comprising a second lower electrode and a second upper electrode overlapping the second lower electrode, the second upper electrode having a second slit pattern extending in a direction different from a longitudinal direction of the first slit pattern, and the first gate line extending in a second direction different from the first direction; a gamma voltage generator configured to generate a first gamma reference voltage group and a second gamma reference voltage group, the first and the second gamma reference voltage groups having different voltage levels; a controller configured to output first and second pixel data corresponding to the first and the second pixels; and a data driver configured to convert the first pixel data to a first pixel voltage based on the first gamma reference voltage group, to convert the second pixel data to a second pixel voltage based on the second gamma reference voltage group, and to output the first pixel voltage and the second pixel voltage to the first pixel and the second pixel, respectively, wherein the first pixel is in a first row and the second pixel is in a second row adjacent to the first row, wherein the first row comprising the first pixel and the second row comprising the second pixel have different domains from each other, wherein the first gamma reference voltage group is a group of gamma voltages according to a first gamma curve for compensating for a variation of pixels having a same first domain in the first row and is applied to the first pixel in the first row and a first group of rows of pixels, and the second gamma reference voltage group is a group of gamma voltages according to 
a second gamma curve for compensating for a variation of pixels having a same second domain in the second row and is applied to the second pixel in the second row and a second group of rows of pixels alternating with the first group of rows of pixels, wherein the second gamma reference voltage group is different from the first gamma reference voltage group and the second gamma reference voltage group is not applied to pixels in the first group of rows of pixels, and wherein a luminance difference occurring between the adjacent first and second rows having the different domains is compensated by the first and second gamma reference voltage groups having the different voltage levels. | A display substrate includes an insulating substrate, a first gate line, a first lower electrode, a second lower electrode, a first upper electrode, and a second upper electrode. The insulating substrate includes a first pixel region and a second pixel region located in a first direction from the first pixel region. The first gate line extends in a second direction crossing the first direction on the insulating substrate. The first and the second lower electrodes are in the first and the second pixel regions, respectively. The first upper electrode overlaps the first lower electrode in the first pixel region and includes a first slit pattern extending in a third direction different from the first and the second directions. The second upper electrode overlaps the second lower electrode in the second pixel region and includes a second slit pattern extending in a fourth direction different from the first to the third directions. 1. A display substrate comprising:
an insulating substrate comprising a first pixel region and a second pixel region located in a first direction from the first pixel region; a first gate line extending in a second direction crossing the first direction on the insulating substrate; a first lower electrode in the first pixel region; a second lower electrode in the second pixel region; a first upper electrode overlapping the first lower electrode in the first pixel region and comprising a first slit pattern comprising a plurality of first slits extending in a third direction different from the first and the second directions; and a second upper electrode overlapping the second lower electrode in the second pixel region and comprising a second slit pattern comprising a plurality of second slits extending in a fourth direction different from the first to the third directions, wherein the first pixel region forms a first pixel in a first row and the second pixel region forms a second pixel in a second row adjacent to the first row, wherein the first row comprising the first pixel and the second row comprising the second pixel have different domains from each other, wherein a first gamma reference voltage group of gamma voltages according to a first gamma curve for compensating for a variation of pixels having a same first domain in the first row is applied to the first pixel in the first row and a first group of rows of pixels, and a second gamma reference voltage group of gamma voltages according to a second gamma curve for compensating for a variation of pixels having a same second domain in the second row is applied to the second pixel in the second row and a second group of rows of pixels alternating with the first group of rows of pixels, wherein the second gamma reference voltage group is different from the first gamma reference voltage group and the second gamma reference voltage group is not applied to pixels in the first group of rows of pixels, and wherein a luminance difference occurring 
between the adjacent first and second rows having the different domains is compensated by the different first and second gamma reference voltage groups. 2. The display substrate of claim 1, wherein a first end of each of the first slits has a curved shape, and a second end of each of the first slits has a flat shape, and
wherein a third end of each of the second slits has a flat shape, and a fourth end of each of the second slits has a curved shape. 3. The display substrate of claim 2, wherein the first end of each of the first slits and the fourth end of each of the second slits are distal ends with respect to an area between the first pixel region and the second pixel region, and
wherein the second end of each of the first slits and the third end of each of the second slits are proximal ends with respect to the area between the first pixel region and the second pixel region. 4. The display substrate of claim 2, wherein the first end of each of the first slits has an inclined angle smaller than an angle between the second direction and the third direction, and
wherein the fourth end of each of the second slits has an inclined angle smaller than an angle between the second direction and the fourth direction. 5. The display substrate of claim 1, further comprising:
an alignment layer on the insulating substrate on which the first and the second upper electrodes are located, wherein an alignment direction of the alignment layer in the first pixel region is the same as that of the alignment layer in the second pixel region. 6. The display substrate of claim 5, wherein the alignment direction of the alignment layer is the first direction or the second direction. 7. The display substrate of claim 1, wherein the second direction is perpendicular to the first direction, and
wherein the third direction and the fourth direction are symmetric to each other with respect to the second direction. 8. The display substrate of claim 1, further comprising:
a second gate line in parallel with the first gate line; a first data line crossing the first and second gate lines; a first switching element in the first pixel region and electrically coupled to the first gate line and the first data line; and a second switching element in the second pixel region and electrically coupled to the second gate line and the first data line, wherein the first lower electrode or the first upper electrode is electrically coupled to the first switching element, and wherein the second lower electrode or the second upper electrode is electrically coupled to the second switching element. 9. The display substrate of claim 8, wherein the first gate line is between the first and the second pixel regions, and
wherein the second pixel region is between the first and the second gate lines. 10. The display substrate of claim 8, wherein the first and the second pixel regions are between the first and the second gate lines. 11. The display substrate of claim 8, wherein the first data line extends parallel to the plurality of first slits in the third direction at the first upper electrode and extends parallel to the plurality of second slits in the fourth direction at the second upper electrode. 12. The display substrate of claim 8, wherein the first data line extends in the first direction. 13. The display substrate of claim 1, further comprising:
a second gate line in parallel with the first gate line; a first data line crossing the first and the second gate lines; a second data line in parallel with the first data line and crossing the first and the second gate lines; a first switching element in the first pixel region and electrically coupled to the first gate line and the second data line; and a second switching element in the second pixel region and electrically coupled to the second gate line and the first data line, wherein the first lower electrode or the first upper electrode is electrically coupled to the first switching element, and wherein the second lower electrode or the second upper electrode is electrically coupled to the second switching element. 14. The display substrate of claim 13, wherein the first and the second pixel regions are between the first and the second data lines, and
wherein the first switching element is adjacent to the second data line, and the second switching element is adjacent to the first data line. 15. The display substrate of claim 1, further comprising:
a first data line crossing the first gate line, wherein both ends of each of the first slits have different shapes from each other, wherein a first slit among the first slits that is parallel to the first data line is shorter than a second slit among the first slits that is parallel to the first data line, with the both ends of the first slit among the first slits having a same shape as the both ends of the second slit among the first slits, respectively, wherein both ends of each of the second slits have different shapes from each other, and wherein the shapes of the both ends of each of the second slits are symmetric to the shapes of the both ends of each of the first slits. 16. A display substrate comprising:
an insulating substrate comprising a first pixel region and a second pixel region located in a first direction from the first pixel region; a first gate line extending in a second direction crossing the first direction on the insulating substrate; a first lower electrode in the first pixel region; a second lower electrode in the second pixel region; a first upper electrode overlapping the first lower electrode in the first pixel region and comprising a first slit pattern comprising a plurality of first slits each sequentially extending in a third direction and in a fourth direction, the third and the fourth directions being different from each other, each of the third and the fourth directions being different from the first and the second directions; and a second upper electrode overlapping the second lower electrode in the second pixel region and comprising a second slit pattern comprising a plurality of second slits each sequentially extending in the fourth direction and the third direction, wherein the first pixel region forms a first pixel in a first row and the second pixel region forms a second pixel in a second row adjacent to the first row, wherein the first row comprising the first pixel and the second row comprising the second pixel have different domains from each other, wherein a first gamma reference voltage group of gamma voltages according to a first gamma curve for compensating for a variation of pixels having a same first domain in the first row is applied to the first pixel in the first row and a first group of rows of pixels, and a second gamma reference voltage group of gamma voltages according to a second gamma curve for compensating for a variation of pixels having a same second domain in the second row is applied to the second pixel in the second row and a second group of rows of pixels alternating with the first group of rows of pixels, wherein the second gamma reference voltage group is different from the first gamma reference voltage group 
and the second gamma reference voltage group is not applied to pixels in the first group of rows of pixels, and wherein a luminance difference occurring between the adjacent first and second rows having the different domains is compensated by the different first and second gamma reference voltage groups. 17. The display substrate of claim 16, wherein a first end of each of the first slits and a fourth end of each of the second slits are distal ends with respect to an area between the first pixel region and the second pixel region, and
wherein a second end of each of the first slits and a third end of each of the second slits are proximal ends with respect to the area between the first pixel region and the second pixel region. 18. The display substrate of claim 16, wherein a first end of each of the first slits has an inclined angle smaller than an angle between the second direction and the third direction, and
wherein a fourth end of each of the second slits has an inclined angle smaller than an angle between the second direction and the third direction. 19. The display substrate of claim 16, further comprising:
a first data line crossing the first gate line, wherein both ends of each of the first slits have different shapes from each other, wherein a first slit among the first slits that is parallel to the first data line is shorter than a second slit among the first slits that is parallel to the first data line, with the both ends of the first slit among the first slits having a same shape as the both ends of the second slit among the first slits, respectively, and wherein the shapes of the both ends of each of the second slits are symmetric to the shapes of the both ends of each of the first slits. 20. A display device comprising:
a display panel comprising a first pixel, a second pixel and a first gate line, the first pixel comprising a first lower electrode and a first upper electrode overlapping the first lower electrode, the first upper electrode having a first slit pattern, the second pixel located in a first direction from the first pixel, the second pixel comprising a second lower electrode and a second upper electrode overlapping the second lower electrode, the second upper electrode having a second slit pattern extending in a direction different from a longitudinal direction of the first slit pattern, and the first gate line extending in a second direction different from the first direction; a gamma voltage generator configured to generate a first gamma reference voltage group and a second gamma reference voltage group, the first and the second gamma reference voltage groups having different voltage levels; a controller configured to output first and second pixel data corresponding to the first and the second pixels; and a data driver configured to convert the first pixel data to a first pixel voltage based on the first gamma reference voltage group, to convert the second pixel data to a second pixel voltage based on the second gamma reference voltage group, and to output the first pixel voltage and the second pixel voltage to the first pixel and the second pixel, respectively, wherein the first pixel is in a first row and the second pixel is in a second row adjacent to the first row, wherein the first row comprising the first pixel and the second row comprising the second pixel have different domains from each other, wherein the first gamma reference voltage group is a group of gamma voltages according to a first gamma curve for compensating for a variation of pixels having a same first domain in the first row and is applied to the first pixel in the first row and a first group of rows of pixels, and the second gamma reference voltage group is a group of gamma voltages according to 
a second gamma curve for compensating for a variation of pixels having a same second domain in the second row and is applied to the second pixel in the second row and a second group of rows of pixels alternating with the first group of rows of pixels, wherein the second gamma reference voltage group is different from the first gamma reference voltage group and the second gamma reference voltage group is not applied to pixels in the first group of rows of pixels, and wherein a luminance difference occurring between the adjacent first and second rows having the different domains is compensated by the first and second gamma reference voltage groups having the different voltage levels. | 2,600 |
10,716 | 10,716 | 15,999,355 | 2,636 | An optical system ( 100 ) comprising: a transmitter module ( 102 ) configured to transmit a sequence of optical pulses ( 300 ), each optical pulse in the sequence ( 300 ) having a different magnitude to each other optical pulse in the sequence ( 300 ); a receiver module ( 104 ) comprising one or more optical signal detectors, the receiver module ( 104 ) configured to receive the sequence of optical pulses ( 300 ) transmitted by the transmitter module ( 102 ); and one or more processors ( 110 ) configured to process the sequence of optical pulses received by the receiver module ( 104 ) to select an optical pulse from the received sequence of optical pulses ( 400 ) based on one or more predetermined criteria. The one or more predetermined criteria include a criterion that the selected optical pulse does not saturate the one or more optical signal detectors. | 1. An optical system comprising:
a transmitter module configured to transmit a sequence of optical pulses, each optical pulse in the sequence having a different magnitude to each other optical pulse in the sequence; a receiver module comprising one or more optical signal detectors, the receiver module configured to receive the sequence of optical pulses transmitted by the transmitter module; and one or more processors configured to process the sequence of optical pulses received by the receiver module to select an optical pulse from the received sequence of optical pulses based on one or more predetermined criteria; wherein the one or more predetermined criteria include a criterion that the selected optical pulse does not saturate the one or more optical signal detectors. 2. The optical system according to claim 1, wherein the receiver module comprises a photomultiplier detector for detecting the sequence of optical pulses transmitted by the transmitter module. 3. The optical system according to claim 1, wherein the one or more processors are configured to select, from those optical pulses having magnitudes within the linear range of the receiver, a pulse that has the largest magnitude. 4. The optical system according to claim 1, wherein the one or more processors are configured to select, from the received sequence of optical pulses, a largest magnitude optical pulse that does not saturate an optical signal detector of the receiver module. 5. The optical system according to claim 1, wherein the transmitted sequence of optical pulses is a sequence of optical pulses having decreasing or increasing magnitudes. 6. The optical system according to claim 5, wherein the transmitted sequence of optical pulses is a sequence of optical pulses having strictly decreasing magnitudes. 7. 
The optical system according to claim 1, wherein the one or more processors are further configured to determine data from the selected optical pulse, the data specifying a material property of one or more entities with which the selected optical pulse interacted between the transmitter module and the receiver module. 8. The optical system according to claim 1, further comprising means for transferring, from the one or more processors to the transmitter module, information specifying the selected optical pulse. 9. The optical system according to claim 8, wherein the transmitter module is further configured to, responsive to the transmitter module receiving the information specifying the selected optical pulse, transmit a further optical signal to the receiver module, the further optical signal having a magnitude substantially equal to that of the pulse from the transmitted sequence that corresponds to the selected optical pulse. 10. The optical system according to claim 1, wherein the optical system further comprises one or more objects disposed between the transmitter module and the receiver module, the one or more objects arranged to:
receive the sequence of optical pulses transmitted by the transmitter module; and reflect and/or scatter the received sequence of optical pulses to the receiver module. 11. The optical system according to claim 10, wherein the one or more objects comprises a retro-reflector. 12. The optical system according to claim 1, wherein the transmitter module comprises:
one or more transmitter lasers configured to generate a sequence of optical pulses; and a modulator configured to modulate the optical pulses generated by the one or more transmitter lasers to provide that each optical pulse in the sequence has a different magnitude to each other pulse in the sequence. 13. The optical system according to claim 1, further comprising:
one or more lasers configured to generate an optical signal; a first beam splitter; a second beam splitter; and an optical delay line; wherein
the one or more lasers are coupled to a first input of the first beam splitter such that the first beam splitter receives an optical signal from the one or more lasers;
the first beam splitter is configured to split an optical signal received at its first input between a first output of the first beam splitter and a second output of the first beam splitter;
an input of the second beam splitter is coupled to the first output of the first beam splitter;
the second beam splitter is configured to split an optical signal received at its input between a first output of the second beam splitter and a second output of the second beam splitter;
the transmitter module is coupled to the first output of the second beam splitter;
an input of the optical delay line is coupled to the second output of the second beam splitter;
an output of the optical delay line is coupled to a second input of the first beam splitter; and
the first beam splitter is configured to split an optical signal received at its second input between the first output of the first beam splitter and the second output of the first beam splitter. 14. The optical system according to claim 13, wherein a transmissivity of the first beam splitter is substantially equal to a transmissivity of the second beam splitter. 15. A method for performance by an optical system, the method comprising:
transmitting, by a transmitter module, a sequence of optical pulses, each optical pulse in the sequence having a different magnitude to each other optical pulse in the sequence; receiving, by a receiver module comprising one or more optical signal detectors, the sequence of optical pulses transmitted by the transmitter module; and processing, by one or more processors, the sequence of optical pulses received by the receiver module to select an optical pulse from the received sequence of optical pulses based on one or more predetermined criteria; wherein the one or more predetermined criteria include a criterion that the selected optical pulse does not saturate the one or more optical signal detectors. 16. The optical system according to claim 12, wherein the modulator is an external modulator. 17. The optical system according to claim 1, wherein the receiver module comprises a photomultiplier detector for detecting the sequence of optical pulses transmitted by the transmitter module, and wherein the one or more processors are configured to select, from those optical pulses having magnitudes within the linear range of the photomultiplier detector, a pulse that has the largest magnitude, wherein the selected largest magnitude optical pulse does not saturate the photomultiplier detector. 18. The optical system according to claim 1, further comprising a communications link for transferring, from the one or more processors to the transmitter module, information specifying the selected optical pulse. 19. The optical system according to claim 18, wherein the communications link is an optical communications link. 20. The optical system according to claim 18, wherein the communications link is a wireless communications link. 
| An optical system ( 100 ) comprising: a transmitter module ( 102 ) configured to transmit a sequence of optical pulses ( 300 ), each optical pulse in the sequence ( 300 ) having a different magnitude to each other optical pulse in the sequence ( 300 ); a receiver module ( 104 ) comprising one or more optical signal detectors, the receiver module ( 104 ) configured to receive the sequence of optical pulses ( 300 ) transmitted by the transmitter module ( 102 ); and one or more processors ( 110 ) configured to process the sequence of optical pulses received by the receiver module ( 104 ) to select an optical pulse from the received sequence of optical pulses ( 400 ) based on one or more predetermined criteria. The one or more predetermined criteria include a criterion that the selected optical pulse does not saturate the one or more optical signal detectors. 1. An optical system comprising:
a transmitter module configured to transmit a sequence of optical pulses, each optical pulse in the sequence having a different magnitude to each other optical pulse in the sequence; a receiver module comprising one or more optical signal detectors, the receiver module configured to receive the sequence of optical pulses transmitted by the transmitter module; and one or more processors configured to process the sequence of optical pulses received by the receiver module to select an optical pulse from the received sequence of optical pulses based on one or more predetermined criteria; wherein the one or more predetermined criteria include a criterion that the selected optical pulse does not saturate the one or more optical signal detectors. 2. The optical system according to claim 1, wherein the receiver module comprises a photomultiplier detector for detecting the sequence of optical pulses transmitted by the transmitter module. 3. The optical system according to claim 1, wherein the one or more processors are configured to select, from those optical pulses having magnitudes within the linear range of the receiver, a pulse that has the largest magnitude. 4. The optical system according to claim 1, wherein the one or more processors are configured to select, from the received sequence of optical pulses, a largest magnitude optical pulse that does not saturate an optical signal detector of the receiver module. 5. The optical system according to claim 1, wherein the transmitted sequence of optical pulses is a sequence of optical pulses having decreasing or increasing magnitudes. 6. The optical system according to claim 5, wherein the transmitted sequence of optical pulses is a sequence of optical pulses having strictly decreasing magnitudes. 7. 
The optical system according to claim 1, wherein the one or more processors are further configured to determine data from the selected optical pulse, the data specifying a material property of one or more entities with which the selected optical pulse interacted between the transmitter module and the receiver module. 8. The optical system according to claim 1, further comprising means for transferring, from the one or more processors to the transmitter module, information specifying the selected optical pulse. 9. The optical system according to claim 8, wherein the transmitter module is further configured to, responsive to the transmitter module receiving the information specifying the selected optical pulse, transmit a further optical signal to the receiver module, the further optical signal having a magnitude substantially equal to the pulse from the transmitted sequence that corresponds to the selected optical pulse. 10. The optical system according to claim 1, wherein the optical system further comprises one or more objects disposed between the transmitter module and the receiver module, the one or more objects arranged to:
receive the sequence of optical pulses transmitted by the transmitter module; and reflect and/or scatter the received sequence of optical pulses to the receiver module. 11. The optical system according to claim 10, wherein the one or more objects comprises a retro-reflector. 12. The optical system according to claim 1, wherein the transmitter module comprises:
one or more transmitter lasers configured to generate a sequence of optical pulses; and a modulator configured to modulate the optical pulses generated by the one or more transmitter lasers to provide that each optical pulse in the sequence has a different magnitude to each other pulse in the sequence. 13. The optical system according to claim 1, further comprising:
one or more lasers configured to generate an optical signal; a first beam splitter; a second beam splitter; and an optical delay line; wherein
the one or more lasers are coupled to a first input of the first beam splitter such that the first beam splitter receives an optical signal from the one or more lasers;
the first beam splitter is configured to split an optical signal received at its first input between a first output of the first beam splitter and a second output of the first beam splitter;
an input of the second beam splitter is coupled to the first output of the first beam splitter;
the second beam splitter is configured to split an optical signal received at its input between a first output of the second beam splitter and a second output of the second beam splitter;
the transmitter module is coupled to the first output of the second beam splitter;
an input of the optical delay line is coupled to the second output of the second beam splitter;
an output of the optical delay line is coupled to a second input of the first beam splitter; and
the first beam splitter is configured to split an optical signal received at its second input between the first output of the first beam splitter and the second output of the first beam splitter. 14. The optical system according to claim 13, wherein a transmissivity of the first beam splitter is substantially equal to a transmissivity of the second beam splitter. 15. A method for performance by an optical system, the method comprising:
transmitting, by a transmitter module, a sequence of optical pulses, each optical pulse in the sequence having a different magnitude to each other optical pulse in the sequence; receiving, by a receiver module comprising one or more optical signal detectors, the sequence of optical pulses transmitted by the transmitter module; and processing, by one or more processors, the sequence of optical pulses received by the receiver module to select an optical pulse from the received sequence of optical pulses based on one or more predetermined criteria; wherein the one or more predetermined criteria include a criterion that the selected optical pulse does not saturate the one or more optical signal detectors. 16. The optical system according to claim 12, wherein the modulator is an external modulator. 17. The optical system according to claim 1, wherein the receiver module comprises a photomultiplier detector for detecting the sequence of optical pulses transmitted by the transmitter module, and wherein the one or more processors are configured to select, from those optical pulses having magnitudes within linear range of the photomultiplier detector, a pulse that has the largest magnitude, wherein the selected largest magnitude optical pulse does not saturate the photomultiplier detector. 18. The optical system according to claim 1, further comprising a communications link for transferring, from the one or more processors to the transmitter module, information specifying the selected optical pulse. 19. The optical system according to claim 18, wherein the communications link is an optical communications link. 20. The optical system according to claim 18, wherein the communications link is a wireless communications link. | 2,600 |
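The pulse-selection criterion recited in claims 3 through 6 above can be illustrated with a short sketch: from a transmitted sequence of strictly decreasing pulse magnitudes (such as the attenuated round trips produced by the recirculating delay line of claim 13), the processors pick the largest received pulse that stays inside the detector's linear, non-saturating range. This is a toy illustration, not part of the patent record; the saturation threshold, units, and function names are invented for the example.

```python
# Toy model of the claimed selection criterion: choose the largest pulse
# that does not saturate the optical signal detector. The threshold value
# and its units are assumptions made for this sketch.

SATURATION_LEVEL = 1.0  # detector saturates at or above this magnitude (assumed)

def select_pulse(received_magnitudes):
    """Return (index, magnitude) of the largest non-saturating pulse,
    or None if every pulse in the sequence saturates the detector."""
    candidates = [(i, m) for i, m in enumerate(received_magnitudes)
                  if m < SATURATION_LEVEL]
    if not candidates:
        return None
    return max(candidates, key=lambda item: item[1])

# A strictly decreasing sequence, as in claim 6: each recirculation of the
# delay line attenuates the pulse by the beam splitters' transmissivity.
pulses = [1.8, 0.9, 0.45, 0.225]
print(select_pulse(pulses))  # → (1, 0.9): pulse 0 saturates, pulse 1 is the largest usable
```

Because the transmitted magnitudes strictly decrease, the first pulse below the saturation level is automatically the largest usable one, which is why claims 5 and 6 constrain the ordering of the sequence.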
10,717 | 10,717 | 15,788,886 | 2,688 | A magnetic recording head including a trailing surface and a plurality of bond pads in a row, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface. Each bond pad includes two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one side edge of each of two adjacent bond pads, and a top edge extending between the two side edges. The head further includes at least one solder dam including a nonwettable, electrically conductive solder material positioned adjacent to the top edge of at least one of the bond pads. | 1. A magnetic recording head comprising:
a body comprising a trailing surface; a plurality of bond pads in a row, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface, wherein each bond pad comprises:
two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one side edge of each of two adjacent bond pads; and
a top edge extending between the two side edges; and
at least one solder dam comprising a nonwettable, electrically conductive material positioned adjacent to the top edge of at least one of the bond pads. 2. The magnetic recording head of claim 1, wherein the at least one solder dam comprises a plurality of solder dams, and wherein each of the solder dams is positioned adjacent to the top edge of one of the plurality of bond pads. 3. The magnetic recording head of claim 1, wherein each solder dam comprises a width that is at least as large as the width of the bond pad to which it is adjacently positioned. 4. The magnetic recording head of claim 1, wherein the at least one solder dam comprises a material selected from a group including rhodium, osmium, titanium, tantalum, aluminum, nickel, diamond-like carbon, stainless steel, and alloys of one or more materials of the group. 5. The magnetic recording head of claim 1, wherein at least one of the solder dams extends above a top surface of the bond pad to which it is adjacent. 6. The magnetic recording head of claim 1, wherein at least one of the solder dams comprises a top surface that is in the same plane as the top surface of the bond pad to which it is adjacent. 7. The magnetic recording head of claim 1, wherein at least one bond pad comprises a top surface and a thickness defined by the thicknesses of multiple bond pad material layers. 8. The magnetic recording head of claim 7, wherein at least one of the solder dams comprises a top surface that is spaced from the top surface of the bond pad. 9. The magnetic recording head of claim 7, wherein the at least one bond pad comprises a recessed portion in which the at least one solder dam is positioned. 10. A head gimbal assembly, comprising:
a suspension comprising multiple electrical pads; and a magnetic recording head comprising:
a body comprising a trailing surface;
a plurality of bond pads in a row on the trailing surface, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface, wherein each bond pad comprises:
two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one of the side edges of each of two adjacent bond pads; and
a top edge extending between the two side edges;
at least one solder dam comprising nonwettable, electrically conductive solder material and positioned adjacent to the top edge of one of the bond pads; and
at least one solder joint having a top edge adjacent to the at least one solder dam, wherein each solder joint electrically connects one of the bond pads to one of the electrical pads of the suspension. 11. The head gimbal assembly of claim 10, wherein each solder dam comprises a width that is at least as large as a width of the top edge of the solder joint. 12. The head gimbal assembly of claim 10, wherein each solder dam comprises a width that is at least as large as the width of the bond pad to which it is adjacently positioned. 13. The head gimbal assembly of claim 10, wherein the at least one solder dam comprises a material selected from a group including rhodium, osmium, titanium, tantalum, aluminum, nickel, diamond-like carbon, and alloys of one or more materials of the group. 14. The head gimbal assembly of claim 10, wherein the at least one solder dam comprises a material selected from a group including rhodium, osmium, titanium, tantalum, aluminum, nickel, diamond-like carbon, stainless steel, and alloys of one or more materials of the group. 15. The head gimbal assembly of claim 10, wherein at least one of the solder dams extends above a top surface of the bond pad to which it is adjacent. 16. A method of controlling a shape and size of at least one solder joint in a head gimbal assembly using a solder dam adjacent to a bond pad to limit the wetting height, comprising the steps of:
positioning a suspension comprising multiple electrical pads adjacent to a magnetic recording head, the magnetic recording head comprising:
a trailing surface;
a plurality of bond pads in a row on the trailing surface, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface, wherein each bond pad comprises:
two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one of the side edges of each of two adjacent bond pads; and
a top edge extending between the two side edges; and
at least one solder dam comprising nonwettable, electrically conductive solder material and positioned adjacent to the top edge of one of the bond pads;
and
electrically connecting one of the plurality of bond pads of the recording head to one of the multiple electrical pads of the suspension by forming a solder joint having a top edge adjacent to the at least one solder dam. 17. The method of claim 16, wherein the at least one solder dam comprises a plurality of solder dams, each of which is positioned adjacent to the top edge of one of the plurality of bond pads, and wherein the step of electrically connecting bond pads comprises electrically connecting multiple bond pads to multiple respective electrical pads of the suspension. 18. The method of claim 16, wherein each solder dam comprises a width that is at least as large as the width of the bond pad to which it is adjacently positioned. | A magnetic recording head including a trailing surface and a plurality of bond pads in a row, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface. Each bond pad includes two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one side edge of each of two adjacent bond pads, and a top edge extending between the two side edges. The head further includes at least one solder dam including a nonwettable, electrically conductive solder material positioned adjacent to the top edge of at least one of the bond pads.1. A magnetic recording head comprising:
a body comprising a trailing surface; a plurality of bond pads in a row, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface, wherein each bond pad comprises:
two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one side edge of each of two adjacent bond pads; and
a top edge extending between the two side edges; and
at least one solder dam comprising a nonwettable, electrically conductive material positioned adjacent to the top edge of at least one of the bond pads. 2. The magnetic recording head of claim 1, wherein the at least one solder dam comprises a plurality of solder dams, and wherein each of the solder dams is positioned adjacent to the top edge of one of the plurality of bond pads. 3. The magnetic recording head of claim 1, wherein each solder dam comprises a width that is at least as large as the width of the bond pad to which it is adjacently positioned. 4. The magnetic recording head of claim 1, wherein the at least one solder dam comprises a material selected from a group including rhodium, osmium, titanium, tantalum, aluminum, nickel, diamond-like carbon, stainless steel, and alloys of one or more materials of the group. 5. The magnetic recording head of claim 1, wherein at least one of the solder dams extends above a top surface of the bond pad to which it is adjacent. 6. The magnetic recording head of claim 1, wherein at least one of the solder dams comprises a top surface that is in the same plane as the top surface of the bond pad to which it is adjacent. 7. The magnetic recording head of claim 1, wherein at least one bond pad comprises a top surface and a thickness defined by the thicknesses of multiple bond pad material layers. 8. The magnetic recording head of claim 7, wherein at least one of the solder dams comprises a top surface that is spaced from the top surface of the bond pad. 9. The magnetic recording head of claim 7, wherein the at least one bond pad comprises a recessed portion in which the at least one solder dam is positioned. 10. A head gimbal assembly, comprising:
a suspension comprising multiple electrical pads; and a magnetic recording head comprising:
a body comprising a trailing surface;
a plurality of bond pads in a row on the trailing surface, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface, wherein each bond pad comprises:
two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one of the side edges of each of two adjacent bond pads; and
a top edge extending between the two side edges;
at least one solder dam comprising nonwettable, electrically conductive solder material and positioned adjacent to the top edge of one of the bond pads; and
at least one solder joint having a top edge adjacent to the at least one solder dam, wherein each solder joint electrically connects one of the bond pads to one of the electrical pads of the suspension. 11. The head gimbal assembly of claim 10, wherein each solder dam comprises a width that is at least as large as a width of the top edge of the solder joint. 12. The head gimbal assembly of claim 10, wherein each solder dam comprises a width that is at least as large as the width of the bond pad to which it is adjacently positioned. 13. The head gimbal assembly of claim 10, wherein the at least one solder dam comprises a material selected from a group including rhodium, osmium, titanium, tantalum, aluminum, nickel, diamond-like carbon, and alloys of one or more materials of the group. 14. The head gimbal assembly of claim 10, wherein the at least one solder dam comprises a material selected from a group including rhodium, osmium, titanium, tantalum, aluminum, nickel, diamond-like carbon, stainless steel, and alloys of one or more materials of the group. 15. The head gimbal assembly of claim 10, wherein at least one of the solder dams extends above a top surface of the bond pad to which it is adjacent. 16. A method of controlling a shape and size of at least one solder joint in a head gimbal assembly using a solder dam adjacent to a bond pad to limit the wetting height, comprising the steps of:
positioning a suspension comprising multiple electrical pads adjacent to a magnetic recording head, the magnetic recording head comprising:
a trailing surface;
a plurality of bond pads in a row on the trailing surface, each of which is spaced by a gap from an adjacent bond pad along a width of the trailing surface, wherein each bond pad comprises:
two side edges spaced from each other across a width of the bond pad, wherein a width of the gap between adjacent bond pads is defined by one of the side edges of each of two adjacent bond pads; and
a top edge extending between the two side edges; and
at least one solder dam comprising nonwettable, electrically conductive solder material and positioned adjacent to the top edge of one of the bond pads;
and
electrically connecting one of the plurality of bond pads of the recording head to one of the multiple electrical pads of the suspension by forming a solder joint having a top edge adjacent to the at least one solder dam. 17. The method of claim 16, wherein the at least one solder dam comprises a plurality of solder dams, each of which is positioned adjacent to the top edge of one of the plurality of bond pads, and wherein the step of electrically connecting bond pads comprises electrically connecting multiple bond pads to multiple respective electrical pads of the suspension. 18. The method of claim 16, wherein each solder dam comprises a width that is at least as large as the width of the bond pad to which it is adjacently positioned. | 2,600 |
10,718 | 10,718 | 15,544,911 | 2,611 | Examples relate to managing a display input. An example system to manage a display input is provided herein. Management of display input includes a determination of a display mode selected. Management of display input also includes control of connections and transfer of data between an external device and an internal device. Management of display input further includes adjustment of a display setting based on display mode. | 1. A system to manage a display input comprising:
a control engine to determine a display mode selected; a connection engine to control connections and transfer of data between an external device and an internal device; and a display engine to adjust a display setting based on the display mode. 2. The system of claim 1, wherein the display engine is further to adjust the display setting based on the transfer of data between the internal device and the external device. 3. The system of claim 1, wherein the control engine is to provide status notifications. 4. The system of claim 1, wherein the control engine is to select the display mode from at least one mode selected from an internal display mode, an external display mode, and a multiple display mode. 5. The system of claim 1, wherein the control engine is connected to a toggle button and receives signals from the toggle button. 6. A computer-readable storage medium encoded with instructions that, when executed by a processor, cause the processor to:
determine a display mode selected based on an input; manage a plurality of input connections based on the display mode, management includes:
identifying external devices connected to an internal device,
determining a display status for each of the plurality of input connections, and
monitoring data transfer between the external device and
the internal device; and adjust a display setting based on
the display mode selected, and
the display status of each of the plurality of input connections. 7. The computer-readable storage medium of claim 6, wherein the input connections include at least one connection selected from a display port, a video graphics array (VGA) port, and a High-Definition Multimedia Interface (HDMI) port. 8. The computer-readable storage medium of claim 6, wherein the input connections include at least one wireless connection selected from a Wireless Display (WiDi), Wireless Gigabit (WiGig), or wireless local area network (WLAN) (such as Wi-Fi) connection. 9. The computer-readable storage medium of claim 6, wherein the input connections use a loop to determine the display mode. 10. A method to display content comprising:
determining a display mode using a loop to monitor the display mode; managing input connections using a connection controller to
manage connections and a display status of an internal device and an external device, and
transfer data between the external device and the internal device based on user input; and
adjusting a set of display settings based on the display mode and the display status of the internal device and the external device. 11. The method of claim 10, further comprising prompting a user to select the display mode when the external device is connected. 12. The method of claim 10, further comprising querying each input port to manage connections. 13. The method of claim 10, wherein the display settings divide a portion of a display device to display content from at least two devices. 14. The method of claim 10, wherein the set of display settings include settings for an internal display mode, an external display mode, and a multiple display mode. 15. The method of claim 10, further comprising displaying notifications regarding status of external devices. | Examples relate to managing a display input. An example system to manage a display input is provided herein. Management of display input includes a determination of a display mode selected. Management of display input also includes control of connections and transfer of data between an external device and an internal device. Management of display input further includes adjustment of a display setting based on display mode.1. A system to manage a display input comprising:
a control engine to determine a display mode selected; a connection engine to control connections and transfer of data between an external device and an internal device; and a display engine to adjust a display setting based on the display mode. 2. The system of claim 1, wherein the display engine is further to adjust the display setting based on the transfer of data between the internal device and the external device. 3. The system of claim 1, wherein the control engine is to provide status notifications. 4. The system of claim 1, wherein the control engine is to select the display mode from at least one mode selected from an internal display mode, an external display mode, and a multiple display mode. 5. The system of claim 1, wherein the control engine is connected to a toggle button and receives signals from the toggle button. 6. A computer-readable storage medium encoded with instructions that, when executed by a processor, cause the processor to:
determine a display mode selected based on an input; manage a plurality of input connections based on the display mode, management includes:
identifying external devices connected to an internal device,
determining a display status for each of the plurality of input connections, and
monitoring data transfer between the external device and
the internal device; and adjust a display setting based on
the display mode selected, and
the display status of each of the plurality of input connections. 7. The computer-readable storage medium of claim 6, wherein the input connections include at least one connection selected from a display port, a video graphics array (VGA) port, and a High-Definition Multimedia Interface (HDMI) port. 8. The computer-readable storage medium of claim 6, wherein the input connections include at least one wireless connection selected from a Wireless Display (WiDi), Wireless Gigabit (WiGig), or wireless local area network (WLAN) (such as Wi-Fi) connection. 9. The computer-readable storage medium of claim 6, wherein the input connections use a loop to determine the display mode. 10. A method to display content comprising:
determining a display mode using a loop to monitor the display mode; managing input connections using a connection controller to
manage connections and a display status of an internal device and an external device, and
transfer data between the external device and the internal device based on user input; and
adjusting a set of display settings based on the display mode and the display status of the internal device and the external device. 11. The method of claim 10, further comprising prompting a user to select the display mode when the external device is connected. 12. The method of claim 10, further comprising querying each input port to manage connections. 13. The method of claim 10, wherein the display settings divide a portion of a display device to display content from at least two devices. 14. The method of claim 10, wherein the set of display settings include settings for an internal display mode, an external display mode, and a multiple display mode. 15. The method of claim 10, further comprising displaying notifications regarding status of external devices. | 2,600 |
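The display-mode flow recited in claims 10 through 15 above (determine a mode, track connection status, and adjust display settings accordingly, including a split-screen "multiple" mode per claim 13) can be sketched as follows. This is an invented illustration, not the patent's implementation; the mode names, dictionary keys, and function names are assumptions made for the example.

```python
# Illustrative stand-in for the claimed settings adjustment: the selected
# display mode and the external-device connection status together determine
# the display settings. All identifiers here are hypothetical.

MODES = ("internal", "external", "multiple")

def adjust_settings(mode, external_connected):
    """Derive display settings from the display mode and connection status."""
    if mode not in MODES:
        raise ValueError(f"unknown display mode: {mode}")
    if mode == "internal" or not external_connected:
        # Fall back to the internal display when no external device is attached.
        return {"target": ["internal"], "split_screen": False}
    if mode == "external":
        return {"target": ["external"], "split_screen": False}
    # "multiple": divide the display between content from both devices (claim 13).
    return {"target": ["internal", "external"], "split_screen": True}

print(adjust_settings("multiple", external_connected=True))
# → {'target': ['internal', 'external'], 'split_screen': True}
```

A monitoring loop, as in claim 10, would simply re-invoke such a function whenever the mode selection or the connection status reported by the connection controller changes.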
10,719 | 10,719 | 16,133,010 | 2,677 | A system is provided. The system comprises at least one artificial neural network configured to: receive an audio signal; for a time period, determine if at least one human voice audio spectrum is in the audio signal; for the time period, identify at least one human voice audio power spectrum; for the time period, extract each of the at least one identified human voice audio power spectrum; remove artifacts from each extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and transmit the synthesized estimation of an original human voice. | 1. A system, comprising:
at least one artificial neural network configured to:
receive an audio signal;
determine that at least one human voice audio power spectrum is in the audio signal during a time period;
upon determining that at least one human voice audio power spectrum is in the audio signal, then identify, using known human voice audio power spectra, at least one human voice audio power spectrum in the audio signal during the time period;
extract the at least one identified human voice audio power spectrum;
remove artifacts from each of the at least one extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and
transmit the synthesized estimation of an original human voice. 2. The system of claim 1, further comprising a receiver coupled to the at least one artificial neural network, and configured to provide the audio signal received by the at least one artificial neural network. 3. The system of claim 1, further comprising at least one input/output device coupled to the at least one artificial neural network, and configured to receive the transmitted synthesized estimation of the original human voice. 4. The system of claim 1, wherein the at least one input/output device comprises at least one of a speaker or a display. 5. The system of claim 1, wherein the determining if at least one human voice audio power spectrum is in the audio signal is performed by a recurrent neural network. 6. The system of claim 1, wherein the identifying and the extracting of at least one identified human voice audio power spectrum is performed by a modified convolutional neural network. 7. The system of claim 1, wherein the removing of artifacts is performed by a generative adversarial neural network. 8. The system of claim 1, wherein the at least one artificial neural network is configured to be trained with clean and noisy human voice audio training samples prior to receiving an audio signal that is not a training sample. 9. A method, comprising:
receiving an audio signal; determining that at least one human voice audio power spectrum is in the audio signal during a time period; upon determining that at least one human voice audio power spectrum is in the audio signal, then identifying at least one human voice audio power spectrum in the audio signal during the time period; extracting the at least one identified human voice audio power spectrum; removing artifacts from each of the at least one extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and transmitting the synthesized estimation of the original human voice. 10. The method of claim 9, wherein the determining if at least one human voice audio power spectrum is in the audio signal is performed by a recurrent neural network. 11. The method of claim 9, wherein the identifying and extracting of at least one human voice power spectrum is performed by a modified convolutional neural network. 12. The method of claim 9, wherein the removing of artifacts is performed by a generative adversarial neural network. 13. The method of claim 9, further comprising training with clean and noisy human voice audio samples prior to receiving an audio signal that is not a training sample. 14. The method of claim 9, further comprising generating signals configured to be used to at least one of:
emit sound of the synthesized estimation of the original human voice; and display the synthesized estimation of the original human voice. 15. A method, comprising:
receiving an audio signal; determining if the frequency spectrum of the audio signal includes at least one human voice audio power spectrum; and upon determining that the frequency spectrum includes at least one human voice audio power spectrum, then:
partitioning the frequency spectrum into sub-bands;
identifying, using known human voice audio power spectra, at least one human voice audio power spectrum in at least one sub-band that is correlated to a human voice model;
extracting each of the at least one identified human voice audio power spectrum; and
modifying each of the at least one extracted human voice audio power spectrum to match a corresponding correlated human voice model. 16. The method of claim 15, wherein the determining if at least one human voice audio power spectrum is in the audio signal is performed by a recurrent neural network. 17. The method of claim 15, wherein the identifying and extracting of at least one human voice power spectrum is performed by a modified convolutional neural network. 18. The method of claim 15, wherein the removing of artifacts is performed by a generative adversarial neural network. 19. The method of claim 15, further comprising training with clean and noisy human voice audio samples prior to receiving an audio signal that is not a training sample. 20. The method of claim 15, further comprising generating signals configured to be used to at least one of:
emit sound of the synthesized estimation of the original human voice; and display the synthesized estimation of the original human voice. | A system is provided. The system comprises at least one artificial neural network configured to: receive an audio signal; for a time period, determine if at least one human voice audio spectrum is in the audio signal; for the time period, identify at least one human voice audio power spectrum; for the time period, extract each of the at least one identified human voice audio power spectrum; remove artifacts from each extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and transmit the synthesized estimation of an original human voice.1. A system, comprising:
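The staged pipeline recited above (detect whether a voice power spectrum is present, then identify and extract it from the signal's sub-bands) can be sketched with simple spectral heuristics. The patent claims trained neural networks for these stages (a recurrent network for detection, a modified convolutional network for extraction, a generative adversarial network for artifact removal); this toy version replaces all of them with FFT-based stand-ins, and the voice band limits and threshold are assumptions.

```python
# Toy stand-ins for the claimed pipeline stages, using spectral heuristics
# in place of the trained neural networks the claims actually recite.
import numpy as np

VOICE_BAND_HZ = (85.0, 3400.0)  # assumed frequency band for human voice energy

def power_spectrum(signal, sample_rate):
    """Return (frequencies, power) of the signal's one-sided spectrum."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs, np.abs(spectrum) ** 2

def detect_voice(signal, sample_rate, threshold=1.0):
    """Stage-1 stand-in: is there meaningful power inside the voice band?"""
    freqs, power = power_spectrum(signal, sample_rate)
    band = (freqs >= VOICE_BAND_HZ[0]) & (freqs <= VOICE_BAND_HZ[1])
    return power[band].sum() > threshold

def extract_voice_band(signal, sample_rate):
    """Stage-2 stand-in: zero every spectral bin outside the voice band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= VOICE_BAND_HZ[0]) & (freqs <= VOICE_BAND_HZ[1])
    return np.fft.irfft(np.where(band, spectrum, 0), n=len(signal))

rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 220 * t)          # a 220 Hz "voice" component
noise = 0.1 * np.sin(2 * np.pi * 3900 * t)  # out-of-band interference
print(detect_voice(tone + noise, rate))      # → True
```

The claimed system goes further than this sketch: rather than merely band-limiting, its generative network synthesizes an estimate of the original voice before distortion.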
at least one artificial neural network configured to:
receive an audio signal;
determine that at least one human voice audio power spectrum is in the audio signal during a time period;
upon determining that at least one human voice audio power spectrum is in the audio signal, then identify, using known human voice audio power spectra, at least one human voice audio power spectrum in the audio signal during the time period;
extract the at least one identified human voice audio power spectrum;
remove artifacts from each of the at least one extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and
transmit the synthesized estimation of an original human voice. 2. The system of claim 1, further comprising a receiver coupled to the at least one artificial neural network, and configured to provide the audio signal received by the at least one artificial neural network. 3. The system of claim 1, further comprising at least one input/output device coupled to the at least one artificial neural network, and configured to receive the transmitted synthesized estimation of the original human voice. 4. The system of claim 1, wherein the at least one input/output device comprises at least one of a speaker or a display. 5. The system of claim 1, wherein the determining if at least one human voice audio power spectrum is in the audio signal is performed by a recurrent neural network. 6. The system of claim 1, wherein the identifying and the extracting of at least one identified human voice audio power spectrum is performed by a modified convolutional neural network. 7. The system of claim 1, wherein the removing of artifacts is performed by a generative adversarial neural network. 8. The system of claim 1, wherein the at least one artificial neural network is configured to be trained with clean and noisy human voice audio training samples prior to receiving an audio signal that is not a training sample. 9. A method, comprising:
receiving an audio signal; determining that at least one human voice audio power spectrum is in the audio signal during a time period; upon determining that at least one human voice audio power spectrum is in the audio signal, then identifying at least one human voice audio power spectrum in the audio signal during the time period; extracting the at least one identified human voice audio power spectrum; removing artifacts from each of the at least one extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and transmitting the synthesized estimation of the original human voice. 10. The method of claim 9, wherein the determining if at least one human voice audio power spectrum is in the audio signal is performed by a recurrent neural network. 11. The method of claim 9, wherein the identifying and extracting of at least one human voice power spectrum is performed by a modified convolutional neural network. 12. The method of claim 9, wherein the removing of artifacts is performed by a generative adversarial neural network. 13. The method of claim 9, further comprising training with clean and noisy human voice audio samples prior to receiving an audio signal that is not a training sample. 14. The method of claim 9, further comprising generating signals configured to be used to at least one of:
emit sound of the synthesized estimation of the original human voice; and display the synthesized estimation of the original human voice. 15. A method, comprising:
receiving an audio signal; determining if the frequency spectrum of the audio signal includes at least one human voice audio power spectrum; and upon determining that the frequency spectrum includes at least one human voice audio power spectrum, then:
partitioning the frequency spectrum into sub-bands;
identifying, using known human voice audio power spectra, at least one human voice audio power spectrum in at least one sub-band that is correlated to a human voice model;
extracting each of the at least one identified human voice audio power spectrum; and
modifying each of the at least one extracted human voice audio power spectrum to match a corresponding correlated human voice model. 16. The method of claim 15, wherein the determining if at least one human voice audio power spectrum is in the audio signal is performed by a recurrent neural network. 17. The method of claim 15, wherein the identifying and extracting of at least one human voice power spectrum is performed by a modified convolutional neural network. 18. The method of claim 15, wherein the removing of artifacts is performed by a generative adversarial neural network. 19. The method of claim 15, further comprising training with clean and noisy human voice audio samples prior to receiving an audio signal that is not a training sample. 20. The method of claim 15, further comprising generating signals configured to be used to at least one of:
emit sound of the synthesized estimation of the original human voice; and display the synthesized estimation of the original human voice. | 2,600 |
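The method of claim 15 above describes a spectral pipeline: compute the frequency spectrum of an audio frame, partition it into sub-bands, then correlate each sub-band against known human-voice power spectra. The sketch below is only an illustration of that kind of pipeline, not the patented implementation; all function names, the sub-band count, and the correlation measure are hypothetical.

```python
import numpy as np

def partition_power_spectrum(signal, n_subbands=8):
    """Compute an audio frame's power spectrum and partition it into
    equal-width sub-bands (the partitioning step of claim 15)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array_split(spectrum, n_subbands)

def correlate_with_voice_model(subband, model):
    """Normalized cross-correlation between a sub-band and a stored
    human-voice power-spectrum model (a hypothetical similarity test)."""
    a = subband - subband.mean()
    b = model - model.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Toy usage: a 200 Hz tone stands in for a voiced frame.
sr = 8000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 200 * t)
subbands = partition_power_spectrum(frame)
print(len(subbands))  # 8 sub-bands
```

A sub-band correlated against itself scores 1.0, so in practice a threshold on this score would decide whether a sub-band "is correlated to a human voice model" in the sense of claim 15.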
10,720 | 10,720 | 13,392,858 | 2,626 | An input apparatus, which provides feedback to a user through a tactile sensation in response to a push operation and a slide operation to a touch sensor by the user, is provided. The control unit controls drive of a tactile sensation providing unit when the touch object slides on a touch face such that a tactile sensation of sliding is provided to a touch object and controls drive of the tactile sensation providing unit when a load detection unit detects a pressure load changing from a state failing to satisfy a predetermined standard load to a state satisfying it such that a tactile sensation of pushing, different from the tactile sensation of sliding, is provided to the touch object. | 1. An input apparatus comprising:
a touch sensor configured to detect a touch input; a tactile sensation providing unit configured to vibrate a touch face of the touch sensor; a load detection unit configured to detect a pressure load on the touch face; and a control unit configured to control drive of the tactile sensation providing unit such that a tactile sensation is provided to a touch object touching the touch face,
wherein
the control unit, when the touch object is sliding on the touch face, controls drive of the tactile sensation providing unit such that a tactile sensation of sliding is provided to the touch object and, when the load detection unit detects the pressure load changing from a state failing to satisfy a predetermined standard load to a state satisfying the predetermined standard load, controls drive of the tactile sensation providing unit such that a tactile sensation of pushing, different from the tactile sensation of sliding, is provided to the touch object. 2. The input apparatus according to claim 1, wherein the control unit controls drive of the tactile sensation providing unit such that the tactile sensation of pushing is provided to the touch object when the touch object is not sliding on the touch face. 3. The input apparatus according to claim 1 or 2, wherein the control unit controls drive of the tactile sensation providing unit such that the tactile sensation of sliding is provided to the touch object when the pressure load detected by the load detection unit satisfies a standard load lower than the predetermined standard load. 4. The input apparatus according to any one of claims 1 to 3, wherein, when the touch object stops sliding while the control unit is controlling drive of the tactile sensation providing unit such that the tactile sensation of sliding is provided to the touch object, and the load detection unit detects application of a predetermined pressure load in addition to a standard, which is the pressure load detected by the load detection unit when the touch object stops sliding, the control unit controls drive of the tactile sensation providing unit such that the tactile sensation of pushing is provided to the touch object. 5. The input apparatus according to any one of claims 1 to 4, further comprising a display unit configured to display an input object, wherein
the touch sensor detects the touch input to the display unit, and
the control unit controls drive of the tactile sensation providing unit, when the touch input to an area, where the input object is not displayed, is detected, such that the tactile sensation of sliding is provided, and controls drive of the tactile sensation providing unit, when the touch input to the input object is detected, such that the tactile sensation of pushing is provided. | An input apparatus, which provides feedback to a user through a tactile sensation in response to a push operation and a slide operation to a touch sensor by the user, is provided. The control unit controls drive of a tactile sensation providing unit when the touch object slides on a touch face such that a tactile sensation of sliding is provided to a touch object and controls drive of the tactile sensation providing unit when a load detection unit detects a pressure load changing from a state failing to satisfy a predetermined standard load to a state satisfying it such that a tactile sensation of pushing, different from the tactile sensation of sliding, is provided to the touch object.1. An input apparatus comprising:
a touch sensor configured to detect a touch input; a tactile sensation providing unit configured to vibrate a touch face of the touch sensor; a load detection unit configured to detect a pressure load on the touch face; and a control unit configured to control drive of the tactile sensation providing unit such that a tactile sensation is provided to a touch object touching the touch face,
wherein
the control unit, when the touch object is sliding on the touch face, controls drive of the tactile sensation providing unit such that a tactile sensation of sliding is provided to the touch object and, when the load detection unit detects the pressure load changing from a state failing to satisfy a predetermined standard load to a state satisfying the predetermined standard load, controls drive of the tactile sensation providing unit such that a tactile sensation of pushing, different from the tactile sensation of sliding, is provided to the touch object. 2. The input apparatus according to claim 1, wherein the control unit controls drive of the tactile sensation providing unit such that the tactile sensation of pushing is provided to the touch object when the touch object is not sliding on the touch face. 3. The input apparatus according to claim 1 or 2, wherein the control unit controls drive of the tactile sensation providing unit such that the tactile sensation of sliding is provided to the touch object when the pressure load detected by the load detection unit satisfies a standard load lower than the predetermined standard load. 4. The input apparatus according to any one of claims 1 to 3, wherein, when the touch object stops sliding while the control unit is controlling drive of the tactile sensation providing unit such that the tactile sensation of sliding is provided to the touch object, and the load detection unit detects application of a predetermined pressure load in addition to a standard, which is the pressure load detected by the load detection unit when the touch object stops sliding, the control unit controls drive of the tactile sensation providing unit such that the tactile sensation of pushing is provided to the touch object. 5. The input apparatus according to any one of claims 1 to 4, further comprising a display unit configured to display an input object, wherein
the touch sensor detects the touch input to the display unit, and
the control unit controls drive of the tactile sensation providing unit, when the touch input to an area, where the input object is not displayed, is detected, such that the tactile sensation of sliding is provided, and controls drive of the tactile sensation providing unit, when the touch input to the input object is detected, such that the tactile sensation of pushing is provided. | 2,600 |
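Claim 1 of this input apparatus describes two distinct feedback rules: a sliding touch gets a "sliding" tactile sensation, and a pressure load crossing from below the predetermined standard load to at-or-above it gets a distinct "pushing" sensation. A minimal control-logic sketch of those rules follows; the class name, threshold value, and sensation labels are illustrative assumptions, not taken from the patent.

```python
class TactileController:
    """Sketch of claim 1's control unit: emit 'slide' while the touch
    object slides, and emit 'push' only on the low-to-high transition
    of the pressure load across the standard-load threshold."""

    def __init__(self, standard_load=1.0):
        self.standard_load = standard_load
        self._load_satisfied = False  # was the threshold met last update?

    def update(self, sliding, load):
        sensations = []
        if sliding:
            sensations.append("slide")
        satisfied = load >= self.standard_load
        if satisfied and not self._load_satisfied:
            # Only the transition triggers the pushing sensation,
            # matching "changing from a state failing to satisfy ...
            # to a state satisfying" in claim 1.
            sensations.append("push")
        self._load_satisfied = satisfied
        return sensations

ctrl = TactileController(standard_load=1.0)
print(ctrl.update(sliding=True, load=0.2))   # light sliding touch
print(ctrl.update(sliding=False, load=1.2))  # threshold crossed
print(ctrl.update(sliding=False, load=1.3))  # still pressed: no repeat
```

Tracking the previous threshold state is what keeps the pushing sensation from repeating while the load merely remains above the standard load.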
10,721 | 10,721 | 15,957,153 | 2,632 | Coupled resonators for galvanically isolated signaling between integrated circuit modules. An illustrative system embodiment includes first and second integrated circuits. The first integrated circuit includes: a transmitter that produces a modulated carrier signal on a primary conductor; a first transfer conductor connected to a first connection terminal; and a first floating loop electromagnetically coupled to the primary conductor and to the transfer conductor to convey the modulated carrier. The second integrated circuit includes: a second transfer conductor connected to a second connection terminal, the second connection terminal being electrically connected to the first connection terminal; a receiver that demodulates the modulated carrier signal; and a second floating loop electromagnetically coupled to the second transfer conductor and to the receiver to convey the modulated carrier signal to the receiver. The first and second floating loops are each resonant at the carrier frequency to provide resonance-coupled signalling between the integrated circuits. | 1. A system for galvanically isolated signaling, comprising:
a first integrated circuit including:
a transmitter that modulates a carrier signal having a carrier frequency to produce a modulated carrier signal on a primary conductor;
a first transfer conductor connected to a first connection terminal; and
a first floating loop electromagnetically coupled to the primary conductor and electromagnetically coupled to the transfer conductor to convey the modulated carrier signal from the primary conductor to the transfer conductor,
wherein the first floating loop is resonant at the carrier frequency; and
a second integrated circuit including:
a second transfer conductor connected to a second connection terminal, the second connection terminal being electrically connected to the first connection terminal;
a receiver that demodulates the modulated carrier signal; and
a second floating loop electromagnetically coupled to the second transfer conductor and electromagnetically coupled to the receiver to convey the modulated carrier signal from the second transfer conductor to the receiver,
wherein the second floating loop is resonant at the carrier frequency. 2. The system of claim 1, wherein the first floating loop shares a common metallization layer with the primary conductor and the first transfer conductor, and wherein the second floating loop shares a common metallization layer with the second transfer conductor. 3. The system of claim 1, wherein the first and second floating loops each include an integrated metal-insulator-metal plate capacitor. 4. The system of claim 1, wherein the first and second transfer conductors are part of a closed, floating transfer loop. 5. The system of claim 4, wherein the first connection terminal and the second connection terminal are coupled via a bond wire. 6. The system of claim 4, wherein the transfer loop is not resonant at the carrier frequency. 7. The system of claim 1, wherein the transmitter is configured to receive a digital signal and to responsively produce pulses in the modulated carrier signal. 8. A method for galvanically isolated signaling, comprising:
equipping a first integrated circuit with a transmitter that modulates a carrier signal having a carrier frequency to produce a modulated carrier signal on a primary conductor; equipping a second integrated circuit with a receiver that demodulates the modulated carrier signal; and electromagnetically coupling the transmitter to the receiver using a resonantly-coupled signal path having:
a first floating loop that is resonant at the carrier frequency in the first integrated circuit;
a transfer loop that is not resonant at the carrier frequency; and
a second floating loop that is resonant at the carrier frequency in the second integrated circuit. 9. The method of claim 8, wherein the transfer loop includes a first transfer conductor in the first integrated circuit, the first transfer conductor sharing a common metallization layer with the first floating loop, and wherein the transfer loop includes a second transfer conductor in the second integrated circuit, the second transfer conductor sharing a common metallization layer with the second floating loop. 10. The method of claim 9, wherein the transfer loop further includes a first terminal connected to the first transfer conductor, a second terminal connected to the second transfer conductor, and a bond wire connecting the first terminal to the second terminal. 11. The method of claim 8, further comprising providing an integrated capacitor in each of the first and second floating loops to make them each resonant at the carrier frequency. 12. The method of claim 11, wherein the integrated capacitors are each metal-insulator-metal plate capacitors. 13. The method of claim 8, wherein the transmitter is configured to receive a digital signal and to responsively produce pulses in the modulated carrier signal. 14. An integrated circuit for galvanically isolated signaling via a connection terminal, the integrated circuit comprising:
a transfer conductor connected to the connection terminal; a transmitter that modulates a carrier signal having a carrier frequency to produce a modulated carrier signal; and a floating loop electromagnetically coupled to the transmitter and to the transfer conductor to convey the modulated carrier signal from the transmitter to the transfer conductor, wherein the floating loop is resonant at the carrier frequency. 15. The integrated circuit of claim 14, wherein said connection terminal is a first connection terminal connected to a second connection terminal by the transfer conductor, the first and second connection terminals being configured for electrically connecting to a remote pair of connection terminals which in turn are electrically connected to form a floating transfer loop. 16. The integrated circuit of claim 15, wherein the floating transfer loop is not resonant at the carrier frequency, and wherein the floating loop is configured to resonantly couple with a second floating loop via the transfer conductor, the second floating loop being resonant at the carrier frequency. 17. The integrated circuit of claim 15, wherein each of said first and second connection terminals comprises a bonding pad on a first substrate, and wherein said remote pair of connection terminals comprises a pair of bonding pads on a second substrate. 18. The integrated circuit of claim 14, wherein the transmitter is configured to receive a digital signal and to responsively produce pulses in the modulated carrier signal. 19. The integrated circuit of claim 14, wherein the floating loop includes an integrated metal-insulator-metal plate capacitor. | Coupled resonators for galvanically isolated signaling between integrated circuit modules. An illustrative system embodiment includes first and second integrated circuits. 
The first integrated circuit includes: a transmitter that produces a modulated carrier signal on a primary conductor; a first transfer conductor connected to a first connection terminal; and a first floating loop electromagnetically coupled to the primary conductor and to the transfer conductor to convey the modulated carrier. The second integrated circuit includes: a second transfer conductor connected to a second connection terminal, the second connection terminal being electrically connected to the first connection terminal; a receiver that demodulates the modulated carrier signal; and a second floating loop electromagnetically coupled to the second transfer conductor and to the receiver to convey the modulated carrier signal to the receiver. The first and second floating loops are each resonant at the carrier frequency to provide resonance-coupled signalling between the integrated circuits.1. A system for galvanically isolated signaling, comprising:
a first integrated circuit including:
a transmitter that modulates a carrier signal having a carrier frequency to produce a modulated carrier signal on a primary conductor;
a first transfer conductor connected to a first connection terminal; and
a first floating loop electromagnetically coupled to the primary conductor and electromagnetically coupled to the transfer conductor to convey the modulated carrier signal from the primary conductor to the transfer conductor,
wherein the first floating loop is resonant at the carrier frequency; and
a second integrated circuit including:
a second transfer conductor connected to a second connection terminal, the second connection terminal being electrically connected to the first connection terminal;
a receiver that demodulates the modulated carrier signal; and
a second floating loop electromagnetically coupled to the second transfer conductor and electromagnetically coupled to the receiver to convey the modulated carrier signal from the second transfer conductor to the receiver,
wherein the second floating loop is resonant at the carrier frequency. 2. The system of claim 1, wherein the first floating loop shares a common metallization layer with the primary conductor and the first transfer conductor, and wherein the second floating loop shares a common metallization layer with the second transfer conductor. 3. The system of claim 1, wherein the first and second floating loops each include an integrated metal-insulator-metal plate capacitor. 4. The system of claim 1, wherein the first and second transfer conductors are part of a closed, floating transfer loop. 5. The system of claim 4, wherein the first connection terminal and the second connection terminal are coupled via a bond wire. 6. The system of claim 4, wherein the transfer loop is not resonant at the carrier frequency. 7. The system of claim 1, wherein the transmitter is configured to receive a digital signal and to responsively produce pulses in the modulated carrier signal. 8. A method for galvanically isolated signaling, comprising:
equipping a first integrated circuit with a transmitter that modulates a carrier signal having a carrier frequency to produce a modulated carrier signal on a primary conductor; equipping a second integrated circuit with a receiver that demodulates the modulated carrier signal; and electromagnetically coupling the transmitter to the receiver using a resonantly-coupled signal path having:
a first floating loop that is resonant at the carrier frequency in the first integrated circuit;
a transfer loop that is not resonant at the carrier frequency; and
a second floating loop that is resonant at the carrier frequency in the second integrated circuit. 9. The method of claim 8, wherein the transfer loop includes a first transfer conductor in the first integrated circuit, the first transfer conductor sharing a common metallization layer with the first floating loop, and wherein the transfer loop includes a second transfer conductor in the second integrated circuit, the second transfer conductor sharing a common metallization layer with the second floating loop. 10. The method of claim 9, wherein the transfer loop further includes a first terminal connected to the first transfer conductor, a second terminal connected to the second transfer conductor, and a bond wire connecting the first terminal to the second terminal. 11. The method of claim 8, further comprising providing an integrated capacitor in each of the first and second floating loops to make them each resonant at the carrier frequency. 12. The method of claim 11, wherein the integrated capacitors are each metal-insulator-metal plate capacitors. 13. The method of claim 8, wherein the transmitter is configured to receive a digital signal and to responsively produce pulses in the modulated carrier signal. 14. An integrated circuit for galvanically isolated signaling via a connection terminal, the integrated circuit comprising:
a transfer conductor connected to the connection terminal; a transmitter that modulates a carrier signal having a carrier frequency to produce a modulated carrier signal; and a floating loop electromagnetically coupled to the transmitter and to the transfer conductor to convey the modulated carrier signal from the transmitter to the transfer conductor, wherein the floating loop is resonant at the carrier frequency. 15. The integrated circuit of claim 14, wherein said connection terminal is a first connection terminal connected to a second connection terminal by the transfer conductor, the first and second connection terminals being configured for electrically connecting to a remote pair of connection terminals which in turn are electrically connected to form a floating transfer loop. 16. The integrated circuit of claim 15, wherein the floating transfer loop is not resonant at the carrier frequency, and wherein the floating loop is configured to resonantly couple with a second floating loop via the transfer conductor, the second floating loop being resonant at the carrier frequency. 17. The integrated circuit of claim 15, wherein each of said first and second connection terminals comprises a bonding pad on a first substrate, and wherein said remote pair of connection terminals comprises a pair of bonding pads on a second substrate. 18. The integrated circuit of claim 14, wherein the transmitter is configured to receive a digital signal and to responsively produce pulses in the modulated carrier signal. 19. The integrated circuit of claim 14, wherein the floating loop includes an integrated metal-insulator-metal plate capacitor. | 2,600 |
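The claims above repeatedly require each floating loop to be "resonant at the carrier frequency," which for a loop with inductance L and an integrated capacitor C follows the standard LC relation f = 1/(2π√(LC)). The numbers below are purely illustrative (the patent gives no component values); the sketch just shows how a loop capacitance could be sized for a chosen carrier.

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """LC resonance: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def capacitance_for_resonance(inductance_h, carrier_hz):
    """Capacitance that makes a loop of inductance L resonant at f."""
    return 1.0 / (inductance_h * (2.0 * math.pi * carrier_hz) ** 2)

# Illustrative values only, not from the patent:
L_loop = 2e-9      # assume a 2 nH floating-loop inductance
f_carrier = 10e9   # assume a 10 GHz carrier
C_needed = capacitance_for_resonance(L_loop, f_carrier)
print(resonant_frequency(L_loop, C_needed))  # ≈ 1e10 Hz
```

Claims 6 and 16 additionally require the transfer loop itself *not* to be resonant at the carrier, so only the two floating loops satisfy this relation at the carrier frequency.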
10,722 | 10,722 | 16,184,083 | 2,613 | A mechanism is provided for implementing an augmented reality display via a head mounted display (HMD) system that indicates areas of a patient's body corresponding to a medical condition and/or treatment of the patient overlaid on the actual view of the patient. A real-time image of an area of a patient's body being viewed by a medical professional is captured via the HMD system. One or more body parts of the patient are identified within the real-time image. The one or more identified body parts are correlated with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient. An augmented reality display is then generated in the HMD system of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body. | 1-20. (canceled) 21. A method, in a data processing system comprising at least one processor and at least one memory, the at least one memory comprising instructions executed by the at least one processor to cause the at least one processor to implement a cognitive healthcare system, wherein the cognitive healthcare system operates to:
capturing, by a capturing mechanism of the cognitive healthcare system, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identifying, by the cognitive healthcare system, one or more body parts of the patient within the real-time image; correlating, by the cognitive healthcare system, the one or more identified body parts with the patient's electronic medical records (EMRs) indicating a medical condition of the patient, wherein the patient's electronic medical records (EMRs) are correlated to the patient by either:
the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient; or
the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient; and
generating, by the cognitive healthcare system, an augmented reality display, in the HMD system, of one or more areas of the patient's body affected by or needing to be further investigated with regard to the medical condition by overlaying the one or more areas of the patient's body affected by or needing to be further investigated with regard to the medical condition over the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the cognitive healthcare system:
accesses a schedule of the medical professional through a medical professional corpus or corpora of data;
determines an amount of time the medical professional has to spend with the patient; and
displays the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 22. The method of claim 21, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 23. The method of claim 21, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 24. The method of claim 21, wherein the cognitive healthcare system further:
captures a facial expression of the patient; captures one or more audible utterances of the patient; identifies a mood of the patient using the captured facial expression and the one or more audible utterances; and displays via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the patient's mood. 25. The method of claim 21, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 26. The method of claim 21, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. 27. (canceled) 28. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:
capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating a medical condition of the patient, wherein the patient's electronic medical records (EMRs) are correlated to the patient by either:
the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient; or
the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient; and
generate an augmented reality display, in the HMD system, of one or more areas of the patient's body affected by or needing to be further investigated with regard to the medical condition by overlaying the one or more areas of the patient's body affected by or needing to be further investigated with regard to the medical condition over the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the computer readable program causes the computing device to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 29. The computer program product of claim 28, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 30. The computer program product of claim 28, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 31. The computer program product of claim 28, wherein the computer readable program further causes the computing device to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the patient's mood. 32. The computer program product of claim 28, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 33. The computer program product of claim 28, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. 34. (canceled) 35. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating a medical condition of the patient, wherein the patient's electronic medical records (EMRs) are correlated to the patient by either:
the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient; or
the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient; and
generate an augmented reality display, in the HMD system, of one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition by overlaying the one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition over the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the instructions cause the processor to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 36. The apparatus of claim 35, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 37. The apparatus of claim 35, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 38. The apparatus of claim 35, wherein the instructions further cause the processor to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the patient's mood. 39. The apparatus of claim 35, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 40. The apparatus of claim 35, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. | A mechanism is provided for implementing an augmented reality display via a head mounted display (HMD) system that indicates areas of a patient's body corresponding to a medical condition and/or treatment of the patient overlaid on the actual view of the patient. A real-time image of an area of a patient's body being viewed by a medical professional is captured via the HMD system. One or more body parts of the patient are identified within the real-time image. The one or more identified body parts are correlated with the patient's electronic medical records (EMRs) indicating the medical condition and/or treatments associated with the patient. An augmented reality display is then generated in the HMD system of one or more areas of the patient's body corresponding to the medical condition and/or treatment of the patient overlaying the real-time image of the area of the patient's body. 1-20. (canceled) 21.
A method, in a data processing system comprising at least one processor and at least one memory, the at least one memory comprising instructions executed by the at least one processor to cause the at least one processor to implement a cognitive healthcare system, wherein the cognitive healthcare system operates to:
capturing, by a capturing mechanism of the cognitive healthcare system, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identifying, by the cognitive healthcare system, one or more body parts of the patient within the real-time image; correlating, by the cognitive healthcare system, the one or more identified body parts with the patient's electronic medical records (EMRs) indicating a medical condition of the patient, wherein the patient's electronic medical records (EMRs) are correlated to the patient by either:
the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient; or
the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient; and
generating, by the cognitive healthcare system, an augmented reality display, in the HMD system, of one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition by overlaying the one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition over the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the cognitive healthcare system:
accesses a schedule of the medical professional through a medical professional corpus or corpora of data;
determines an amount of time the medical professional has to spend with the patient; and
displays the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 22. The method of claim 21, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 23. The method of claim 21, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 24. The method of claim 21, wherein the cognitive healthcare system further:
captures a facial expression of the patient; captures one or more audible utterances of the patient; identifies a mood of the patient using the captured facial expression and the one or more audible utterances; and displays via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the patient's mood. 25. The method of claim 21, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 26. The method of claim 21, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. 27. (canceled) 28. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:
capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating a medical condition of the patient, wherein the patient's electronic medical records (EMRs) are correlated to the patient by either:
the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient; or
the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient; and
generate an augmented reality display, in the HMD system, of one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition by overlaying the one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition over the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the computer readable program causes the computing device to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 29. The computer program product of claim 28, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 30. The computer program product of claim 28, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 31. The computer program product of claim 28, wherein the computer readable program further causes the computing device to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the patient's mood. 32. The computer program product of claim 28, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 33. The computer program product of claim 28, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. 34. (canceled) 35. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: capture, by a capturing mechanism, a real-time image of an area of a patient's body being viewed by a medical professional via a head mounted display (HMD) system; identify one or more body parts of the patient within the real-time image; correlate the one or more identified body parts with the patient's electronic medical records (EMRs) indicating a medical condition of the patient, wherein the patient's electronic medical records (EMRs) are correlated to the patient by either:
the capturing mechanism capturing an image of the patient's face and the cognitive healthcare system utilizing facial recognition to identify the patient; or
the capturing mechanism capturing an audible utterance from the patient and the cognitive healthcare system utilizing voice recognition to identify the patient; and
generate an augmented reality display, in the HMD system, of one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition by overlaying the one or more areas of the patient's body affecting or needing to be further investigated with regard to the medical condition over the real-time image of the area of the patient's body, wherein a level of information displayed in the augmented reality display is based on a schedule of the medical professional such that the instructions cause the processor to:
access a schedule of the medical professional through a medical professional corpus or corpora of data;
determine an amount of time the medical professional has to spend with the patient; and
display the level of information in the augmented reality display commensurate with the amount of time the medical professional has to spend with the patient. 36. The apparatus of claim 35, wherein the augmented reality display displays one or more of a basic organ model, a current x-ray, a current computerized axial tomography (CAT) scan (CT), a current magnetic resonance imaging (MRI) scan, one or more of dissection models, overlapping organ systems, previous x-rays, previous CT scans, previous MRI scans, or points of surgery or pressure. 37. The apparatus of claim 35, wherein the augmented reality display further displays textual data representing lab results, treatment options, medical codes, latest medical research studies, or available organs for transplant. 38. The apparatus of claim 35, wherein the instructions further cause the processor to:
capture a facial expression of the patient; capture one or more audible utterances of the patient; identify a mood of the patient using the captured facial expression and the one or more audible utterances; and display via the augmented reality display an indication of how the medical professional should be presenting information to the patient based on the patient's mood. 39. The apparatus of claim 35, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an image of the medical professional's face and the cognitive healthcare system utilizing facial recognition to identify the medical professional. 40. The apparatus of claim 35, wherein the medical professional treating the patient is identified by the capturing mechanism capturing an audible utterance from the medical professional and the cognitive healthcare system utilizing voice recognition to identify the medical professional. | 2,600
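The schedule-based "level of information" step recited in claims 21, 28, and 35 above (access the professional's schedule, determine the time available with the patient, and display a commensurate level of detail) can be sketched as follows. The level names and minute thresholds are illustrative assumptions, not values from the application.

```python
# Hypothetical sketch of the claimed level-of-information selection:
# the amount of AR detail shown is chosen from the time the medical
# professional's schedule allows with the patient. Thresholds and
# level names below are assumptions for illustration only.

def select_detail_level(minutes_available: float) -> str:
    """Map time available with the patient to a display detail level."""
    if minutes_available < 5:
        return "summary"    # key findings only
    if minutes_available < 15:
        return "standard"   # findings plus current imaging
    return "detailed"       # full history, prior scans, research notes
```

A caller that has already looked up the schedule would simply pass the computed minutes, e.g. `select_detail_level(appointment_end - now)` after converting to minutes.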
10,723 | 10,723 | 15,789,661 | 2,689 | A portable safety assembly for alerting a user to a potential threat includes a housing that may be worn on an article of clothing. A processor is positioned within the housing and a proximity sensor is positioned within the housing. The proximity sensor detects motion within a trigger distance of the housing. A plurality of light emitters is each coupled to the outer wall of the housing. Each of the light emitters selectively emits light outwardly therefrom when the proximity sensor detects motion. In this way the plurality of light emitters alert a user to a potential threat. | 1. A portable safety assembly being configured to detect hidden threats thereby alerting a user to the hidden threats, said assembly comprising:
a housing being configured to be worn on an article of clothing; a processor being positioned within said housing; a proximity sensor being coupled to said housing, said proximity sensor being configured to detect motion within a trigger distance of said housing; a plurality of light emitters, each of said light emitters being coupled to said outer wall of said housing wherein each of said light emitters is configured to selectively emit light outwardly when said proximity sensor detects motion thereby facilitating said plurality of light emitters to alert a user to a potential threat; and a clip being coupled to an outer wall of said housing wherein said clip is configured to engage the article of clothing, said clip being spaced from said outer wall wherein said clip is configured to facilitate the article of clothing to be positioned between said clip and said housing. 2. (canceled) 3. The assembly according to claim 1, wherein said processor selectively generates an alarm sequence, said processor generating said alarm sequence when said proximity sensor detects motion within the trigger distance. 4. The assembly according to claim 3, wherein each of said light emitters is electrically coupled to said processor, each of said light emitters being turned on when said processor generates said alarm sequence. 5. The assembly according to claim 4, further comprising a switch being movably coupled to said outer wall of said housing wherein said switch is configured to be manipulated, said switch being electrically coupled to said processor such that said switch turns said processor on and off. 6. The assembly according to claim 5, further comprising a power supply being positioned within said housing, said power supply being electrically coupled to said processor, said power supply comprising at least one battery. 7. A portable safety assembly being configured to detect hidden threats thereby alerting a user to the hidden threats, said assembly comprising:
a housing being configured to be worn on an article of clothing, said housing having an outer wall; a clip being coupled to said outer wall of said housing wherein said clip is configured to engage the article of clothing, said clip being spaced from said outer wall wherein said clip is configured to facilitate the article of clothing to be positioned between said clip and said housing; a processor being positioned within said housing, said processor selectively generating an alarm sequence; a proximity sensor being coupled to said housing, said proximity sensor being configured to detect motion within a trigger distance of said housing, said processor generating said alarm sequence when said proximity sensor detects motion within the trigger distance; a plurality of light emitters, each of said light emitters being coupled to said outer wall of said housing wherein each of said light emitters is configured to selectively emit light outwardly therefrom, each of said light emitters being electrically coupled to said processor, each of said light emitters being turned on when said processor generates said alarm sequence wherein said plurality of light emitters is configured to alert the user to a potential threat; a switch being movably coupled to said outer wall of said housing wherein said switch is configured to be manipulated, said switch being electrically coupled to said processor such that said switch turns said processor on and off; and a power supply being positioned within said housing, said power supply being electrically coupled to said processor, said power supply comprising at least one battery. | A portable safety assembly for alerting a user to a potential threat includes a housing that may be worn on an article of clothing. A processor is positioned within the housing and a proximity sensor is positioned within the housing. The proximity sensor detects motion within a trigger distance of the housing.
A plurality of light emitters is each coupled to the outer wall of the housing. Each of the light emitters selectively emits light outwardly therefrom when the proximity sensor detects motion. In this way the plurality of light emitters alert a user to a potential threat. 1. A portable safety assembly being configured to detect hidden threats thereby alerting a user to the hidden threats, said assembly comprising:
a housing being configured to be worn on an article of clothing; a processor being positioned within said housing; a proximity sensor being coupled to said housing, said proximity sensor being configured to detect motion within a trigger distance of said housing; a plurality of light emitters, each of said light emitters being coupled to said outer wall of said housing wherein each of said light emitters is configured to selectively emit light outwardly when said proximity sensor detects motion thereby facilitating said plurality of light emitters to alert a user to a potential threat; and a clip being coupled to an outer wall of said housing wherein said clip is configured to engage the article of clothing, said clip being spaced from said outer wall wherein said clip is configured to facilitate the article of clothing to be positioned between said clip and said housing. 2. (canceled) 3. The assembly according to claim 1, wherein said processor selectively generates an alarm sequence, said processor generating said alarm sequence when said proximity sensor detects motion within the trigger distance. 4. The assembly according to claim 3, wherein each of said light emitters is electrically coupled to said processor, each of said light emitters being turned on when said processor generates said alarm sequence. 5. The assembly according to claim 4, further comprising a switch being movably coupled to said outer wall of said housing wherein said switch is configured to be manipulated, said switch being electrically coupled to said processor such that said switch turns said processor on and off. 6. The assembly according to claim 5, further comprising a power supply being positioned within said housing, said power supply being electrically coupled to said processor, said power supply comprising at least one battery. 7. A portable safety assembly being configured to detect hidden threats thereby alerting a user to the hidden threats, said assembly comprising:
a housing being configured to be worn on an article of clothing, said housing having an outer wall; a clip being coupled to said outer wall of said housing wherein said clip is configured to engage the article of clothing, said clip being spaced from said outer wall wherein said clip is configured to facilitate the article of clothing to be positioned between said clip and said housing; a processor being positioned within said housing, said processor selectively generating an alarm sequence; a proximity sensor being coupled to said housing, said proximity sensor being configured to detect motion within a trigger distance of said housing, said processor generating said alarm sequence when said proximity sensor detects motion within the trigger distance; a plurality of light emitters, each of said light emitters being coupled to said outer wall of said housing wherein each of said light emitters is configured to selectively emit light outwardly therefrom, each of said light emitters being electrically coupled to said processor, each of said light emitters being turned on when said processor generates said alarm sequence wherein said plurality of light emitters is configured to alert the user to a potential threat; a switch being movably coupled to said outer wall of said housing wherein said switch is configured to be manipulated, said switch being electrically coupled to said processor such that said switch turns said processor on and off; and a power supply being positioned within said housing, said power supply being electrically coupled to said processor, said power supply comprising at least one battery. | 2,600
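The control flow claimed for the safety clip (processor generates an alarm sequence when the proximity sensor detects motion within the trigger distance, turning on every light emitter; a switch powers the processor on and off) can be sketched in software terms. This is an illustrative model only; the class name, emitter count, and trigger distance are assumptions.

```python
# Illustrative sketch, not the patented implementation: the proximity
# sensor reports a detected distance; if the device is switched on and
# the distance is within the trigger distance, the processor runs an
# alarm sequence that turns on all light emitters.

TRIGGER_DISTANCE_M = 1.5  # assumed trigger distance in meters


class SafetyClip:
    def __init__(self, num_emitters: int = 4):
        self.emitters = [False] * num_emitters  # False = emitter off
        self.powered = False                    # switch state

    def toggle_switch(self) -> None:
        """Switch turns the processor on and off."""
        self.powered = not self.powered

    def on_motion(self, distance_m: float) -> None:
        """Callback invoked by the proximity sensor with a distance."""
        if self.powered and distance_m <= TRIGGER_DISTANCE_M:
            self._alarm_sequence()

    def _alarm_sequence(self) -> None:
        self.emitters = [True] * len(self.emitters)  # all emitters on
```

With the switch off, motion events are ignored; once toggled on, any detection within the trigger distance lights every emitter.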
10,724 | 10,724 | 15,212,466 | 2,647 | Systems and methods for verifying a device requesting map data is within an approved geographic boundary. The method includes receiving a request including a route hop count and a latency value calculated from the device to a network node. The route hop count and latency values are compared against threshold values. The device is determined to be within the approved geographic boundary based on the comparison. | 1. A method for verifying a device requesting map data is within an approved geographic boundary, the method comprising:
receiving, by a processor, a request from the device for map data, the request including a first route hop count and a first latency value calculated from the device to a first network node; comparing, by the processor, the first route hop count to a first threshold hop count; comparing, by the processor, the first latency value to a first latency threshold; and determining, by the processor, whether the device is within the approved geographic boundary based on the comparison of the first route hop count and the first latency value. 2. The method of claim 1, further comprising:
transmitting, after determining the device is within the approved geographic boundary, the map data to the device. 3. The method of claim 2, further comprising:
determining whether the device has an active subscription for the map data, and transmitting the map data only when the device has an active subscription. 4. The method of claim 1, wherein the request further includes an IP address of the device; and wherein determining whether the device is within the approved geographic boundary is further based on the IP address. 5. The method of claim 1, wherein the request further includes positional data derived from a global positioning system (GPS) of the device; and wherein determining whether the device is within the approved geographic boundary is further based on the positional data. 6. The method of claim 1, wherein the request includes a second route hop count and a second latency value calculated from the device to a second network node; the second route hop count and second latency value are compared to a second threshold hop count and second latency value; and wherein determining whether the device is within the approved geographic boundary is further based on the comparisons of the second route hop count and the second latency value. 7. The method of claim 1, wherein the first threshold hop count is a minimum number of route hops from the first network node to a location outside the approved geographic boundary. 8. The method of claim 1, wherein when the device is determined to not be within the approved geographic boundary, the processor transmits a command to lock a mapping application on the device. 9. A system for verifying a device is within an approved geographic boundary, the system comprising:
a geographic database configured to store map data; a receiving module configured to receive a request from the device for the map data, the request including a route hop count and a latency value calculated from the device to a network node; a threshold identification module configured to calculate a threshold hop count and a threshold latency value that correspond to the network node and the approved geographic boundary; a location verification module configured to determine based on the route hop count, the latency value, the threshold hop count, and the threshold latency value that the device is within the approved geographic boundary; and a transmitting module configured to transmit the map data to the device when the device is within the approved geographic boundary. 10. The system of claim 9, further comprising:
a subscription module configured to check if the device has an active subscription for the map data. 11. The system of claim 9, wherein the location verification module is further configured to determine that the device is within the approved geographic boundary based on an IP address for the device received by the receiving module. 12. The system of claim 9, wherein the latency value is an average latency value from the device to the network node. 13. The system of claim 9, wherein the location verification module is further configured to determine that the device is within the approved geographic boundary based on positional data derived from a global positioning system (GPS) of the device and received by the receiving module. 14. A method for distributing map data to a device, the method comprising:
generating, by a server, a blockchain including a smart contract for map data, wherein the smart contract includes a condition that a device be within a geographic boundary; receiving, by the server, a transaction including a hop count value and a latency value calculated from the device to a network node of a plurality of network nodes; receiving, by the server, a validation of the transaction by the plurality of network nodes storing the blockchain; determining, by the server, based on the hop count value and the latency value whether the device is within the geographic boundary; and transmitting, by the server, the map data to the device when the condition is true. 15. The method of claim 14, wherein the network node has an identified geospatial location. 16. The method of claim 14, wherein the latency value is an average latency value from the device to the network node. 17. A method for verifying a device requesting map data is within an approved geographic boundary, the method comprising:
transmitting, by the device, a first plurality of messages to a network node; receiving, by the device, a second plurality of messages from the network node; calculating, by the device, a number of route hops from the device to the network node from the first plurality of messages and the second plurality of messages; calculating, by the device, a latency value for a round trip from the device to the network node from the first plurality of messages and the second plurality of messages; transmitting, by the device, a request for the map data including an identity of the network node, the number of route hops, and the latency value; and receiving, by the device, the map data when a location of the device is verified to be within the approved geographic boundary based on the identity of the network node, the number of route hops, and the latency value. 18. The method of claim 17, further comprising:
identifying one or more network devices between the device and the network node; and transmitting with the request, the identified one or more network devices. 19. The method of claim 17, wherein the network node has an identified geospatial location. 20. The method of claim 17, wherein the request further includes an IP address for the device. | Systems and methods for verifying a device requesting map data is within an approved geographic boundary. The method includes receiving a request including a route hop count and a latency value calculated from the device to a network node. The route hop count and latency values are compared against threshold values. The device is determined to be within the approved geographic boundary based on the comparison.1. A method for verifying a device requesting map data is within an approved geographic boundary, the method comprising:
receiving, by a processor, a request from the device for map data, the request including a first route hop count and a first latency value calculated from the device to a first network node; comparing, by the processor, the first route hop count to a first threshold hop count; comparing, by the processor, the first latency value to a first latency threshold; and determining, by the processor, whether the device is within the approved geographic boundary based on the comparison of the first route hop count and the first latency value. 2. The method of claim 1, further comprising:
transmitting, after determining the device is within the approved geographic boundary, the map data to the device. 3. The method of claim 2, further comprising:
determining whether the device has an active subscription for the map data, and transmitting the map data only when the device has an active subscription. 4. The method of claim 1, wherein the request further includes an IP address of the device; and wherein determining whether the device is within the approved geographic boundary is further based on the IP address. 5. The method of claim 1, wherein the request further includes positional data derived from a global positioning system (GPS) of the device; and wherein determining whether the device is within the approved geographic boundary is further based on the positional data. 6. The method of claim 1, wherein the request includes a second route hop count and a second latency value calculated from the device to a second network node; the second route hop count and second latency value are compared to a second threshold hop count and second latency value; and wherein determining whether the device is within the approved geographic boundary is further based on the comparisons of the second route hop count and the second latency value. 7. The method of claim 1, wherein the first threshold hop count is a minimum number of route hops from the first network node to a location outside the approved geographic boundary. 8. The method of claim 1, wherein when the device is determined to not be within the approved geographic boundary, the processor transmits a command to lock a mapping application on the device. 9. A system for verifying a device is within an approved geographic boundary, the system comprising:
a geographic database configured to store map data; a receiving module configured to receive a request from the device for the map data, the request including a route hop count and a latency value calculated from the device to a network node; a threshold identification module configured to calculate a threshold hop count and a threshold latency value that correspond to the network node and the approved geographic boundary; a location verification module configured to determine based on the route hop count, the latency value, the threshold hop count, and the threshold latency value that the device is within the approved geographic boundary; and a transmitting module configured to transmit the map data to the device when the device is within the approved geographic boundary. 10. The system of claim 9, further comprising:
a subscription module configured to check if the device has an active subscription for the map data. 11. The system of claim 9, wherein the location verification module is further configured to determine that the device is within the approved geographic boundary based on an IP address for the device received by the receiving module. 12. The system of claim 9, wherein the latency value is an average latency value from the device to the network node. 13. The system of claim 9, wherein the location verification module is further configured to determine that the device is within the approved geographic boundary based on positional data derived from a global positioning system (GPS) of the device and received by the receiving module. 14. A method for distributing map data to a device, the method comprising:
generating, by a server, a blockchain including a smart contract for map data, wherein the smart contract includes a condition that a device be within a geographic boundary; receiving, by the server, a transaction including a hop count value and a latency value calculated from the device to a network node of a plurality of network nodes; receiving, by the server, a validation of the transaction by the plurality of network nodes storing the blockchain; determining, by the server, based on the hop count value and the latency value whether the device is within the geographic boundary; and transmitting, by the server, the map data to the device when the condition is true. 15. The method of claim 14, wherein the network node has an identified geospatial location. 16. The method of claim 14, wherein the latency value is an average latency value from the device to the network node. 17. A method for verifying a device requesting map data is within an approved geographic boundary, the method comprising:
transmitting, by the device, a first plurality of messages to a network node; receiving, by the device, a second plurality of messages from the network node; calculating, by the device, a number of route hops from the device to the network node from the first plurality of messages and the second plurality of messages; calculating, by the device, a latency value for a round trip from the device to the network node from the first plurality of messages and the second plurality of messages; transmitting, by the device, a request for the map data including an identity of the network node, the number of route hops, and the latency value; and receiving, by the device, the map data when a location of the device is verified to be within the approved geographic boundary based on the identity of the network node, the number of route hops, and the latency value. 18. The method of claim 17, further comprising:
identifying one or more network devices between the device and the network node; and transmitting with the request, the identified one or more network devices. 19. The method of claim 17, wherein the network node has an identified geospatial location. 20. The method of claim 17, wherein the request further includes an IP address for the device. | 2,600 |
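The server-side check recited in claims 1-8 can be sketched as follows. This is a hypothetical illustration, not the patent's actual implementation: the node identifiers, threshold values, and function names are all assumptions. A request carries a route hop count and a round-trip latency measured from the device to a known network node, and each measurement is compared against a per-node threshold derived from the approved geographic boundary (claim 7 notes the hop threshold can be the minimum number of hops from the node to any location outside the boundary).

```python
from dataclasses import dataclass

@dataclass
class NodeThresholds:
    # Claim 7: minimum hops from this node to any point OUTSIDE the boundary;
    # a device reporting no more hops than this is treated as inside.
    max_hop_count: int
    max_latency_ms: float

# Illustrative threshold table keyed by network-node identity (assumed names).
THRESHOLDS = {
    "node-a": NodeThresholds(max_hop_count=4, max_latency_ms=25.0),
    "node-b": NodeThresholds(max_hop_count=6, max_latency_ms=40.0),
}

def within_boundary(node_id: str, hop_count: int, latency_ms: float) -> bool:
    """True when both measurements fall at or under the node's thresholds."""
    t = THRESHOLDS.get(node_id)
    if t is None:
        return False  # unknown node: fail closed
    return hop_count <= t.max_hop_count and latency_ms <= t.max_latency_ms

def handle_request(node_id: str, hop_count: int, latency_ms: float,
                   has_subscription: bool) -> str:
    # Claims 2-3: transmit map data only when the device is inside the
    # boundary AND holds an active subscription.
    if not within_boundary(node_id, hop_count, latency_ms):
        return "lock_application"   # claim 8: lock the mapping application
    if not has_subscription:
        return "deny"
    return "transmit_map_data"
```

Claim 6 extends this by repeating the comparison against a second network node, which in this sketch would simply be a second `within_boundary` call combined with the first.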
10,725 | 10,725 | 15,980,338 | 2,624 | A method comprising entering a passive viewing state of an apparatus, receiving information indicative of a first input, determining a first operation based, at least in part, on a passive viewing state and the first input, performing the first operation, receiving environmental sensor information, determining that the environmental sensor information indicates that the apparatus is actively viewed by a user, entering of an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user, receiving information indicative of a second input, the second input being substantially the same as the first input, determining a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation, and performing the second operation is disclosed. | 1. An apparatus, comprising:
a near eye display; at least one processor; and at least one memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: entering of a passive viewing state of the apparatus; receipt of information indicative of a first input; determination of a first operation based, at least in part, on the passive viewing state and the first input; performance of the first operation; receipt of environmental sensor information indicating proximity of a user to the near eye display; determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; entering of an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; receipt of information indicative of a second input, the second input being substantially the same as the first input; determination of a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation; and performance of the second operation. 2. The apparatus of claim 1, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform:
receipt of different environmental sensor information; determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user; and entering of the passive viewing state of the apparatus based, at least in part, on the determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user. 3. The apparatus of claim 2, wherein the different environmental sensor information indicates that the user is distant from the near eye display. 4. The apparatus of claim 1, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform:
receipt of information indicative of a third input; determination of a third operation based, at least in part, on the active viewing state and the third input; performance of the third operation; receipt of other environmental sensor information; determination that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; entering of the passive viewing state of the apparatus based, at least in part, on the determination that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; receipt of information indicative of a fourth input, the fourth input being substantially the same as the third input; and preclusion of performance of an operation based, at least in part, on the passive viewing state and the fourth input. 5. The apparatus of claim 1, wherein the determination of the first operation comprises determination that the first operation correlates with the first input and the passive viewing state. 6. The apparatus of claim 1, wherein the determination of the second operation comprises determination that the second operation correlates with the second input and the active viewing state. 7. The apparatus of claim 1, wherein the operations that correlate with inputs and the passive viewing state avoid interaction associated with information displayed in an unimpaired-viewing display mode. 8. The apparatus of claim 1, wherein the operations that correlate with inputs and the active viewing state avoid limited user visual interaction associated with the impaired-viewing display mode. 9. The apparatus of claim 1, wherein the first input is a tilt input and the second input is a tilt input. 10. A method comprising:
entering a passive viewing state of an apparatus; receiving information indicative of a first input; determining a first operation based, at least in part, on the passive viewing state and the first input; performing the first operation; receiving environmental sensor information indicating proximity of a user to a near eye display of the apparatus; determining that the environmental sensor information indicates that the apparatus is actively viewed by the user; entering an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; receiving information indicative of a second input, the second input being substantially the same as the first input; determining a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation; and performing the second operation. 11. The method of claim 10, further comprising:
receiving different environmental sensor information; determining that the different environmental sensor information indicates that the apparatus is not actively viewed by the user; and entering the passive viewing state of the apparatus based, at least in part, on the determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user. 12. The method of claim 11, wherein the different environmental sensor information indicates that the user is distant from the near eye display. 13. The method of claim 10, further comprising:
receiving information indicative of a third input; determining a third operation based, at least in part, on the active viewing state and the third input; performing the third operation; receiving other environmental sensor information; determining that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; entering the passive viewing state of the apparatus based, at least in part, on the determination that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; receiving information indicative of a fourth input, the fourth input being substantially the same as the third input; and precluding performance of an operation based, at least in part, on the passive viewing state and the fourth input. 14. The method of claim 10, wherein the determination of the first operation comprises determination that the first operation correlates with the first input and the passive viewing state. 15. The method of claim 10, wherein the determination of the second operation comprises determination that the second operation correlates with the second input and the active viewing state. 16. The method of claim 10, wherein the operations that correlate with inputs and the passive viewing state avoid interaction associated with information displayed in an unimpaired-viewing display mode. 17. The method of claim 10, wherein the operations that correlate with inputs and the active viewing state avoid limited user visual interaction associated with the impaired-viewing display mode. 18. At least one computer-readable medium encoded with instructions that, when executed by a processor, perform:
entering of a passive viewing state of an apparatus; receipt of information indicative of a first input; determination of a first operation based, at least in part, on the passive viewing state and the first input; performance of the first operation; receipt of environmental sensor information indicating proximity of a user to a near eye display of the apparatus; determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; entering of an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; receipt of information indicative of a second input, the second input being substantially the same as the first input; determination of a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation; and performance of the second operation. 19. The medium of claim 18, further encoded with instructions that, when executed by a processor, perform:
receipt of different environmental sensor information; determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user; and entering of the passive viewing state of the apparatus based, at least in part, on the determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user. 20. The medium of claim 18, wherein the different environmental sensor information indicates that the user is distant from the near eye display. | A method comprising entering a passive viewing state of an apparatus, receiving information indicative of a first input, determining a first operation based, at least in part, on a passive viewing state and the first input, performing the first operation, receiving environmental sensor information, determining that the environmental sensor information indicates that the apparatus is actively viewed by a user, entering of an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user, receiving information indicative of a second input, the second input being substantially the same as the first input, determining a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation, and performing the second operation is disclosed.1. An apparatus, comprising:
a near eye display; at least one processor; and at least one memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: entering of a passive viewing state of the apparatus; receipt of information indicative of a first input; determination of a first operation based, at least in part, on the passive viewing state and the first input; performance of the first operation; receipt of environmental sensor information indicating proximity of a user to the near eye display; determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; entering of an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; receipt of information indicative of a second input, the second input being substantially the same as the first input; determination of a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation; and performance of the second operation. 2. The apparatus of claim 1, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform:
receipt of different environmental sensor information; determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user; and entering of the passive viewing state of the apparatus based, at least in part, on the determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user. 3. The apparatus of claim 2, wherein the different environmental sensor information indicates that the user is distant from the near eye display. 4. The apparatus of claim 1, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform:
receipt of information indicative of a third input; determination of a third operation based, at least in part, on the active viewing state and the third input; performance of the third operation; receipt of other environmental sensor information; determination that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; entering of the passive viewing state of the apparatus based, at least in part, on the determination that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; receipt of information indicative of a fourth input, the fourth input being substantially the same as the third input; and preclusion of performance of an operation based, at least in part, on the passive viewing state and the fourth input. 5. The apparatus of claim 1, wherein the determination of the first operation comprises determination that the first operation correlates with the first input and the passive viewing state. 6. The apparatus of claim 1, wherein the determination of the second operation comprises determination that the second operation correlates with the second input and the active viewing state. 7. The apparatus of claim 1, wherein the operations that correlate with inputs and the passive viewing state avoid interaction associated with information displayed in an unimpaired-viewing display mode. 8. The apparatus of claim 1, wherein the operations that correlate with inputs and the active viewing state avoid limited user visual interaction associated with the impaired-viewing display mode. 9. The apparatus of claim 1, wherein the first input is a tilt input and the second input is a tilt input. 10. A method comprising:
entering a passive viewing state of an apparatus; receiving information indicative of a first input; determining a first operation based, at least in part, on the passive viewing state and the first input; performing the first operation; receiving environmental sensor information indicating proximity of a user to a near eye display of the apparatus; determining that the environmental sensor information indicates that the apparatus is actively viewed by the user; entering an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; receiving information indicative of a second input, the second input being substantially the same as the first input; determining a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation; and performing the second operation. 11. The method of claim 10, further comprising:
receiving different environmental sensor information; determining that the different environmental sensor information indicates that the apparatus is not actively viewed by the user; and entering the passive viewing state of the apparatus based, at least in part, on the determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user. 12. The method of claim 11, wherein the different environmental sensor information indicates that the user is distant from the near eye display. 13. The method of claim 10, further comprising:
receiving information indicative of a third input; determining a third operation based, at least in part, on the active viewing state and the third input; performing the third operation; receiving other environmental sensor information; determining that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; entering the passive viewing state of the apparatus based, at least in part, on the determination that the other environmental sensor information indicates that the apparatus is not actively viewed by the user; receiving information indicative of a fourth input, the fourth input being substantially the same as the third input; and precluding performance of an operation based, at least in part, on the passive viewing state and the fourth input. 14. The method of claim 10, wherein the determination of the first operation comprises determination that the first operation correlates with the first input and the passive viewing state. 15. The method of claim 10, wherein the determination of the second operation comprises determination that the second operation correlates with the second input and the active viewing state. 16. The method of claim 10, wherein the operations that correlate with inputs and the passive viewing state avoid interaction associated with information displayed in an unimpaired-viewing display mode. 17. The method of claim 10, wherein the operations that correlate with inputs and the active viewing state avoid limited user visual interaction associated with the impaired-viewing display mode. 18. At least one computer-readable medium encoded with instructions that, when executed by a processor, perform:
entering of a passive viewing state of an apparatus; receipt of information indicative of a first input; determination of a first operation based, at least in part, on the passive viewing state and the first input; performance of the first operation; receipt of environmental sensor information indicating proximity of a user to a near eye display of the apparatus; determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; entering of an active viewing state of the apparatus based, at least in part, on the determination that the environmental sensor information indicates that the apparatus is actively viewed by the user; receipt of information indicative of a second input, the second input being substantially the same as the first input; determination of a second operation based, at least in part, on the active viewing state and the second input, the second operation being different from the first operation; and performance of the second operation. 19. The medium of claim 18, further encoded with instructions that, when executed by a processor, perform:
receipt of different environmental sensor information; determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user; and entering of the passive viewing state of the apparatus based, at least in part, on the determination that the different environmental sensor information indicates that the apparatus is not actively viewed by the user. 20. The medium of claim 18, wherein the different environmental sensor information indicates that the user is distant from the near eye display. | 2,600 |
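The two-state behavior in claims 1-4 and 10-13 can be modeled as a small state machine. This is a minimal sketch under assumed names (the specific inputs and operations are hypothetical): the same input maps to different operations depending on whether the apparatus is in the passive or active viewing state, the state follows proximity-based environmental sensing, and a missing table entry models preclusion of performance (claim 4).

```python
PASSIVE, ACTIVE = "passive", "active"

# (state, input) -> operation; a missing entry precludes any operation.
# The concrete input/operation names are illustrative assumptions.
OPERATIONS = {
    (PASSIVE, "tilt"): "show_notification_summary",  # first operation
    (ACTIVE, "tilt"): "scroll_content",              # second, different operation
    (ACTIVE, "tap"): "open_item",                    # third operation
}

class Apparatus:
    def __init__(self):
        self.state = PASSIVE  # entering the passive viewing state

    def on_environmental_sensor(self, user_is_near: bool) -> None:
        # Proximity of the user to the near eye display drives the state:
        # near -> actively viewed (active state), distant -> passive state.
        self.state = ACTIVE if user_is_near else PASSIVE

    def on_input(self, name: str):
        # Determine the operation that correlates with the current viewing
        # state and the input; None models preclusion of performance.
        return OPERATIONS.get((self.state, name))

a = Apparatus()
first = a.on_input("tilt")        # passive state: first operation
a.on_environmental_sensor(True)   # user close to the display -> active state
second = a.on_input("tilt")       # same input, different operation
```

Returning to the passive state (`on_environmental_sensor(False)`) makes `on_input("tap")` return `None`, mirroring claim 4's preclusion of the fourth input.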
10,726 | 10,726 | 15,969,139 | 2,659 | A signal activity detector (SAD) combines at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD. Each of the decision signals indicates whether or not activity is detected in the input signal according to respective decision criteria. The SAD sends the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. | 1. A method, implemented in a signal activity detector (SAD), for detecting activity in an input signal, the method comprising:
combining at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD, each of the decision signals indicating whether or not activity is detected in the input signal according to respective decision criteria; sending the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. 2. The method of claim 1, further comprising receiving the decision signals from one or more processing circuits of the SAD configured to apply the decision criteria to the input signal to generate the decision signals. 3. The method of claim 1, wherein the decision criteria of at least one of the decision signals is without regard to hangover. 4. The method of claim 1, wherein the decision criteria of at least one of the decision signals is based on hangover. 5. The method of claim 1, wherein the combining comprises combining by a logical AND of at least two of the decision signals. 6. The method of claim 1, wherein the combining comprises combining by a logical OR of at least two of the decision signals. 7. The method of claim 1, further comprising selecting a combination logic for the combining based on properties of the input signal. 8. The method of claim 1, wherein the combining corrects a false indication of activity in the input signal under a given noise condition indicated by at least one of the decision signals. 9. The method of claim 1, wherein the combining comprises combining a first and second decision signal using a first combination logic to produce a preliminary result, and combining the preliminary result with a third decision signal using a second combination logic that is different from the first combination logic. 10. A signal activity detector (SAD) for detecting activity in a received input signal, the SAD comprising:
one or more processing circuits and a memory, the memory containing instructions executable by the one or more processing circuits whereby the SAD is configured to:
combine at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD, each of the decision signals indicating whether or not activity is detected in the input signal according to respective decision criteria;
send the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. 11. The SAD of claim 10, wherein the one or more processing circuits are configured to apply the decision criteria to the input signal to generate the decision signals. 12. The SAD of claim 10, wherein the decision criteria of at least one of the decision signals is without regard to hangover. 13. The SAD of claim 10, wherein the decision criteria of at least one of the decision signals is based on hangover. 14. The SAD of claim 10, wherein to combine, the SAD is configured to combine, by a logical AND, at least two of the decision signals. 15. The SAD of claim 10, wherein to combine, the SAD is configured to combine, by a logical OR, at least two of the decision signals. 16. The SAD of claim 10, wherein the instructions are executable by the one or more processing circuits to further configure the SAD to select a combination logic for the combining based on properties of the input signal. 17. The SAD of claim 10, wherein to combine, the SAD is configured to correct a false indication of activity in the input signal under a given noise condition indicated by at least one of the decision signals. 18. The SAD of claim 10, wherein to combine, the SAD is configured to combine a first and second decision signal using a first combination logic to produce a preliminary result, and combine the preliminary result with a third decision signal using a second combination logic that is different from the first combination logic. 19. A non-transitory computer readable medium storing a computer program product for controlling a signal activity detector (SAD), the computer program product comprising software instructions that, when run on a programmable processor circuit of the SAD, cause the SAD to:
combine at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD, each of the decision signals indicating whether or not activity is detected in the input signal according to respective decision criteria; send the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. | A signal activity detector (SAD) combines at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD. Each of the decision signals indicates whether or not activity is detected in the input signal according to respective decision criteria. The SAD sends the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal.1. A method, implemented in a signal activity detector (SAD), for detecting activity in an input signal, the method comprising:
combining at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD, each of the decision signals indicating whether or not activity is detected in the input signal according to respective decision criteria; sending the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. 2. The method of claim 1, further comprising receiving the decision signals from one or more processing circuits of the SAD configured to apply the decision criteria to the input signal to generate the decision signals. 3. The method of claim 1, wherein the decision criteria of at least one of the decision signals is without regard to hangover. 4. The method of claim 1, wherein the decision criteria of at least one of the decision signals is based on hangover. 5. The method of claim 1, wherein the combining comprises combining by a logical AND of at least two of the decision signals. 6. The method of claim 1, wherein the combining comprises combining by a logical OR of at least two of the decision signals. 7. The method of claim 1, further comprising selecting a combination logic for the combining based on properties of the input signal. 8. The method of claim 1, wherein the combining corrects a false indication of activity in the input signal under a given noise condition indicated by at least one of the decision signals. 9. The method of claim 1, wherein the combining comprises combining a first and second decision signal using a first combination logic to produce a preliminary result, and combining the preliminary result with a third decision signal using a second combination logic that is different from the first combination logic. 10. A signal activity detector (SAD) for detecting activity in a received input signal, the SAD comprising:
one or more processing circuits and a memory, the memory containing instructions executable by the one or more processing circuits whereby the SAD is configured to:
combine at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD, each of the decision signals indicating whether or not activity is detected in the input signal according to respective decision criteria;
send the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. 11. The SAD of claim 10, wherein the one or more processing circuits are configured to apply the decision criteria to the input signal to generate the decision signals. 12. The SAD of claim 10, wherein the decision criteria of at least one of the decision signals is without regard to hangover. 13. The SAD of claim 10, wherein the decision criteria of at least one of the decision signals is based on hangover. 14. The SAD of claim 10, wherein to combine, the SAD is configured to combine, by a logical AND, at least two of the decision signals. 15. The SAD of claim 10, wherein to combine, the SAD is configured to combine, by a logical OR, at least two of the decision signals. 16. The SAD of claim 10, wherein the instructions are executable by the one or more processing circuits to further configure the SAD to select a combination logic for the combining based on properties of the input signal. 17. The SAD of claim 10, wherein to combine, the SAD is configured to correct a false indication of activity in the input signal under a given noise condition indicated by at least one of the decision signals. 18. The SAD of claim 10, wherein to combine, the SAD is configured to combine a first and second decision signal using a first combination logic to produce a preliminary result, and combine the preliminary result with a third decision signal using a second combination logic that is different from the first combination logic. 19. A non-transitory computer readable medium storing a computer program product for controlling a signal activity detector (SAD), the computer program product comprising software instructions that, when run on a programmable processor circuit of the SAD, cause the SAD to:
combine at least three decision signals to generate a combined decision signal as input to a hangover addition circuit of the SAD, each of the decision signals indicating whether or not activity is detected in the input signal according to respective decision criteria; send the combined decision signal to the hangover addition circuit to generate a final decision signal of the SAD as to whether or not activity is detected in the input signal. | 2,600 |
10,727 | 10,727 | 15,142,034 | 2,674 | An image processing apparatus comprising: an applied amount calculation unit which calculates an amount of applied toner of the print image data and an amount of applied toner of the copy-forgery-inhibited pattern image data; a determination unit which determines, based on the amounts of applied toner calculated by the applied amount calculation unit, whether the sum of the amounts of applied toner of the print image data and the copy-forgery-inhibited image exceeds a predetermined amount of applied toner; and an applied amount control unit which, in a case where the determination unit determines that the sum of the amounts of applied toner exceeds the predetermined amount of applied toner, restricts the amount of applied toner of the print image data to prevent the sum of the amounts of applied toner from exceeding the predetermined amount of applied toner. | 1. An image processing apparatus which generates pattern print image data by compositing print image data and pattern image data, the apparatus comprising:
an applied amount calculation unit which calculates an amount of applied toner of the print image data and an amount of applied toner of the pattern image data; a determination unit which determines, based on the amounts of applied toner calculated by the applied amount calculation unit, whether the sum of the amounts of applied toner of the print image data and the pattern image exceeds a predetermined amount of applied toner; an applied amount control unit which, in a case where the determination unit determines that the sum of the amounts of applied toner exceeds the predetermined amount of applied toner, restricts the amount of applied toner of the print image data to prevent the sum of the amounts of applied toner from exceeding the predetermined amount of applied toner; and a composition unit which composites the print image data whose amount of applied toner is restricted by the applied amount control unit, and the pattern image data, generating the pattern print image data. | An image processing apparatus comprising: an applied amount calculation unit which calculates an amount of applied toner of the print image data and an amount of applied toner of the copy-forgery-inhibited pattern image data; a determination unit which determines, based on the amounts of applied toner calculated by the applied amount calculation unit, whether the sum of the amounts of applied toner of the print image data and the copy-forgery-inhibited image exceeds a predetermined amount of applied toner; and an applied amount control unit which, in a case where the determination unit determines that the sum of the amounts of applied toner exceeds the predetermined amount of applied toner, restricts the amount of applied toner of the print image data to prevent the sum of the amounts of applied toner from exceeding the predetermined amount of applied toner.1. 
An image processing apparatus which generates pattern print image data by compositing print image data and pattern image data, the apparatus comprising:
an applied amount calculation unit which calculates an amount of applied toner of the print image data and an amount of applied toner of the pattern image data; a determination unit which determines, based on the amounts of applied toner calculated by the applied amount calculation unit, whether the sum of the amounts of applied toner of the print image data and the pattern image exceeds a predetermined amount of applied toner; an applied amount control unit which, in a case where the determination unit determines that the sum of the amounts of applied toner exceeds the predetermined amount of applied toner, restricts the amount of applied toner of the print image data to prevent the sum of the amounts of applied toner from exceeding the predetermined amount of applied toner; and a composition unit which composites the print image data whose amount of applied toner is restricted by the applied amount control unit, and the pattern image data, generating the pattern print image data. | 2,600 |
10,728 | 10,728 | 16,013,682 | 2,628 | A 3D object is opened for presentation on a display in response to a trigger, such as a gaze direction detected by eye tracking in conjunction with a trigger hand gesture. A player's emulated hand in emulated space is configured to have the same gesture as the player's real hand as imaged by a camera, and only when the emulated hand is within the 3D object in emulated space are gestures of the hand correlated to input commands. Otherwise, hand gestures are not considered for correlation to commands. | 1. A device comprising:
at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor to: present a three-dimensional (3D) object on at least one display; image a player's appendage to render an emulated appendage, the emulated appendage having a gesture configuration as established by the player's appendage, the emulated appendage being different from the 3D object; responsive to the emulated appendage being at least partially within the 3D object, and responsive to the gesture configuration correlating to a command, execute the command; and responsive to the emulated appendage not being at least partially within the 3D object, not execute the command. 2. The device of claim 1, comprising the at least one processor and the at least one display. 3. The device of claim 1, wherein the instructions are executable to:
present the 3D object responsive to reception of at least one trigger. 4. The device of claim 3, wherein the at least one trigger comprises at least one eye tracking input signal. 5. The device of claim 3, wherein the at least one trigger comprises at least one gesture of the appendage. 6. The device of claim 5, wherein the at least one trigger comprises at least one eye tracking input signal. 7. The device of claim 1, wherein the command comprises a command to present at least one menu with at least one selection. 8. The device of claim 3, wherein the trigger is input by a first player and the emulated appendage is an emulated appendage of a second player. 9. The device of claim 1, wherein the instructions are executable to:
conceal the gesture configuration of the emulated appendage. 10. An assembly, comprising:
at least one display; at least one processor configured to control the at least one display to present images thereon, the at least one processor being configured with instructions to: responsive to receiving a first eye tracking signal, present a three-dimensional (3D) object on the at least one display; receive at least one image of a player's appendage; based at least in part on the at least one image, render an emulated appendage, the emulated appendage having a gesture configuration as established by the player's appendage; responsive to the emulated appendage being at least partially within the 3D object, determine whether the gesture configuration correlates to a command, and responsive to the gesture configuration correlating to a command, execute the command; and responsive to the emulated appendage not being at least partially within the 3D object, not determine whether the gesture configuration correlates to a command. 11. The assembly of claim 10, wherein the instructions are executable to:
present the 3D object responsive to reception of at least one trigger. 12. The assembly of claim 11, wherein the at least one trigger comprises at least one gesture of the appendage. 13. The assembly of claim 10, wherein the command comprises a command to present at least one menu with at least one selection. 14. The assembly of claim 11, wherein the trigger is input by a first player and the emulated appendage is an emulated appendage of a second player. 15. The assembly of claim 10, wherein the instructions are executable to:
conceal the gesture configuration of the emulated appendage. 16. A method, comprising:
opening a 3D object for presentation on a display in response to a trigger; emulating a player's hand in emulated space to render an emulated hand configured to have a same gesture as the player's hand as imaged by a camera; and only when the emulated hand is within the 3D object in emulated space, correlating gestures of the emulated hand to input commands, and otherwise not considering hand gestures for correlation to commands. 17. The method of claim 16, wherein the trigger comprises a gaze direction detected by eye tracking. 18. The method of claim 16, wherein the trigger comprises a trigger hand gesture. 19. The method of claim 16, wherein at least one of the commands comprises a command to present at least one menu with at least one selection. 20. The method of claim 16, wherein the trigger is input by a first player and the emulated hand is an emulated hand of a second player. | A 3D object is opened for presentation on a display in response to a trigger, such as a gaze direction detected by eye tracking in conjunction with a trigger hand gesture. A player's emulated hand in emulated space is configured to have the same gesture as the player's real hand as imaged by a camera, and only when the emulated hand is within the 3D object in emulated space are gestures of the hand correlated to input commands. Otherwise, hand gestures are not considered for correlation to commands.1. A device comprising:
at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor to: present a three-dimensional (3D) object on at least one display; image a player's appendage to render an emulated appendage, the emulated appendage having a gesture configuration as established by the player's appendage, the emulated appendage being different from the 3D object; responsive to the emulated appendage being at least partially within the 3D object, and responsive to the gesture configuration correlating to a command, execute the command; and responsive to the emulated appendage not being at least partially within the 3D object, not execute the command. 2. The device of claim 1, comprising the at least one processor and the at least one display. 3. The device of claim 1, wherein the instructions are executable to:
present the 3D object responsive to reception of at least one trigger. 4. The device of claim 3, wherein the at least one trigger comprises at least one eye tracking input signal. 5. The device of claim 3, wherein the at least one trigger comprises at least one gesture of the appendage. 6. The device of claim 5, wherein the at least one trigger comprises at least one eye tracking input signal. 7. The device of claim 1, wherein the command comprises a command to present at least one menu with at least one selection. 8. The device of claim 3, wherein the trigger is input by a first player and the emulated appendage is an emulated appendage of a second player. 9. The device of claim 1, wherein the instructions are executable to:
conceal the gesture configuration of the emulated appendage. 10. An assembly, comprising:
at least one display; at least one processor configured to control the at least one display to present images thereon, the at least one processor being configured with instructions to: responsive to receiving a first eye tracking signal, present a three-dimensional (3D) object on the at least one display; receive at least one image of a player's appendage; based at least in part on the at least one image, render an emulated appendage, the emulated appendage having a gesture configuration as established by the player's appendage; responsive to the emulated appendage being at least partially within the 3D object, determine whether the gesture configuration correlates to a command, and responsive to the gesture configuration correlating to a command, execute the command; and responsive to the emulated appendage not being at least partially within the 3D object, not determine whether the gesture configuration correlates to a command. 11. The assembly of claim 10, wherein the instructions are executable to:
present the 3D object responsive to reception of at least one trigger. 12. The assembly of claim 11, wherein the at least one trigger comprises at least one gesture of the appendage. 13. The assembly of claim 10, wherein the command comprises a command to present at least one menu with at least one selection. 14. The assembly of claim 11, wherein the trigger is input by a first player and the emulated appendage is an emulated appendage of a second player. 15. The assembly of claim 10, wherein the instructions are executable to:
conceal the gesture configuration of the emulated appendage. 16. A method, comprising:
opening a 3D object for presentation on a display in response to a trigger; emulating a player's hand in emulated space to render an emulated hand configured to have a same gesture as the player's hand as imaged by a camera; and only when the emulated hand is within the 3D object in emulated space, correlating gestures of the emulated hand to input commands, and otherwise not considering hand gestures for correlation to commands. 17. The method of claim 16, wherein the trigger comprises a gaze direction detected by eye tracking. 18. The method of claim 16, wherein the trigger comprises a trigger hand gesture. 19. The method of claim 16, wherein at least one of the commands comprises a command to present at least one menu with at least one selection. 20. The method of claim 16, wherein the trigger is input by a first player and the emulated hand is an emulated hand of a second player. | 2,600 |
10,729 | 10,729 | 15,116,843 | 2,616 | The present invention relates to an image reconstruction apparatus ( 10 ) comprising: a receiving unit ( 60 ) for receiving a 3D image sequence ( 56 ) of 3D medical images over time resulting from a scan of a body part of a subject ( 12 ); a selection unit ( 64 ) for selecting a local point of interest ( 76 ) within at least one of the 3D medical images of the 3D image sequence ( 56 ); a slice generator ( 66 ) for generating three 2D view planes ( 74 ) of the at least one of the 3D medical images, wherein said three 2D view planes ( 74 ) are arranged perpendicularly to each other and intersect in the selected point of interest ( 76 ); and a tracking unit ( 68 ) for determining a trajectory of the point of interest ( 76 ) within the 3D image sequence ( 56 ) over time; wherein the slice generator ( 66 ) is configured to generate from the 3D image sequence ( 56 ) 2D image sequences ( 72 ) in the 2D view planes ( 74 ) by automatically adapting the intersection of the 2D view planes ( 74 ) over time along the trajectory of the point of interest ( 76 ). | 1. An image reconstruction apparatus comprising:
a receiving unit for receiving a 3D image sequence of 3D medical images over time resulting from a scan of a body part of a subject; a selection unit for selecting a local point of interest within at least one of the 3D medical images of the 3D image sequence; a slice generator for generating three 2D view planes of the at least one of the 3D medical images, wherein said three 2D view planes are arranged perpendicularly to each other and intersect in the selected point of interest; and a tracking unit for determining a trajectory of the point of interest within the 3D image sequence over time; wherein the slice generator is configured to generate from the 3D image sequence 2D image sequences in the 2D view planes by automatically adapting the intersection of the 2D view planes over time along the trajectory of the point of interest. 2. The image reconstruction apparatus of claim 1, wherein the selection unit comprises a user input interface for manually selecting the point of interest within the at least one of the 3D medical images of the 3D image sequence. 3. The image reconstruction apparatus of claim 1, wherein the selection unit is configured to automatically select the point of interest within the at least one of the 3D medical images of the 3D image sequence by identifying one or more landmarks within the at least one of the 3D medical images. 4. The image reconstruction apparatus of claim 1, further comprising a storage unit for storing the received 3D image sequence. 5. The image reconstruction apparatus of claim 1, further comprising an image acquisition unit for scanning the body part of the subject and acquiring the 3D image sequence. 6. The image reconstruction apparatus of claim 1, wherein the 3D image sequence is a 3D ultrasound image sequence. 7. The image reconstruction apparatus of claim 5, wherein the image acquisition unit comprises:
an ultrasound transducer for transmitting and receiving ultrasound waves to and from the body part of the subject; and an ultrasound image reconstruction unit for reconstructing the 3D ultrasound image sequence from the ultrasound waves received from the body part of the subject. 8. The image reconstruction apparatus of claim 1, wherein the tracking unit is configured to determine the trajectory of the point of interest by:
identifying one or more distinctive points or image features in a local surrounding of the point of interest in the at least one of the 3D medical images of the 3D image sequence; tracking one or more reference trajectories of the one or more distinctive points or image features in the 3D image sequence over time; and determining the trajectory of the point of interest based on the one or more reference trajectories. 9. The image reconstruction apparatus of claim 8, wherein the tracking unit is configured to identify the one or more distinctive points or image features by identifying image regions within the at least one of the 3D medical images having local image speckle gradients above a predefined threshold value. 10. The image reconstruction apparatus of claim 8, wherein the tracking unit is configured to track the one or more reference trajectories by minimizing an energy term of a dense displacement field that includes a displacement of the one or more distinctive points or image features. 11. The image reconstruction apparatus of claim 8, wherein the tracking unit is configured to determine the trajectory of the point of interest based on the one or more reference trajectories by a local interpolation between the one or more reference trajectories. 12. The image reconstruction apparatus of claim 1, further comprising a display unit for displaying at least one of the 2D image sequences. 13. The image reconstruction apparatus of claim 12, wherein the display unit is configured to concurrently display the 3D image sequence and three 2D image sequences belonging to the three perpendicularly arranged 2D view planes. 14. Method for reconstructing medical images, comprising the steps of:
receiving a 3D image sequence of 3D medical images over time resulting from a scan of a body part of a subject; selecting a local point of interest within at least one of the 3D medical images of the 3D image sequence; generating three 2D view planes of the at least one of the 3D medical images, wherein said three 2D view planes are arranged perpendicularly to each other and intersect in the selected point of interest; determining a trajectory of the point of interest within the 3D image sequence over time; and generating from the 3D image sequence 2D image sequences in the 2D view planes by automatically adapting the intersection of the 2D view planes over time along the trajectory of the point of interest. 15. Computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 14 when said computer program is carried out on a computer. | The present invention relates to an image reconstruction apparatus ( 10 ) comprising: a receiving unit ( 60 ) for receiving a 3D image sequence ( 56 ) of 3D medical images over time resulting from a scan of a body part of a subject ( 12 ); a selection unit ( 64 ) for selecting a local point of interest ( 76 ) within at least one of the 3D medical images of the 3D image sequence ( 56 ); a slice generator ( 66 ) for generating three 2D view planes ( 74 ) of the at least one of the 3D medical images, wherein said three 2D view planes ( 74 ) are arranged perpendicularly to each other and intersect in the selected point of interest ( 76 ); and a tracking unit ( 68 ) for determining a trajectory of the point of interest ( 76 ) within the 3D image sequence ( 56 ) over time; wherein the slice generator ( 66 ) is configured to generate from the 3D image sequence ( 56 ) 2D image sequences ( 72 ) in the 2D view planes ( 74 ) by automatically adapting the intersection of the 2D view planes ( 74 ) over time along the trajectory of the point of interest ( 76 ).1. 
An image reconstruction apparatus comprising:
a receiving unit for receiving a 3D image sequence of 3D medical images over time resulting from a scan of a body part of a subject; a selection unit for selecting a local point of interest within at least one of the 3D medical images of the 3D image sequence; a slice generator for generating three 2D view planes of the at least one of the 3D medical images, wherein said three 2D view planes are arranged perpendicularly to each other and intersect in the selected point of interest; and a tracking unit for determining a trajectory of the point of interest within the 3D image sequence over time; wherein the slice generator is configured to generate from the 3D image sequence 2D image sequences in the 2D view planes by automatically adapting the intersection of the 2D view planes over time along the trajectory of the point of interest. 2. The image reconstruction apparatus of claim 1, wherein the selection unit comprises a user input interface for manually selecting the point of interest within the at least one of the 3D medical images of the 3D image sequence. 3. The image reconstruction apparatus of claim 1, wherein the selection unit is configured to automatically select the point of interest within the at least one of the 3D medical images of the 3D image sequence by identifying one or more landmarks within the at least one of the 3D medical images. 4. The image reconstruction apparatus of claim 1, further comprising a storage unit for storing the received 3D image sequence. 5. The image reconstruction apparatus of claim 1, further comprising an image acquisition unit for scanning the body part of the subject and acquiring the 3D image sequence. 6. The image reconstruction apparatus of claim 1, wherein the 3D image sequence is a 3D ultrasound image sequence. 7. The image reconstruction apparatus of claim 5, wherein the image acquisition unit comprises:
an ultrasound transducer for transmitting and receiving ultrasound waves to and from the body part of the subject; and an ultrasound image reconstruction unit for reconstructing the 3D ultrasound image sequence from the ultrasound waves received from the body part of the subject. 8. The image reconstruction apparatus of claim 1, wherein the tracking unit is configured to determine the trajectory of the point of interest by:
identifying one or more distinctive points or image features in a local surrounding of the point of interest in the at least one of the 3D medical images of the 3D image sequence; tracking one or more reference trajectories of the one or more distinctive points or image features in the 3D image sequence over time; and determining the trajectory of the point of interest based on the one or more reference trajectories. 9. The image reconstruction apparatus of claim 8, wherein the tracking unit is configured to identify the one or more distinctive points or image features by identifying image regions within the at least one of the 3D medical images having local image speckle gradients above a predefined threshold value. 10. The image reconstruction apparatus of claim 8, wherein the tracking unit is configured to track the one or more reference trajectories by minimizing an energy term of a dense displacement field that includes a displacement of the one or more distinctive points or image features. 11. The image reconstruction apparatus of claim 8, wherein the tracking unit is configured to determine the trajectory of the point of interest based on the one or more reference trajectories by a local interpolation between the one or more reference trajectories. 12. The image reconstruction apparatus of claim 1, further comprising a display unit for displaying at least one of the 2D image sequences. 13. The image reconstruction apparatus of claim 12, wherein the display unit is configured to concurrently display the 3D image sequence and three 2D image sequences belonging to the three perpendicularly arranged 2D view planes. 14. Method for reconstructing medical images, comprising the steps of:
receiving a 3D image sequence of 3D medical images over time resulting from a scan of a body part of a subject; selecting a local point of interest within at least one of the 3D medical images of the 3D image sequence; generating three 2D view planes of the at least one of the 3D medical images, wherein said three 2D view planes are arranged perpendicularly to each other and intersect in the selected point of interest; determining a trajectory of the point of interest within the 3D image sequence over time; and generating from the 3D image sequence 2D image sequences in the 2D view planes by automatically adapting the intersection of the 2D view planes over time along the trajectory of the point of interest. 15. Computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 14 when said computer program is carried out on a computer. | 2,600 |
10,730 | 10,730 | 15,927,173 | 2,646 | Various communication systems may benefit from appropriate restriction on use. For example, certain wireless communication systems may benefit from radio-access-technology-specific access restrictions. A method can include registering a user equipment with a network element. The registering can include identifying user equipment capabilities. The method can also include receiving a response from the network element indicating restriction on use of at least one radio access technology. | 1. A method, comprising:
registering a user equipment with a network element, wherein the registering comprises identifying user equipment capabilities; receiving a response from the network element indicating restriction on use of at least one radio access technology; and operating the user equipment in accordance with the indicated restriction. 2. The method of claim 1, wherein the capabilities are provided via S1 or non-access stratum signaling. 3. The method of claim 1, wherein the response comprises a tracking area update acceptance message or attach acceptance message. 4. The method of claim 1, wherein the capabilities comprise capabilities for using a plurality of radio access technologies. 5. The method of claim 1, further comprising:
disabling at least one radio access technology for the user equipment based on the response. 6. A method, comprising:
receiving a registration request for a user equipment at a network element, wherein the request identifies user equipment capabilities; determining a restriction on use of at least one radio access technology for the user equipment; and at least one of sending a response to the user equipment indicating the restriction on use of at least one radio access technology, or indicating to an access node that the access node is to impose at least one restriction on serving at least one radio access technology to the user equipment. 7. The method of claim 6, further comprising:
obtaining subscription information regarding the user equipment from a further network element, wherein the restriction on use is determined based on the subscription information. 8. The method of claim 6, wherein the restriction on use is determined further based on roaming information regarding the user equipment. 9. The method of claim 6, wherein the capabilities are provided via S1 or non-access stratum signaling. 10. The method of claim 6, wherein the response comprises a tracking area update acceptance message or attach acceptance message. 11. The method of claim 6, wherein the capabilities comprise capabilities for using a plurality of radio access technologies. 12. The method of claim 6, wherein the determination is further based on a timer related to use of at least one radio access technology. 13. The method of claim 6, wherein the determination is further based on access control validity information related to use of at least one radio access technology. 14. A method, comprising:
receiving a context setup message indicating at least one restriction on serving at least one radio access technology to a user equipment; and imposing the at least one restriction on serving at least one radio access technology to the user equipment, based on the context setup message. 15. An apparatus, comprising:
at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus at least to register a user equipment with a network element, wherein the registering comprises identifying user equipment capabilities; receive a response from the network element indicating restriction on use of at least one radio access technology; and operate the user equipment in accordance with the indicated restriction. 16. The apparatus of claim 15, wherein the network element comprises a mobility management entity. 17. The apparatus of claim 15, wherein the capabilities are provided via S1 or non-access stratum signaling. 18. The apparatus of claim 15, wherein the response comprises a tracking area update acceptance message or attach acceptance message. 19. The apparatus of claim 15, wherein the capabilities comprise capabilities for using a plurality of radio access technologies. 20. The apparatus of claim 15, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus at least to disable at least one radio access technology for the user equipment based on the response. 21. An apparatus, comprising:
at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus at least to receive a registration request for a user equipment at a network element, wherein the request identifies user equipment capabilities; determine a restriction on use of at least one radio access technology for the user equipment; and at least one of send a response to the user equipment indicating the restriction on use of at least one radio access technology, or indicate to an access node that the access node is to impose at least one restriction on serving at least one radio access technology to the user equipment. 22. The apparatus of claim 21, wherein the network element comprises a mobility management entity. 23. The apparatus of claim 21, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus at least to obtain subscription information regarding the user equipment from a further network element, wherein the restriction on use is determined based on the subscription information. 24. The apparatus of claim 21, wherein the restriction on use is determined further based on roaming information regarding the user equipment. 25. The apparatus of claim 21, wherein the further network element comprises a home subscriber server or unified data manager. 26. The apparatus of claim 21, wherein the capabilities are provided via S1 or non-access stratum signaling. 27. The apparatus of claim 21, wherein the response comprises a tracking area update acceptance message or attach acceptance message. 28. The apparatus of claim 21, wherein the access node comprises an evolved Node B or a next generation Node B. 29. The apparatus of claim 21, wherein the capabilities comprise capabilities for using a plurality of radio access technologies. 30. 
The apparatus of claim 21, wherein the determination is further based on a timer related to use of at least one radio access technology. 31. The apparatus of claim 21, wherein the determination is further based on access control validity information related to use of at least one radio access technology. 32. An apparatus, comprising:
at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus at least to receive a context setup message indicating at least one restriction on serving at least one radio access technology to a user equipment; and impose the at least one restriction on serving at least one radio access technology to the user equipment, based on the context setup message. | 2,600 |
10,731 | 10,731 | 16,051,602 | 2,685 | A data acquisition and signal detection through radio frequency identification (RFID) system and a method of using the system are provided. The system includes a base station, a receiver, and an RFID device. The system is operable to be used with a wellbore and a drill string during a drilling process to obtain data regarding properties of the wellbore and/or the drill string. | 1. A data acquisition and signal detection system comprising:
a base station (i) positioned proximate to a surface of a wellbore, and (ii) operable to distribute a radio frequency identification (RFID) device down a wellbore via an opening of a drill string; a command sub (i) secured to the drill string, and (ii) operable to wirelessly communicate a command to the device; and a receiver (i) positioned proximate to the surface of the wellbore, (ii) configured to detect the device via a signal when the device travels up an annulus of the wellbore, and (iii) configured to acquire data from the device, wherein, the device is configured to (i) be programmed by the base station, (ii) receive the command from the command sub, (iii) obtain the data, and (iv) permit transmission of the data to the receiver. 2. The system of claim 1,
wherein,
the device is operable to travel (i) out of the drill string via a bit of the drill string, (ii) up the annulus of the wellbore via a fluid within the wellbore, and (iii) to the surface of the wellbore. 3. The system of claim 1,
wherein,
the device is configured to (i) obtain the data from a sensor operable to measure a property of the wellbore, and (ii) store the data. 4. The system of claim 3,
wherein,
the device is operable to store the data via a read/write memory. 5. The system of claim 3,
wherein,
the sensor is positioned on the device or along the drill string. 6. The system of claim 1,
wherein,
the receiver is operable to wirelessly (i) identify the device, and (ii) receive the data from the device. 7. The system of claim 6,
wherein,
the receiver is operable to (i) decode the data, and (ii) transmit decoded data to a database. 8. The system of claim 1,
wherein,
the device is encapsulated by a casing. 9. The system of claim 1,
wherein,
the system is operable to capture the device via a filter positioned proximate to the surface of the wellbore. 10. The system of claim 9,
wherein,
the device is operable to be (i) reprogrammed by the base station, and (ii) recirculated within the wellbore. 11. A method to obtain data of a wellbore via a data acquisition and signal detection system, the method comprising the steps of:
distributing, via a base station positioned proximate to a surface of a wellbore, a radio frequency identification (RFID) device down the wellbore via a drill string, the device configured to (i) receive a command from a command sub of the drill string, and (ii) obtain data associated with a property of the wellbore; circulating a fluid, via a pump, to cause the device to travel (i) down the drill string, (ii) out of a bit of the drill string, (iii) up an annulus of the wellbore, and (iv) to a surface of the wellbore; detecting, via a receiver positioned proximate to the surface of the wellbore, the device as the device travels from the bit and to the surface of the wellbore; and acquiring, via the receiver, the data of the device. 12. The method of claim 11, further comprising the step of:
programming the device, via the base station, to receive the command from the command sub. 13. The method of claim 11,
wherein,
the device is configured to (i) obtain the data from a sensor operable to measure a property of the wellbore, and (ii) store the data. 14. The method of claim 13,
wherein,
the device is operable to store the data via a read/write memory. 15. The method of claim 13,
wherein,
the sensor is positioned on the device or along the drill string. 16. The method of claim 11, further comprising the steps of:
wirelessly identifying the device via the receiver; and wirelessly receiving the data from the device via the receiver. 17. The method of claim 16, further comprising the steps of:
decoding the data received from the device via the receiver; and transmitting decoded data to a remote device via the receiver. 18. The method of claim 11, further comprising the step of:
capturing the device via a filter positioned proximate to the surface of the wellbore. 19. The method of claim 18, further comprising the steps of:
resetting the device via the base station; and recirculating the device within the wellbore via the pump. 20. The method of claim 11,
wherein,
the device is encapsulated by a casing. | 2,600 |
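The acquisition cycle in the claims above — program the RFID tag at the base station, let it record sensor data while circulating through the drill string and annulus, then detect, decode, and optionally reset and recirculate it at the surface — can be sketched as a toy state model. Every class name, field, and the decoding step below is invented for illustration and carries no relation to any real RFID API.

```python
# Toy sketch of the RFID data-acquisition cycle: program, measure, acquire,
# reset. Names and the "decoding" arithmetic are invented for this sketch.
class RFIDDevice:
    def __init__(self):
        self.program = None
        self.memory = []          # read/write memory (claims 4 and 14)

    def receive_command(self, command, sensor_reading):
        # Command sub wirelessly triggers a measurement (claim 1).
        if command == "measure":
            self.memory.append(sensor_reading)


class BaseStation:
    def program_device(self, device, program):
        device.program = program
        device.memory.clear()     # reset before recirculation (claim 19)


class Receiver:
    def acquire(self, device):
        # Detect the tag, read its memory, and decode (claims 6-7);
        # the scale factor is a stand-in for a real decoding step.
        return [round(raw * 0.1, 1) for raw in device.memory]


base, receiver, tag = BaseStation(), Receiver(), RFIDDevice()
base.program_device(tag, "log-annulus-pressure")
for reading in (101, 103, 107):   # measurements taken during circulation
    tag.receive_command("measure", reading)
print(receiver.acquire(tag))      # → [10.1, 10.3, 10.7]
```

A usage detail mirrored from the claims: reprogramming the captured tag clears its memory, so the same device can be recirculated for a fresh measurement pass rather than discarded.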
10,732 | 10,732 | 16,144,629 | 2,612 | The present disclosure generally relates to user interfaces for adjusting simulated image effects. In some embodiments, user interfaces for adjusting a simulated depth effect are described. In some embodiments, user interfaces for displaying adjustments to a simulated depth effect are described. In some embodiments, user interfaces for indicating an interference to adjusting simulated image effects are described. | 1. An electronic device, comprising:
a display; one or more input devices; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display, a representation of image data and a simulated depth effect indicator without displaying a control for adjusting a magnitude of the simulated depth effect, wherein the simulated depth effect indicator includes a numerical indication of the current magnitude of the simulated depth effect;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input;
in response to detecting the first input:
displaying, on the display, an adjustable slider associated with manipulating the representation of image data concurrently with the simulated depth effect indicator, wherein displaying the adjustable slider comprises sliding the representation of image data on the display to display the adjustable slider, wherein the slider is displayed at a location that was previously occupied by a portion of the representation of the image data that is displayed above the slider after sliding the representation of the image data on the display, and wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider;
in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value, and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value; and
subsequent to changing the appearance of the representation of image data:
displaying, on the display, the simulated depth effect indicator, wherein the simulated depth effect indicator includes an updated numerical indication of the current magnitude of the simulated depth effect, the updated numerical indication corresponding to the second value. 2. (canceled) 3. The electronic device of claim 1, wherein:
prior to detecting the first input, the simulated depth effect indicator is displayed with a first visual characteristic, and after detecting the first input, the simulated depth effect indicator is displayed with a second visual characteristic different from the first visual characteristic. 4-6. (canceled) 7. The electronic device of claim 1, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider. 8. The electronic device of claim 1, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element. 9. The electronic device of claim 1, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture. 10. The electronic device of claim 1, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed. 11. The electronic device of claim 1, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed. 12. The electronic device of claim 1, wherein the one or more programs further include instructions for:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. 13. The electronic device of claim 12, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output. 14. The electronic device of claim 1, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. 15. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for:
displaying, on the display, a representation of image data and a simulated depth effect indicator without displaying a control for adjusting a magnitude of the simulated depth effect, wherein the simulated depth effect indicator includes a numerical indication of the current magnitude of the simulated depth effect; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input:
displaying, on the display, an adjustable slider associated with manipulating the representation of image data concurrently with the simulated depth effect indicator, wherein displaying the adjustable slider comprises sliding the representation of image data on the display to display the adjustable slider, wherein the slider is displayed at a location that was previously occupied by a portion of the representation of the image data that is displayed above the slider after sliding the representation of the image data on the display, and wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value, and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value; and
subsequent to changing the appearance of the representation of image data:
displaying, on the display, the simulated depth effect indicator, wherein the simulated depth effect indicator includes an updated numerical indication of the current magnitude of the simulated depth effect, the updated numerical indication corresponding to the second value. 16. A method, comprising:
at an electronic device with a display and one or more input devices:
displaying, on the display, a representation of image data and a simulated depth effect indicator without displaying a control for adjusting a magnitude of the simulated depth effect, wherein the simulated depth effect indicator includes a numerical indication of the current magnitude of the simulated depth effect;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input;
in response to detecting the first input:
displaying, on the display, an adjustable slider associated with manipulating the representation of image data concurrently with the simulated depth effect indicator, wherein displaying the adjustable slider comprises sliding the representation of image data on the display to display the adjustable slider, wherein the slider is displayed at a location that was previously occupied by a portion of the representation of the image data that is displayed above the slider after sliding the representation of the image data on the display, and wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and
in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value; and
subsequent to changing the appearance of the representation of image data:
displaying, on the display, the simulated depth effect indicator, wherein the simulated depth effect indicator includes an updated numerical indication of the current magnitude of the simulated depth effect, the updated numerical indication corresponding to the second value. 17. (canceled) 18. The non-transitory computer-readable storage medium of claim 15, wherein:
prior to detecting the first input, the simulated depth effect indicator is displayed with a first visual characteristic, and after detecting the first input, the simulated depth effect indicator is displayed with a second visual characteristic different from the first visual characteristic. 19. (canceled) 20. The non-transitory computer-readable storage medium of claim 15, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider. 21. The non-transitory computer-readable storage medium of claim 15, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element. 22. The non-transitory computer-readable storage medium of claim 15, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture. 23. The non-transitory computer-readable storage medium of claim 15, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed. 24. The non-transitory computer-readable storage medium of claim 15, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed. 25. The non-transitory computer-readable storage medium of claim 15, wherein the one or more programs further include instructions for:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. 26. The non-transitory computer-readable storage medium of claim 25, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output. 27. The non-transitory computer-readable storage medium of claim 15, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. 28. (canceled) 29. The method of claim 16, wherein:
prior to detecting the first input, the simulated depth effect indicator is displayed with a first visual characteristic, and after detecting the first input, the simulated depth effect indicator is displayed with a second visual characteristic different from the first visual characteristic. 30. (canceled) 31. The method of claim 16, further comprising:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider. 32. The method of claim 16, further comprising:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element. 33. The method of claim 16, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture. 34. The method of claim 16, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed. 35. The method of claim 16, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed. 36. The method of claim 16, further comprising:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. 37. The method of claim 36, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output. 38. The method of claim 16, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. | The present disclosure generally relates to user interfaces for adjusting simulated image effects. In some embodiments, user interfaces for adjusting a simulated depth effect are described. In some embodiments, user interfaces for displaying adjustments to a simulated depth effect are described. In some embodiments, user interfaces for indicating an interference to adjusting simulated image effects are described. 1. An electronic device, comprising:
a display; one or more input devices; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display, a representation of image data and a simulated depth effect indicator without displaying a control for adjusting a magnitude of the simulated depth effect, wherein the simulated depth effect indicator includes a numerical indication of the current magnitude of the simulated depth effect;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input;
in response to detecting the first input:
displaying, on the display, an adjustable slider associated with manipulating the representation of image data concurrently with the simulated depth effect indicator, wherein displaying the adjustable slider comprises sliding the representation of image data on the display to display the adjustable slider, wherein the slider is displayed at a location that was previously occupied by a portion of the representation of the image data that is displayed above the slider after sliding the representation of the image data on the display, and wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider;
in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value, and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value; and
subsequent to changing the appearance of the representation of image data:
displaying, on the display, the simulated depth effect indicator, wherein the simulated depth effect indicator includes an updated numerical indication of the current magnitude of the simulated depth effect, the updated numerical indication corresponding to the second value. 2. (canceled) 3. The electronic device of claim 1, wherein:
prior to detecting the first input, the simulated depth effect indicator is displayed with a first visual characteristic, and after detecting the first input, the simulated depth effect indicator is displayed with a second visual characteristic different from the first visual characteristic. 4-6. (canceled) 7. The electronic device of claim 1, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider. 8. The electronic device of claim 1, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element. 9. The electronic device of claim 1, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture. 10. The electronic device of claim 1, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed. 11. The electronic device of claim 1, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed. 12. The electronic device of claim 1, wherein the one or more programs further include instructions for:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. 13. The electronic device of claim 12, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output. 14. The electronic device of claim 1, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. 15. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for:
displaying, on the display, a representation of image data and a simulated depth effect indicator without displaying a control for adjusting a magnitude of the simulated depth effect, wherein the simulated depth effect indicator includes a numerical indication of the current magnitude of the simulated depth effect; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input:
displaying, on the display, an adjustable slider associated with manipulating the representation of image data concurrently with the simulated depth effect indicator, wherein displaying the adjustable slider comprises sliding the representation of image data on the display to display the adjustable slider, wherein the slider is displayed at a location that was previously occupied by a portion of the representation of the image data that is displayed above the slider after sliding the representation of the image data on the display, and wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value, and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value; and
subsequent to changing the appearance of the representation of image data:
displaying, on the display, the simulated depth effect indicator, wherein the simulated depth effect indicator includes an updated numerical indication of the current magnitude of the simulated depth effect, the updated numerical indication corresponding to the second value. 16. A method, comprising:
at an electronic device with a display and one or more input devices:
displaying, on the display, a representation of image data and a simulated depth effect indicator without displaying a control for adjusting a magnitude of the simulated depth effect, wherein the simulated depth effect indicator includes a numerical indication of the current magnitude of the simulated depth effect;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input;
in response to detecting the first input:
displaying, on the display, an adjustable slider associated with manipulating the representation of image data concurrently with the simulated depth effect indicator, wherein displaying the adjustable slider comprises sliding the representation of image data on the display to display the adjustable slider, wherein the slider is displayed at a location that was previously occupied by a portion of the representation of the image data that is displayed above the slider after sliding the representation of the image data on the display, and wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and
in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value; and
subsequent to changing the appearance of the representation of image data:
displaying, on the display, the simulated depth effect indicator, wherein the simulated depth effect indicator includes an updated numerical indication of the current magnitude of the simulated depth effect, the updated numerical indication corresponding to the second value. 17. (canceled) 18. The non-transitory computer-readable storage medium of claim 15, wherein:
prior to detecting the first input, the simulated depth effect indicator is displayed with a first visual characteristic, and after detecting the first input, the simulated depth effect indicator is displayed with a second visual characteristic different from the first visual characteristic. 19. (canceled) 20. The non-transitory computer-readable storage medium of claim 15, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider. 21. The non-transitory computer-readable storage medium of claim 15, wherein the one or more programs further include instructions for:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element. 22. The non-transitory computer-readable storage medium of claim 15, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture. 23. The non-transitory computer-readable storage medium of claim 15, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed. 24. The non-transitory computer-readable storage medium of claim 15, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed. 25. The non-transitory computer-readable storage medium of claim 15, wherein the one or more programs further include instructions for:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. 26. The non-transitory computer-readable storage medium of claim 25, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output. 27. The non-transitory computer-readable storage medium of claim 15, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. 28. (canceled) 29. The method of claim 16, wherein:
prior to detecting the first input, the simulated depth effect indicator is displayed with a first visual characteristic, and after detecting the first input, the simulated depth effect indicator is displayed with a second visual characteristic different from the first visual characteristic. 30. (canceled) 31. The method of claim 16, further comprising:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider. 32. The method of claim 16, further comprising:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element. 33. The method of claim 16, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture. 34. The method of claim 16, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed. 35. The method of claim 16, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed. 36. The method of claim 16, further comprising:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. 37. The method of claim 36, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output. 38. The method of claim 16, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. | 2,600 |
10,733 | 10,733 | 15,730,305 | 2,685 | Equipment and methods for coupling a top drive to one or more tools to facilitate data and/or signal transfer therebetween include a receiver assembly connectable to a top drive; a tool adapter connectable to a tool string, wherein a coupling between the receiver assembly and the tool adapter transfers at least one of torque and load therebetween; and a stationary data uplink comprising at least one of: a data swivel coupled to the receiver assembly; a wireless module coupled to the tool adapter; and a wireless transceiver coupled to the tool adapter. Equipment and methods include coupling a receiver assembly to a tool adapter to transfer at least one of torque and load therebetween, the tool adapter being connected to the tool string; collecting data at one or more points proximal the tool string; and communicating the data to a stationary computer while rotating the tool adapter. | 1. A tool coupler, comprising:
a receiver assembly connectable to a top drive; a tool adapter connectable to a tool string, wherein a coupling between the receiver assembly and the tool adapter transfers at least one of torque and load therebetween; and a stationary data uplink comprising at least one selected from the group of:
a data swivel coupled to the receiver assembly;
a wireless module coupled to the tool adapter; and
a wireless transceiver coupled to the tool adapter. 2. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the data swivel coupled to the receiver assembly, and the data swivel is communicatively coupled with a stationary computer by data stator lines. 3. The tool coupler of claim 1, wherein the stationary data uplink comprises the data swivel coupled to the receiver assembly, the tool coupler further comprising a data coupling between the receiver assembly and the tool adapter. 4. The tool coupler of claim 3, wherein the data swivel is communicatively coupled with the data coupling by data rotator lines. 5. The tool coupler of claim 3, wherein the data coupling is communicatively coupled with a downhole data feed comprising at least one telemetry network selected from the group of:
a mud pulse telemetry network, an electromagnetic telemetry network, a wired drill pipe telemetry network, and an acoustic telemetry network. 6. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the wireless module coupled to the tool adapter, and the wireless module is communicatively coupled with a stationary computer by at least one signal selected from the group of:
Wi-Fi signals,
Bluetooth signals, and
radio signals. 7. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the wireless module coupled to the tool adapter, and the wireless module is communicatively coupled with a downhole data feed comprising at least one telemetry network selected from the group of:
a mud pulse telemetry network,
an electromagnetic telemetry network,
a wired drill pipe telemetry network, and
an acoustic telemetry network. 8. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the wireless transceiver coupled to the tool adapter, and the wireless transceiver comprises an electronic acoustic receiver. 9. The tool coupler of claim 8, wherein the wireless transceiver is communicatively coupled with a stationary computer by at least one signal selected from the group of:
Wi-Fi signals, Bluetooth signals, radio signals, and acoustic signals. 10. The tool coupler of claim 8, wherein the wireless transceiver is wirelessly communicatively coupled with a downhole data feed comprising at least one selected from the group of:
a mud pulse telemetry network, an electromagnetic telemetry network, a wired drill pipe telemetry network, and an acoustic telemetry network. 11. The tool coupler of claim 1, further comprising an electric power supply for the stationary data uplink. 12. The tool coupler of claim 11, wherein the electric power supply is selected from the group consisting of:
an inductor coupled to the receiver assembly, and a battery coupled to the tool adapter. 13.-20. (canceled) 21. The tool coupler of claim 1, further comprising:
the receiver assembly having a housing, one or more ring couplers disposed within the housing, and an actuator connected to each ring coupler. 22. The tool coupler of claim 21, wherein the one or more ring couplers comprise a first ring coupler and a second ring coupler, wherein the first ring coupler is movable translationally relative to the housing and the second ring coupler is movable rotationally relative to the housing. 23. The tool coupler of claim 21, wherein the tool adapter has a tool stem, a central shaft, and a profile complementary to the one or more ring couplers. 24. The tool coupler of claim 23, wherein the profile includes a plurality of splines complementary with a mating feature of the one or more ring couplers. | Equipment and methods for coupling a top drive to one or more tools to facilitate data and/or signal transfer therebetween include a receiver assembly connectable to a top drive; a tool adapter connectable to a tool string, wherein a coupling between the receiver assembly and the tool adapter transfers at least one of torque and load therebetween; and a stationary data uplink comprising at least one of: a data swivel coupled to the receiver assembly; a wireless module coupled to the tool adapter; and a wireless transceiver coupled to the tool adapter. Equipment and methods include coupling a receiver assembly to a tool adapter to transfer at least one of torque and load therebetween, the tool adapter being connected to the tool string; collecting data at one or more points proximal the tool string; and communicating the data to a stationary computer while rotating the tool adapter. 1. A tool coupler, comprising:
a receiver assembly connectable to a top drive; a tool adapter connectable to a tool string, wherein a coupling between the receiver assembly and the tool adapter transfers at least one of torque and load therebetween; and a stationary data uplink comprising at least one selected from the group of:
a data swivel coupled to the receiver assembly;
a wireless module coupled to the tool adapter; and
a wireless transceiver coupled to the tool adapter. 2. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the data swivel coupled to the receiver assembly, and the data swivel is communicatively coupled with a stationary computer by data stator lines. 3. The tool coupler of claim 1, wherein the stationary data uplink comprises the data swivel coupled to the receiver assembly, the tool coupler further comprising a data coupling between the receiver assembly and the tool adapter. 4. The tool coupler of claim 3, wherein the data swivel is communicatively coupled with the data coupling by data rotator lines. 5. The tool coupler of claim 3, wherein the data coupling is communicatively coupled with a downhole data feed comprising at least one telemetry network selected from the group of:
a mud pulse telemetry network, an electromagnetic telemetry network, a wired drill pipe telemetry network, and an acoustic telemetry network. 6. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the wireless module coupled to the tool adapter, and the wireless module is communicatively coupled with a stationary computer by at least one signal selected from the group of:
Wi-Fi signals,
Bluetooth signals, and
radio signals. 7. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the wireless module coupled to the tool adapter, and the wireless module is communicatively coupled with a downhole data feed comprising at least one telemetry network selected from the group of:
a mud pulse telemetry network,
an electromagnetic telemetry network,
a wired drill pipe telemetry network, and
an acoustic telemetry network. 8. The tool coupler of claim 1, wherein:
the stationary data uplink comprises the wireless transceiver coupled to the tool adapter, and the wireless transceiver comprises an electronic acoustic receiver. 9. The tool coupler of claim 8, wherein the wireless transceiver is communicatively coupled with a stationary computer by at least one signal selected from the group of:
Wi-Fi signals, Bluetooth signals, radio signals, and acoustic signals. 10. The tool coupler of claim 8, wherein the wireless transceiver is wirelessly communicatively coupled with a downhole data feed comprising at least one selected from the group of:
a mud pulse telemetry network, an electromagnetic telemetry network, a wired drill pipe telemetry network, and an acoustic telemetry network. 11. The tool coupler of claim 1, further comprising an electric power supply for the stationary data uplink. 12. The tool coupler of claim 11, wherein the electric power supply is selected from the group consisting of:
an inductor coupled to the receiver assembly, and a battery coupled to the tool adapter. 13.-20. (canceled) 21. The tool coupler of claim 1, further comprising:
the receiver assembly having a housing, one or more ring couplers disposed within the housing, and an actuator connected to each ring coupler. 22. The tool coupler of claim 21, wherein the one or more ring couplers comprise a first ring coupler and a second ring coupler, wherein the first ring coupler is movable translationally relative to the housing and the second ring coupler is movable rotationally relative to the housing. 23. The tool coupler of claim 21, wherein the tool adapter has a tool stem, a central shaft, and a profile complementary to the one or more ring couplers. 24. The tool coupler of claim 23, wherein the profile includes a plurality of splines complementary with a mating feature of the one or more ring couplers. | 2,600 |
10,734 | 10,734 | 14,720,051 | 2,684 | An apparatus, system, and method provide drying procedure information through a user interface. A monitoring device transmits drying procedure data measured by sensors within a structure undergoing the drying procedure to a server. In response to requests received through a communication network from a user interface, the server transmits the drying procedure information that is presented through the user interface. A variety of information and services related to the drying procedure may be provided through the user interface. | 1. A device comprising:
a moisture sensor; a data interface connected to the moisture sensor and configured to obtain, using the moisture sensor, a moisture measurement of a building material forming a portion of a building structure; and communication interface circuitry connected to the data interface and configured to provide, to a cellular telephone, information indicative of the moisture measurement for transmission of the information by the cellular telephone through a cellular network. 2. The device of claim 1, wherein the information comprises a digital image indicative of the moisture measurement. 3. The device of claim 2, wherein the communication interface circuitry is further configured to provide the cellular telephone with a photograph captured at the building structure. 4. The device of claim 2, wherein the data interface is further configured to send the digital image to a visual display of a user interface for displaying the information on the visual display. 5. The device of claim 3, wherein the information comprises a time stamp. 6. The device of claim 1, wherein the data interface is further configured to send the information to a visual display of a user interface for displaying the information on the visual display. 7. The device of claim 1, wherein the communication interface circuitry is configured to provide the cellular telephone with information indicative of the moisture measurement for transmission of the information by the cellular telephone through a cellular network to a server. 8. A device comprising:
a moisture sensor; and a data interface connected to the moisture sensor and configured to obtain a moisture measurement of a building material forming a portion of a building structure using the moisture sensor, the device forming an apparatus for transmitting information indicative of the moisture measurement when connected to a cellular telephone such that the apparatus, in response to instructions from a controller, transmits the information from the cellular telephone through a cellular network. 9. The device of claim 8, wherein the information comprises a digital image indicative of the moisture measurement. 10. The device of claim 9, wherein the communication interface circuitry is further configured to provide the cellular telephone with a photograph captured at the building structure. 11. The device of claim 9, wherein the data interface is further configured to send the digital image to a visual display of a user interface for displaying the information on the visual display. 12. The device of claim 10, wherein the information comprises a time stamp. 13. The device of claim 8, wherein the data interface is further configured to send the information to a visual display of a user interface for displaying the information on the visual display. 14. The device of claim 8, wherein the apparatus transmits the information from the cellular telephone through the cellular network to a server. 15. A method comprising:
receiving, from a moisture sensor, moisture measurement data of a building material forming a portion of a building structure; and transmitting information indicative of the moisture measurement data through a cellular telephone. 16. The method of claim 15, wherein the information comprises a digital image indicative of the moisture measurement. 17. The method of claim 16, further comprising:
transmitting a photograph captured at the building structure through the cellular telephone. 18. The method of claim 17, further comprising:
capturing the photograph at the building structure. 19. The method of claim 18, further comprising sending the digital image to a visual display of a user interface for displaying the information on the visual display. 20. The method of claim 17, wherein the information comprises a time stamp. 21. The method of claim 15, further comprising:
generating a data message comprising the information; and transmitting the data message through a cellular network to a server. | 2,600
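The method claims of this record (claims 15, 20, and 21) describe packing a moisture measurement with a time stamp into a data message for relay through a cellular telephone to a server. A minimal sketch of such a payload, assuming a JSON message format and the field names shown here (neither is specified in the claims):

```python
import json
import time

def build_moisture_message(sensor_reading: float, material: str) -> str:
    """Pack a moisture measurement into a data message.

    The payload carries the measurement from the moisture sensor, the
    building material it was taken from, and a time stamp; a cellular
    telephone would relay the serialized message through a cellular
    network to a server.
    """
    payload = {
        "moisture_pct": sensor_reading,   # measurement of the building material
        "material": material,             # e.g. drywall, subfloor
        "timestamp": int(time.time()),    # time stamp carried with the information
    }
    return json.dumps(payload)
```

The digital-image and photograph limitations of the dependent claims would ride alongside this payload as additional message parts.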
10,735 | 10,735 | 15,949,513 | 2,698 | An image sensor comprises a pixel circuit including a reset transistor and configured to output a pixel signal; and a differential comparator including a pixel input, a reference input, and a comparator output, wherein one of a source or a drain of the reset transistor is connected to the comparator output. In this manner, an active reset method may be incorporated in the image sensor. | 1. An image sensor, comprising:
a pixel circuit including a reset transistor and an amplification transistor, the pixel circuit configured to output a pixel signal; an amplifier including a pixel input and an amplifier output; a capacitor connected to one of a source or a drain of the reset transistor and to the amplification transistor; a first signal line connected to the pixel circuit and to the amplifier; and a second signal line connected to the other of the source or the drain of the reset transistor and to the amplifier output. 2. The image sensor according to claim 1, wherein the capacitor is connected to a gate of the amplification transistor. 3. The image sensor according to claim 2, wherein the amplifier is configured to receive the pixel signal at the pixel input and to receive a reference signal at a reference input. 4. The image sensor according to claim 1, further comprising a correlated double sampling (CDS) circuit, wherein:
the CDS circuit is configured to perform a P-phase measurement corresponding to a reset level of the pixel circuit, and the CDS circuit is configured to subsequently perform a D-phase measurement corresponding to a data level of the pixel circuit. 5. The image sensor according to claim 4, wherein the CDS circuit is configured to calculate a difference between the P-phase measurement and the D-phase measurement. 6. The image sensor according to claim 1, wherein a gate of the reset transistor is configured to receive a high reset level, an intermediate reset level, and a low reset level in this order. 7. The image sensor according to claim 6, wherein the high reset level and the intermediate reset level are higher than a gate threshold of the reset transistor, and the low reset level is lower than the gate threshold of the reset transistor. 8. The image sensor according to claim 1, wherein the pixel circuit further includes a photodiode and a transfer transistor. 9. The image sensor according to claim 8, wherein the pixel circuit further includes a selection transistor. 10. The image sensor according to claim 1, wherein
the pixel circuit is one of a plurality of pixel circuits arranged in a matrix, and the amplifier corresponds to a plurality of columns of the matrix. 11. An image processing method, comprising:
outputting a pixel signal via a first signal line from a pixel circuit, the pixel circuit including a reset transistor, an amplification transistor, and a capacitor connected to one of a source or a drain of the reset transistor and to the amplification transistor; and outputting an amplifier signal via a second signal line from an amplifier, the amplifier including a pixel input and an amplifier output, wherein the second signal line is connected to the other of the source or the drain of the reset transistor. 12. The image processing method according to claim 11, wherein the capacitor is connected to a gate of the amplification transistor. 13. The image processing method according to claim 12, further comprising receiving the pixel signal at the pixel input and receiving a reference signal at a reference input of the amplifier. 14. The image processing method according to claim 11, further comprising:
performing a P-phase measurement corresponding to a reset level of the pixel circuit; and thereafter performing a D-phase measurement corresponding to a data level of the pixel circuit. 15. The image processing method according to claim 14, further comprising calculating a difference between the P-phase measurement and the D-phase measurement. 16. The image processing method according to claim 11, further comprising:
receiving, at a gate of the reset transistor, a high reset level, an intermediate reset level, and a low reset level in this order. 17. The image processing method according to claim 16, wherein the high reset level and the intermediate reset level are higher than a gate threshold of the reset transistor, and the low reset level is lower than the gate threshold of the reset transistor. 18. The image processing method according to claim 11, wherein the pixel circuit further includes a photodiode and a transfer transistor. 19. The image processing method according to claim 18, wherein the pixel circuit further includes a selection transistor. 20. The image processing method according to claim 11, wherein
the pixel circuit is one of a plurality of pixel circuits arranged in a matrix, and the amplifier corresponds to a plurality of columns of the matrix. | 2,600
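Claims 4-5 and 14-15 of this record describe correlated double sampling (P-phase reset-level measurement, then D-phase data-level measurement, then their difference), and claims 6-7 and 16-17 describe the three-level reset-gate waveform. A sketch of both, assuming the D-phase-minus-P-phase sign convention and that the high level exceeds the intermediate level (the claims only fix their order and their relation to the gate threshold):

```python
def cds_sample(p_phase: float, d_phase: float) -> float:
    """Correlated double sampling: take the P-phase (reset-level)
    measurement first, then the D-phase (data-level) measurement, and
    output their difference so the pixel's reset offset cancels."""
    return d_phase - p_phase

def valid_reset_sequence(levels, gate_threshold: float) -> bool:
    """Check the reset-gate waveform: three levels applied in the
    order high, intermediate, low, with the high and intermediate
    levels above the gate threshold and the low level below it."""
    high, intermediate, low = levels
    return (high > intermediate > gate_threshold) and (low < gate_threshold)
```

With simulated readings, `cds_sample(100.0, 340.0)` yields the offset-free signal 240.0.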
10,736 | 10,736 | 16,097,519 | 2,683 | A method and transponder in which at least one first line is formed by at least one first and at least one second antenna configured and operated in the manner of a backscatter, where the antennas are interconnected to one another such that, upon reception of a signal, the antennas scatter the signal back in the manner of a backscatter by a formed emission characteristic. | 1.-13. 14. A transponder configured to form at least one first line of at least two antennas, the transponder comprising:
antennas configured and operated in a backscatter manner, said antennas being functionally interconnected to one another such that, upon receiving a signal, said antennas scatter a signal back in the backscatter manner via a formed radiation characteristic; wherein the functional connection is configured such that individual ones of the antennas are at least one of (i) activated and (ii) deactivated. 15. The transponder as claimed in claim 14, wherein the antennas configured and operated in the backscatter manner scatter the signal back with the same modulation frequency in each case with a phase offset with respect to one another. 16. The transponder as claimed in claim 15, wherein the phase offset comprises a controllable phase offset. 17. The transponder as claimed in claim 14, wherein each of the antennas configured and operated in the backscatter manner scatter the signal back with the same phase offset with a modulation frequency with respect to one another. 18. The transponder as claimed in claim 17, wherein the modulation frequency comprises a controllable modulation frequency. 19. The transponder as claimed in claim 15, wherein each of the antennas configured and operated in the backscatter manner scatter the signal back with the same phase offset with a modulation frequency with respect to one another. 20. The transponder as claimed in claim 18, wherein the modulation frequency comprises a controllable modulation frequency. 21. The transponder as claimed in claim 17, wherein at least one third antenna functionally configured and operated in the backscatter manner is functionally and locally arranged such that a first, a second and the at least one third antennas span an area. 22. The transponder as claimed in claim 14, further comprising:
a control device connected to the antennas. 23. The transponder as claimed in claim 21, wherein the control device is implemented at least partially via logic circuits. 24. The transponder as claimed in claim 21, wherein the control device is formed as a logic circuit. 25. The transponder as claimed in claim 24, wherein the logic circuit comprises a programmable logic circuit. 26. The transponder as claimed in claim 21, wherein the control device is functionally connected to a memory device. 27. The transponder as claimed in claim 14, wherein the transponder comprises a Radio-Frequency Identification (RFID) transponder. 28. A method for operating a transponder in which at least one first line of at least one first and at least one second antenna is formed, the method comprising:
configuring and operating antennas in a backscatter manner, said antennas being functionally interconnected to one another such that, upon receiving a signal, said antennas scatter a signal back in the backscatter manner via a formed radiation characteristic; and controlling the transponder such that individual antennas are at least one of (i) activated and (ii) deactivated. 29. The method as claimed in claim 28, wherein the antennas configured and operated in the backscatter manner, upon receiving a signal, each scatter the signal back in the backscatter manner with the same modulation frequency with a phase offset with respect to one another. 30. The method as claimed in claim 29, wherein the phase offset is a controllable phase offset. 31. The method as claimed in claim 28, wherein the antennas configured and operated in the backscatter manner, upon receiving a signal, each scatter the signal back in the backscatter manner with the same phase offset with a modulation frequency with respect to one another. 32. The method as claimed in claim 31, wherein the modulation frequency is a controllable modulation frequency. 33. The method as claimed in claim 30, wherein said control is performed such that at least one of (i) activation and (ii) deactivation is alternately performed such that a differing set of antennas is respectively actively operated for a period. 34. The method as claimed in claim 32, wherein said control is performed such that at least one of (i) activation and (ii) deactivation is alternately performed such that a differing set of antennas is respectively actively operated for a period. 35. The method as claimed in claim 33, wherein said control is performed such that sets of antennas operated by at least one of (i) the activation and (ii) deactivation are cyclically repeated. 36. 
The method as claimed in claim 34, wherein said control is performed such that sets of antennas operated by at least one of (i) the activation and (ii) deactivation are cyclically repeated. 37. The method as claimed in claim 28, wherein an adaptation is performed such that individual antennas are at least one of (i) activated and (ii) deactivated based on a determination of transmission power. 38. The method as claimed in claim 28, wherein the transponder is a Radio-Frequency Identification (RFID) transponder. | 2,600
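Method claims 33-36 of this record require that differing sets of backscatter antennas be alternately activated, one set per period, with the sequence of sets cyclically repeated. A sketch of such a schedule, assuming (as a simplification the claims do not require) that the sets are a fixed-size partition of the antenna list:

```python
from itertools import cycle

def antenna_set_schedule(antennas, set_size, periods):
    """Cyclically activate differing subsets of backscatter antennas:
    for each period a different set is actively operated, and the
    sequence of sets repeats once all sets have been used."""
    # Partition the antennas into consecutive fixed-size sets.
    sets = [antennas[i:i + set_size] for i in range(0, len(antennas), set_size)]
    # One active set per period, cycling through the sets repeatedly.
    return [active for _, active in zip(range(periods), cycle(sets))]
```

For four antennas in sets of two over four periods, the two sets alternate and the cycle repeats, matching the cyclic repetition of claims 35-36.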
10,737 | 10,737 | 14,751,018 | 2,632 | Certain aspects of the present disclosure provide methods and apparatus for processing signals using separate frequency-shifted antennas in a radio frequency front-end (RFFE) of a wireless communication device. One example apparatus includes a transceiver; a first antenna configured to support communications in a first frequency range; a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range; a first circuit block coupled to the transceiver and configured to process one or more first signals for transmission over a first bandwidth via the first antenna; and a second circuit block coupled to the transceiver and configured to process one or more second signals for reception over a second bandwidth via the second antenna, wherein the second bandwidth at least partially overlaps the first bandwidth. | 1. An apparatus for wireless communications, comprising:
a transceiver; a first antenna configured to support communications in a first frequency range; a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range; a first circuit block coupled to the transceiver and configured to process one or more first signals for transmission over a first bandwidth via the first antenna; and a second circuit block coupled to the transceiver and configured to process one or more second signals for reception over a second bandwidth via the second antenna, wherein the second bandwidth at least partially overlaps the first bandwidth. 2. The apparatus of claim 1, wherein the first bandwidth is in the first frequency range and wherein the second bandwidth is in the second frequency range. 3. The apparatus of claim 1, wherein the first bandwidth equals the second bandwidth. 4. The apparatus of claim 1, wherein at least one of the first circuit block or the second circuit block supports carrier aggregation using multiple component carriers in at least one of the first bandwidth or the second bandwidth, respectively. 5. The apparatus of claim 1, wherein the first and second bandwidths range between 1400 MHz and 2800 MHz. 6. The apparatus of claim 1, wherein:
the first bandwidth comprises three component carriers for the transmission; and the first circuit block comprises a triplexer configured to process the one or more first signals for the transmission based on carrier aggregation using the three component carriers. 7. The apparatus of claim 1, wherein:
the second bandwidth comprises three component carriers for the reception; and the second circuit block comprises a triplexer configured to process the one or more second signals for the reception based on carrier aggregation using the three component carriers. 8. The apparatus of claim 1, further comprising:
a third circuit block coupled to the transceiver and configured to process one or more third signals for at least one of transmission or reception over a third bandwidth via the first antenna, the third bandwidth having frequencies lower than frequencies of the first bandwidth; and a fourth circuit block coupled to the transceiver and configured to process one or more fourth signals for at least one of transmission or reception over a fourth bandwidth via the second antenna, the fourth bandwidth having frequencies higher than frequencies of the second bandwidth. 9. The apparatus of claim 8, wherein at least one of the third circuit block or the fourth circuit block supports carrier aggregation using multiple component carriers in at least one of the third bandwidth or the fourth bandwidth, respectively. 10. The apparatus of claim 8, further comprising:
a first diplexer coupled to the first antenna and configured to interface the first circuit block and the third circuit block with the first antenna; and a second diplexer coupled to the second antenna and configured to interface the second circuit block and the fourth circuit block with the second antenna. 11. The apparatus of claim 8, wherein:
the third bandwidth ranges between 700 MHz and 900 MHz; and the fourth bandwidth ranges between 3.4 GHz and 6 GHz. 12. The apparatus of claim 8, wherein:
the third bandwidth comprises two component carriers for at least one of the transmission or the reception; and the third circuit block comprises a quadplexer configured to process the one or more third signals for the at least one of the transmission or the reception based on carrier aggregation using the two component carriers. 13. The apparatus of claim 8, wherein the fourth circuit block comprises:
one or more filters configured for at least one of transmission or reception over the fourth bandwidth; and a switching circuit configured to switch, within the fourth bandwidth, the at least one of the transmission or the reception from Ultra High frequency Band (UHB)-based communication to Long Term Evolution/Unlicensed (LTEU) Time Division Duplex (TDD)-based communication. 14. The apparatus of claim 13, wherein the fourth circuit block further comprises one or more passive duplexers configured to split the LTEU TDD-based communication and the UHB-based communication over bands of the fourth bandwidth. 15. The apparatus of claim 1, wherein the first antenna is disposed at a first side of the apparatus and wherein the second antenna is placed at a second side of the apparatus opposite the first side. 16. The apparatus of claim 1, wherein the first antenna and the second antenna have different sizes. 17. The apparatus of claim 1, further comprising:
a third antenna configured to support communications in the first frequency range; a fourth antenna configured to support communications in the second frequency range; a third circuit block that replicates the first circuit block coupled to the transceiver and configured to process one or more third signals for transmission over the first bandwidth via the third antenna; and a fourth circuit block that replicates the second circuit block coupled to the transceiver and configured to process one or more fourth signals for reception over the second bandwidth via the fourth antenna. 18. A method for wireless communications, comprising:
processing, by a first circuit block coupled to a transceiver, one or more first signals for transmission over a first bandwidth via a first antenna configured to support communications in a first frequency range; and processing, by a second circuit block coupled to the transceiver, one or more second signals for reception over a second bandwidth via a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range and wherein the second bandwidth at least partially overlaps the first bandwidth. 19. The method of claim 18, wherein the first bandwidth is in the first frequency range and wherein the second bandwidth is in the second frequency range. 20. The method of claim 18, wherein the first bandwidth equals the second bandwidth. 21. The method of claim 18, wherein at least one of the first circuit block or the second circuit block supports carrier aggregation using multiple component carriers in at least one of the first bandwidth or the second bandwidth, respectively. 22. The method of claim 18, wherein the first and second bandwidths range between 1400 MHz and 2800 MHz. 23. The method of claim 18, wherein:
the first bandwidth comprises three component carriers for the transmission; and the first circuit block comprises a triplexer configured to process the one or more first signals for the transmission based on carrier aggregation using the three component carriers. 24. The method of claim 18, wherein:
the second bandwidth comprises three component carriers for the reception; and the second circuit block comprises a triplexer configured to process the one or more second signals for the reception based on carrier aggregation using the three component carriers. 25. The method of claim 18, further comprising:
processing, by a third circuit block coupled to the transceiver, one or more third signals for at least one of transmission or reception over a third bandwidth via the first antenna, the third bandwidth having frequencies lower than frequencies of the first bandwidth; and processing, by a fourth circuit block coupled to the transceiver, one or more fourth signals for at least one of transmission or reception over a fourth bandwidth via the second antenna, the fourth bandwidth having frequencies higher than frequencies of the second bandwidth. 26. The method of claim 25, wherein at least one of the third circuit block or the fourth circuit block supports carrier aggregation using multiple component carriers in at least one of the third bandwidth or the fourth bandwidth, respectively. 27. The method of claim 25, wherein:
the third bandwidth ranges between 700 MHz and 900 MHz; and the fourth bandwidth ranges between 3.4 GHz and 6 GHz. 28. The method of claim 25, wherein:
the third bandwidth comprises two component carriers for at least one of the transmission or the reception; and the third circuit block comprises a quadplexer configured to process the one or more third signals for the at least one of the transmission or the reception based on carrier aggregation using the two component carriers. 29. The method of claim 18, further comprising:
processing, by a third circuit block that replicates the first circuit block coupled to the transceiver, one or more third signals for transmission over the first bandwidth via a third antenna configured to support communications in the first frequency range; and processing, by a fourth circuit block that replicates the second circuit block coupled to the transceiver, one or more fourth signals for reception over the second bandwidth via a fourth antenna configured to support communications in the second frequency range. 30. An apparatus for wireless communications, comprising:
means for processing, coupled to a transceiver of the apparatus, one or more first signals for transmission over a first bandwidth via a first antenna configured to support communications in a first frequency range; and means for processing, coupled to the transceiver, one or more second signals for reception over a second bandwidth via a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range and wherein the second bandwidth at least partially overlaps the first bandwidth. | Certain aspects of the present disclosure provide methods and apparatus for processing signals using separate frequency-shifted antennas in a radio frequency front-end (RFFE) of a wireless communication device. One example apparatus includes a transceiver; a first antenna configured to support communications in a first frequency range; a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range; a first circuit block coupled to the transceiver and configured to process one or more first signals for transmission over a first bandwidth via the first antenna; and a second circuit block coupled to the transceiver and configured to process one or more second signals for reception over a second bandwidth via the second antenna, wherein the second bandwidth at least partially overlaps the first bandwidth.1. An apparatus for wireless communications, comprising:
a transceiver; a first antenna configured to support communications in a first frequency range; a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range; a first circuit block coupled to the transceiver and configured to process one or more first signals for transmission over a first bandwidth via the first antenna; and a second circuit block coupled to the transceiver and configured to process one or more second signals for reception over a second bandwidth via the second antenna, wherein the second bandwidth at least partially overlaps the first bandwidth. 2. The apparatus of claim 1, wherein the first bandwidth is in the first frequency range and wherein the second bandwidth is in the second frequency range. 3. The apparatus of claim 1, wherein the first bandwidth equals the second bandwidth. 4. The apparatus of claim 1, wherein at least one of the first circuit block or the second circuit block supports carrier aggregation using multiple component carriers in at least one of the first bandwidth or the second bandwidth, respectively. 5. The apparatus of claim 1, wherein the first and second bandwidths range between 1400 MHz and 2800 MHz. 6. The apparatus of claim 1, wherein:
the first bandwidth comprises three component carriers for the transmission; and the first circuit block comprises a triplexer configured to process the one or more first signals for the transmission based on carrier aggregation using the three component carriers. 7. The apparatus of claim 1, wherein:
the second bandwidth comprises three component carriers for the reception; and the second circuit block comprises a triplexer configured to process the one or more second signals for the reception based on carrier aggregation using the three component carriers. 8. The apparatus of claim 1, further comprising:
a third circuit block coupled to the transceiver and configured to process one or more third signals for at least one of transmission or reception over a third bandwidth via the first antenna, the third bandwidth having frequencies lower than frequencies of the first bandwidth; and a fourth circuit block coupled to the transceiver and configured to process one or more fourth signals for at least one of transmission or reception over a fourth bandwidth via the second antenna, the fourth bandwidth having frequencies higher than frequencies of the second bandwidth. 9. The apparatus of claim 8, wherein at least one of the third circuit block or the fourth circuit block supports carrier aggregation using multiple component carriers in at least one of the third bandwidth or the fourth bandwidth, respectively. 10. The apparatus of claim 8, further comprising:
a first diplexer coupled to the first antenna and configured to interface the first circuit block and the third circuit block with the first antenna; and a second diplexer coupled to the second antenna and configured to interface the second circuit block and the fourth circuit block with the second antenna. 11. The apparatus of claim 8, wherein:
the third bandwidth ranges between 700 MHz and 900 MHz; and the fourth bandwidth ranges between 3.4 GHz and 6 GHz. 12. The apparatus of claim 8, wherein:
the third bandwidth comprises two component carriers for at least one of the transmission or the reception; and the third circuit block comprises a quadplexer configured to process the one or more third signals for the at least one of the transmission or the reception based on carrier aggregation using the two component carriers. 13. The apparatus of claim 8, wherein the fourth circuit block comprises:
one or more filters configured for at least one of transmission or reception over the fourth bandwidth; and a switching circuit configured to switch, within the fourth bandwidth, the at least one of the transmission or the reception from Ultra High frequency Band (UHB)-based communication to Long Term Evolution/Unlicensed (LTEU) Time Division Duplex (TDD)-based communication. 14. The apparatus of claim 13, wherein the fourth circuit block further comprises one or more passive duplexers configured to split the LTEU TDD-based communication and the UHB-based communication over bands of the fourth bandwidth. 15. The apparatus of claim 1, wherein the first antenna is disposed at a first side of the apparatus and wherein the second antenna is placed at a second side of the apparatus opposite the first side. 16. The apparatus of claim 1, wherein the first antenna and the second antenna have different sizes. 17. The apparatus of claim 1, further comprising:
a third antenna configured to support communications in the first frequency range; a fourth antenna configured to support communications in the second frequency range; a third circuit block that replicates the first circuit block coupled to the transceiver and configured to process one or more third signals for transmission over the first bandwidth via the third antenna; and a fourth circuit block that replicates the second circuit block coupled to the transceiver and configured to process one or more fourth signals for reception over the second bandwidth via the fourth antenna. 18. A method for wireless communications, comprising:
processing, by a first circuit block coupled to a transceiver, one or more first signals for transmission over a first bandwidth via a first antenna configured to support communications in a first frequency range; and processing, by a second circuit block coupled to the transceiver, one or more second signals for reception over a second bandwidth via a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range and wherein the second bandwidth at least partially overlaps the first bandwidth. 19. The method of claim 18, wherein the first bandwidth is in the first frequency range and wherein the second bandwidth is in the second frequency range. 20. The method of claim 18, wherein the first bandwidth equals the second bandwidth. 21. The method of claim 18, wherein at least one of the first circuit block or the second circuit block supports carrier aggregation using multiple component carriers in at least one of the first bandwidth or the second bandwidth, respectively. 22. The method of claim 18, wherein the first and second bandwidths range between 1400 MHz and 2800 MHz. 23. The method of claim 18, wherein:
the first bandwidth comprises three component carriers for the transmission; and the first circuit block comprises a triplexer configured to process the one or more first signals for the transmission based on carrier aggregation using the three component carriers. 24. The method of claim 18, wherein:
the second bandwidth comprises three component carriers for the reception; and the second circuit block comprises a triplexer configured to process the one or more second signals for the reception based on carrier aggregation using the three component carriers. 25. The method of claim 18, further comprising:
processing, by a third circuit block coupled to the transceiver, one or more third signals for at least one of transmission or reception over a third bandwidth via the first antenna, the third bandwidth having frequencies lower than frequencies of the first bandwidth; and processing, by a fourth circuit block coupled to the transceiver, one or more fourth signals for at least one of transmission or reception over a fourth bandwidth via the second antenna, the fourth bandwidth having frequencies higher than frequencies of the second bandwidth. 26. The method of claim 25, wherein at least one of the third circuit block or the fourth circuit block supports carrier aggregation using multiple component carriers in at least one of the third bandwidth or the fourth bandwidth, respectively. 27. The method of claim 25, wherein:
the third bandwidth ranges between 700 MHz and 900 MHz; and the fourth bandwidth ranges between 3.4 GHz and 6 GHz. 28. The method of claim 25, wherein:
the third bandwidth comprises two component carriers for at least one of the transmission or the reception; and the third circuit block comprises a quadplexer configured to process the one or more third signals for the at least one of the transmission or the reception based on carrier aggregation using the two component carriers. 29. The method of claim 18, further comprising:
processing, by a third circuit block that replicates the first circuit block coupled to the transceiver, one or more third signals for transmission over the first bandwidth via a third antenna configured to support communications in the first frequency range; and processing, by a fourth circuit block that replicates the second circuit block coupled to the transceiver, one or more fourth signals for reception over the second bandwidth via a fourth antenna configured to support communications in the second frequency range. 30. An apparatus for wireless communications, comprising:
means for processing, coupled to a transceiver of the apparatus, one or more first signals for transmission over a first bandwidth via a first antenna configured to support communications in a first frequency range; and means for processing, coupled to the transceiver, one or more second signals for reception over a second bandwidth via a second antenna configured to support communications in a second frequency range different than the first frequency range, wherein the second frequency range partially overlaps the first frequency range and wherein the second bandwidth at least partially overlaps the first bandwidth. | 2,600 |
10,738 | 10,738 | 16,726,236 | 2,698 | A device for supporting a smartphone having a camera or supporting an action camera has a holder configured for supporting a smartphone having a camera or supporting an action camera, a handle configured to be held by a user, and components for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle. | 1. A device for supporting a smartphone having a camera or supporting an action camera, comprising
a holder supporting the smartphone having the camera that provides for shooting and recording video or supporting the action camera that provides for shooting and recording video; a handle configured to be held by a user and connected with said holder; and means provided in the handle for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, so that the video recording and the receiving of the audio signals can be actuated at the same time, and therefore the video recorded by the camera includes the received audio signals. 2. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising at least two motors arranged between the handle and the holder and operative for moving the holder and thereby moving the smartphone with a camera held in the holder or moving a camera held in the holder relative to the handle for stabilizing a position of the smartphone with a camera or a position of the camera. 3. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising means for selecting the outside audio signal source for receiving the audio signals from the selected outside audio signal source and reproducing the received audio signal in the vicinity of the handle; and means for controlling a position of the smartphone having the camera or a position of the action camera, wherein the handle is elongated, said means for selecting and said means for controlling being both located on the handle at two different locations which are spaced from each other as considered in a direction of elongation of the handle. 4. (canceled) 5. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said means include a screen display for showing wavelengths of the audio signals, adjusting buttons for adjusting a volume of received radio signals up and down correspondingly, a knob for switching between AM radio waves and FM radio waves of the audio signals, a knob for dialing a corresponding wavelength within the AM and FM ranges of the audio signals, and a knob for adjusting a volume of a speaker which generates corresponding audio signals. 6. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said means include a receiver selected from the group consisting of an AM receiver and an FM receiver, a transmitter selected from the group consisting of an AM transmitter and an FM transmitter, and a speaker. 7. (canceled) 8. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 6, and further comprising at least two motors arranged between the handle and the holder and operative for moving the holder and thereby the smartphone with a camera held in the holder or a camera held in the holder relative to the handle for stabilizing a position of the smartphone with a camera or a position of the camera. 9. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 7, and further comprising at least two motors arranged between the handle and the holder and operative for moving the holder and thereby the smartphone with a camera held in the holder or a camera held in the holder relative to the handle for stabilizing a position of the smartphone with a camera or a position of the camera. 10. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said means include an antenna for transmitting and receiving radio signals, a receiver circuitry and a transmitter circuitry both connected to a processor circuitry, and a control processor which controls an operation of said means, a microphone, a keypad, and a display. 11. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said handle is provided with an opening for attaching the device to a support and having an inner thread for screwing in a threaded end of the support, wherein the opening is provided in a part connectable with a remaining part of the handle by a threaded connection. 12. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a part which has a threaded portion provided with an outer thread and screwable into a lower part of the handle which is hollow and provided with an inner thread, and the handle having an inner opening which is exposed when the part is unscrewed and removed from a remaining part of the handle and which is used for inserting of a battery. 13. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 2, wherein the handle is connected with remaining components of the device via a multi-part telescoping connecting unit composed of a plurality of handle elements which are telescopingly connectable with one another and are displaceable relative to one another, so that by extending or retracting of the telescoping connecting unit a distance of the holder from the handle can be reduced or increased. 14. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a tilting motor and a rolling motor applying correspondingly a tilting movement and a rolling movement to the holder, and a foldable arm having one end connected with the tilting motor and another opposite end which is pivotably connected with the pan motor so that the device can be folded to occupy a substantially small space, which can be convenient for its storage and transportation. 15. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a rolling motor, a tilting motor, and a pan or yaw motor for imparting corresponding movements to the holder; a multi-part telescoping connecting unit composed of a plurality of elements which are telescopingly connectable with one another and are displaceable relative to one another and connects the handle with the pan or yaw motor so that by extending or retracting of the telescoping connecting unit a distance from the holder to the handle can be reduced or increased; and a foldable arm having one end connected with the tilting motor and another opposite end pivotably connected with the pan motor, so that the device can be folded to occupy a substantially small space which can be convenient for its storage and transportation. 16. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a rolling motor and a tilting motor for imparting corresponding movements to the holder and thereby to a smartphone or to an action camera held in the holder, wherein the handle has a substantially rectangular cross section and an additional portion provided with said components which allow receiving audio signals from audio sources and reproducing them in the vicinity of the handle and therefore in the vicinity of a supported smartphone having a camera or of a supported camera before, during and after recording by a supported camera. 17. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein the holder includes a housing which has an additional portion provided with components which allow receiving audio signals from audio sources and reproducing them in the vicinity of the handle and therefore in the vicinity of a supported smartphone having a camera or of a supported camera before, during and after recording by the camera, the housing being supported on a tripod having adjustable legs, connected with the housing via a ball joint and fixed by a screw in its desired position. 18. A device for supporting a smartphone having a camera or an action camera as defined in claim 1, and further comprising a rolling motor, a tilting motor, and a pan or yaw motor for imparting corresponding movements to the holder, and arms connecting the corresponding motors with one another, wherein the handle is provided with an on/off power button for turning the operation of the device on or off, a joystick button for controlling the movement of the smartphone or of the action camera, a button for operating corresponding modes, and a button for video and photo recording, the handle capable of being supported on a tripod. 19. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a rolling motor, a tilting motor, and a pan or yaw motor for imparting corresponding movements to the holder, arms connecting the corresponding motors with one another, the handle being provided with an on/off power button for turning the operation of the device on or off, a joystick button for controlling the movement of the smartphone or the action camera, a button for operating corresponding modes, and a button for video and photo recording, the handle having a built-in telescoping extension support composed of interconnected components which are movable toward one another and away from one another to reduce or to increase a length of the support, for convenience of user's operation with the smartphone having a camera or with an action camera received in the holder. 20. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein the handle has an additional portion provided with the components allowing receiving audio signals from audio sources and reproducing them in the vicinity of the device and therefore in the vicinity of a supported smartphone having a camera or of a supported camera before, during and after recording by a supported camera, the handle being telescoping and having a plurality of individual portions telescopingly connected with each other to increase or reduce the length of the handle, with one of the individual portions which is the closest to the holder having an element which is turnably connected with an element supporting the holder. 21. 
A device for supporting a smartphone having a camera or supporting an action camera, comprising a holder supporting the smartphone having a camera or supporting the action camera; a handle configured to be held by a user and connected with said holder; and means for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, and further comprising a casing supporting a camera and forming the holder, the casing having an objective and a viewfinder display and having a portion provided with the components which allow receiving the audio signals from the audio source and reproducing them in the vicinity of the device during recording by the supported camera. 22. A device for supporting a smartphone having a camera or an action camera as defined in claim 1, wherein the handle has an additional portion provided with the components which allow receiving audio signals from audio sources and reproducing them in the vicinity of the device and designed for supporting the camera which is a large camera. 23. 
A device for supporting a smartphone having a camera or supporting an action camera, comprising a holder supporting the smartphone having a camera or supporting the action camera; a handle configured to be held by a user and connected with said holder; and means for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, wherein the device is constructed as a pocket-size device with a casing which forms the handle of the device, the casing having a viewfinder with a touch screen and an objective, a rolling motor, a tilting motor, and a pan or yaw motor for imparting corresponding movements to an objective of the camera and having a portion provided with the components which allow receiving the audio signals from the audio source and reproducing them in the vicinity of the device, and another portion with a thread for connecting to a support. 24. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 3, wherein said means for selecting the outside audio signal source for receiving the audio signals from the selected outside audio signal source and reproducing the received audio signal in the vicinity of the handle are located on the handle between said holder and said means for controlling a position of the smartphone having the camera or a position of the action camera. 25. A device for supporting a smartphone having a camera or supporting an action camera, comprising
a holder supporting the smartphone having a camera that provides for shooting and recording a video or supporting an action camera that provides for shooting and recording a video; a handle configured to be held by a user and connected with said holder; and means provided in the handle for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, thereby allowing the recording of the video and the receiving of the audio signals at the same time. 26. A device as defined in claim 25, and further comprising a speaker which reproduces the received audio signals, adjusting buttons for adjusting up or down the volume of the received audio signals reproduced in the speaker, and a separate knob for adjusting the volume of the speaker which reproduces the audio signals. | A device for supporting a smartphone having a camera or supporting an action camera has a holder configured for supporting a smartphone having a camera or supporting an action camera, a handle configured to be held by a user, and components for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle.1. A device for supporting a smartphone having a camera or supporting an action camera, comprising
a holder supporting the smartphone having the camera that provides for shooting and recording video or supporting the action camera that provides for shooting and recording video; a handle configured to be held by a user and connected with said holder; and means provided in the handle for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, so that the video recording and the receiving of the audio signals can be actuated at the same time, and therefore the video recorded by the camera includes the received audio signals. 2. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising at least two motors arranged between the handle and the holder and operative for moving the holder and thereby moving the smartphone with a camera held in the holder or moving a camera held in the holder relative to the handle for stabilizing a position of the smartphone with a camera or a position of the camera. 3. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising means for selecting the outside audio signal source for receiving the audio signals from the selected outside audio signal source and reproducing the received audio signal in the vicinity of the handle; and means for controlling a position of the smartphone having the camera or a position of the action camera, wherein the handle is elongated, said means for selecting and said means for controlling being both located on the handle at two different locations which are spaced from each other as considered in a direction of elongation of the handle. 4. (canceled) 5. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said means include a screen display for showing wavelengths of the audio signals, adjusting buttons for adjusting a volume of received radio signals up and down correspondingly, a knob for switching between AM radio waves and FM radio waves of the audio signals, a knob for dialing a corresponding wavelength within an AM and FM ranges of the audio signals, and a knob for adjusting a volume of a speaker which generates corresponding audio signals. 6. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said means include a receiver selected from the group consisting of an AM receiver and an FM receiver, a transmitter selected from the group consisting of an AM transmitter and an FM transmitter, and a speaker. 7. (canceled) 8. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 6, and further comprising at least two motors arranged between the handle and the holder and operative for moving the holder and thereby the smartphone with a camera held in the holder or a camera held in the holder relative to the handle for stabilizing a position of the smartphone with a camera or a position of the camera. 9. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 7, and further comprising at least two motors arranged between the handle and the holder and operative for moving the holder and thereby the smartphone with a camera held in the holder or a camera held in the holder relative to the handle for stabilizing a position of the smartphone with a camera or a position of the camera. 10. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said means include an antenna for transmitting and receiving radio signals, a receiver circuitry and a transmitter circuitry both connected to a processor circuitry, and a control processor which controls an operation of said means, a microphone, a keypad, and a display. 11. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein said handle is provided with an opening for attaching the device to a support and having an inner thread for screwing in a threaded end of the support, the opening being provided in a part connectable with a remaining part of the handle by a threaded connection. 12. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a part which has a threaded portion provided with an outer thread and screwable into a lower part of the handle which is hollow and provided with an inner thread, and the handle having an inner opening which is exposed when the part is unscrewed and removed from a remaining part of the handle and which is used for inserting of a battery. 13. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 2, wherein the handle is connected with remaining components of the device via a multi-part telescoping connecting unit composed of a plurality of handle elements which are telescopingly connectable with one another and are displaceable relative to one another, so that by extending or retracting of the telescoping connecting unit a distance of the holder from the handle can be reduced or increased. 14. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a tilting motor and a rolling motor applying correspondingly a tilting movement and a rolling movement to the holder, and a foldable arm having one end connected with the tilting motor and another opposite end which is pivotably connected with the pan motor so that the device can be folded to occupy a substantially small space, which can be convenient for its storage and transportation. 15. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a rolling motor, a tilting motor, and a pan or yaw motor for imparting corresponding movements to the holder; a multi-part telescoping connecting unit composed of a plurality of elements which are telescopingly connectable with one another and are displaceable relative to one another and connects the handle with the pan or yaw motor so that by extending or retracting of the telescoping connecting unit a distance from the holder to the handle can be reduced or increased; and a foldable arm having one end connected with the tilting motor and another opposite end pivotably connected with the pan motor, so that the device can be folded to occupy a substantially small space which can be convenient for its storage and transportation. 16. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a rolling motor and a tilting motor for imparting corresponding movements to the holder and thereby to a smartphone or to an action camera held in the holder, wherein the handle has a substantially rectangular cross section and an additional portion provided with said components which allow receiving audio signals from audio sources and reproducing them in the vicinity of the handle and therefore in the vicinity of a supported smartphone having a camera or of a supported camera before, during and after recording by a supported camera. 17. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein the holder includes a housing which has an additional portion provided with components which allow receiving audio signals from audio sources and reproducing them in the vicinity of the handle and therefore in the vicinity of a supported smartphone having a camera or of a supported camera before, during and after recording by the camera, the housing being supported on a tripod having adjustable legs, connected with the housing via a ball joint and fixed by a screw in its desired position. 18. A device for supporting a smartphone having a camera or an action camera as defined in claim 1, and further comprising a rolling motor, a tilting motor, and pan or yaw motor for imparting corresponding movements to the holder, and arms connecting the corresponding motors with one another, wherein the handle is provided with an on/off power button for turning the operation of the device on or off, a joystick button for controlling the movement of the smartphone or of the action camera, a button for operating corresponding modes, and a button for video and photo recording, the handle capable of being supported on a tripod. 19. 
A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, and further comprising a rolling motor, a tilting motor, and pan or yaw motor for imparting corresponding movements to the holder, arms connecting the corresponding motors with one another, the handle being provided with an on/off power button for turning the operation of the device on or off, a joystick button for controlling the movement of the smartphone or the action camera, a button for operating corresponding modes, and a button for video and photo recording, the handle having a built in telescoping extension support composed of interconnected components which are movable toward one another and away from one another to reduce or to increase a length of the support, for convenience of user's operation with the smartphone having a camera or with an action camera received in the holder. 20. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 1, wherein the handle has an additional portion provided with the components allowing receiving audio signals from audio sources and reproducing them in the vicinity of the device and therefore in the vicinity of a supported smartphone having a camera or of a supported camera before, during and after recording by a supported camera, the handle being telescoping and has a plurality of individual portions telescopingly connected with each other to increase or reduce the length of the handle, with one of the individual portions which is the closest to the holder having an element which is turnably connected with an element supporting the holder. 21. 
A device for a smartphone having a camera or supporting an action camera, comprising a holder supporting the smartphone having a camera or supporting the action camera; a handle configured to be held by a user and connected with said holder; and means for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, and further comprising a casing supporting a camera and forming the holder, the casing having an objective and a viewfinder display and having a portion provided with the components which allow receiving the audio signals from the audio source and reproducing them in the vicinity of the device during recording by the supported camera. 22. A device for supporting a smartphone having a camera or an action camera as defined in claim 1, wherein the handle has an additional portion and provided with the components which allow receiving audio signals from audio sources and reproducing them in the vicinity of the device and designed for supporting of the camera which is a large camera. 23. 
A device for supporting a smartphone having a camera or supporting an action camera, comprising a holder supporting the smartphone having a camera or supporting the action camera; a handle configured to be held by a user and connected with said holder; and means for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, wherein the device is constructed as a pocket-size device with a casing which forms the handle of the device, the casing having a viewfinder with a touch screen and an objective, a rolling motor, a tilting motor, and pan or yaw motor for imparting corresponding movements to an objective of the camera and having a portion provided with the components which allow receiving the audio signals from the audio source and reproducing them in the vicinity of the device, and another portion with a thread for connecting to a support. 24. A device for supporting a smartphone having a camera or supporting an action camera as defined in claim 3, wherein said means for selecting the outside audio signal source for receiving the audio signals from the selected outside audio signal source and reproducing the received audio signal in the vicinity of the handle are located on the handle between said holder and said means for controlling a position of the smartphone having the camera or a position of the action camera. 25. A device for supporting a smartphone having a camera or supporting an action camera, comprising
a holder supporting the smartphone having a camera that provides for shooting and recording a video or supporting an action camera that provides for shooting and recording a video; a handle configured to be held by a user and connected with said holder; and means provided in the handle for receiving audio signals from an audio signal source and reproducing the received audio signals in the vicinity of the handle during recording by the camera, thereby allowing the recording of the video and the receiving of the audio signals at the same time. 26. A device as defined in claim 25, and further comprising a speaker which reproduces the received audio signals, adjusting buttons for adjusting up or down the volume of the received audio signals reproduced in the speaker, and a separate knob for adjusting the volume of the speaker which reproduces the audio signals. | 2,600 |
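Claims 15 and 18 of the record above recite a rolling motor, a tilting motor, and a pan or yaw motor arranged between the handle and the holder, which amounts to a 3-axis stabilizer. A minimal sketch of the corrective step such an arrangement implies; every name and the gain value here is a hypothetical illustration, not anything specified by the claims:

```python
# Hypothetical 3-axis stabilization step: each motor cancels part of the
# measured deviation of the holder from the desired orientation.
AXES = ("roll", "tilt", "pan")

def correction_step(measured: dict, target: dict, gain: float = 0.5) -> dict:
    """Return per-axis motor commands (degrees) that move the holder
    a fraction `gain` of the way from `measured` toward `target`."""
    return {axis: gain * (target[axis] - measured[axis]) for axis in AXES}
```

For example, with the holder measured at 4 degrees of roll and -2 degrees of tilt and a level target, the step commands -2 degrees of roll and +1 degree of tilt; repeating the step converges the holder toward the target orientation.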
10,739 | 10,739 | 16,150,398 | 2,697 | A method and apparatus for activating a camera is provided herein. During operation a first camera will be activated (set to record). The first camera may be activated by a manual activation, or activated by reception of a particular network identification (ID). The trigger that causes the first camera to activate will also cause the first camera to begin transmitting the network ID to trigger other cameras to record. | 1. A camera comprising:
a radio frequency (RF) transmitter; an RF receiver configured to receive a first network identification (ID) via an over-the-air RF transmission; logic circuitry configured to determine that the first network ID was received and matches a predetermined network ID; and an image sensor that is triggered to begin recording upon the RF receiver receiving the first network ID. 2. The camera of claim 1 wherein the logic circuitry is also configured to:
instruct the transmitter to begin transmitting a second network ID when the first network ID matches the predetermined network ID, wherein reception of the second network ID by other cameras triggers the other cameras to begin recording. 3. The camera of claim 2 wherein the first network ID has a same name as the second network ID. 4. The camera of claim 2 further comprising:
a graphical-user interface (GUI) configured to receive a “stop recording” command from a user;
wherein the logic circuitry is configured to instruct the image sensor to stop gathering image data when the “stop recording” command is received; and
wherein the logic circuitry is configured to instruct the transmitter to stop transmitting the second network ID when the “stop recording” command is received. 5. The camera of claim 1 further comprising:
memory; and
wherein the logic circuitry determines that the first network ID matches the predetermined network ID by accessing the memory and determining if the first network ID matches a network ID stored in the memory. 6. The camera of claim 1 further comprising:
a graphical-user interface (GUI) configured to receive a “stop recording” command from a user; and
wherein the logic circuitry is configured to instruct the image sensor to stop gathering image data when the “stop recording” command is received. 7. The camera of claim 1 wherein the network ID comprises a network name and/or a device name. 8. The camera of claim 1 wherein the logic circuitry is also configured to determine that the first network ID is no longer being received by the RF receiver and wherein the logic circuitry is configured to instruct the image sensor to stop gathering image data when the first network ID is no longer being received by the RF receiver. 9. A method comprising the steps of:
receiving a first network identification (ID) via an over-the-air RF transmission; determining that the first network ID matches a predetermined network ID; triggering a camera to record upon the reception of the first network ID. 10. The method of claim 9 further comprising the step of:
transmitting a second network ID when the first network ID matches the predetermined network ID, wherein reception of the second network ID by cameras causes the cameras to begin gathering image data. 11. The method of claim 10 wherein the first network ID has a same name as the second network ID. 12. The method of claim 10 further comprising the steps of:
receiving a “stop recording” command from a user;
stopping gathering the image data when the “stop recording” command is received; and
stopping the transmitting of the second network ID when the “stop recording” command is received. 13. The method of claim 9 further comprising:
wherein the step of determining that the first network ID matches the predetermined network ID comprises the steps of accessing a memory and determining if the first network ID matches a network ID stored in the memory. 14. The method of claim 9 further comprising the steps of:
receiving a “stop recording” command from a user; and
stopping the gathering of image data when the “stop recording” command is received. 15. The method of claim 9 wherein the network ID comprises a network name and/or a device name. | 2,600 
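Claims 1, 2, and 4 of the record above describe a chained trigger: a camera begins recording when a received network ID matches its predetermined ID, then transmits a second network ID so that other cameras start as well, and a "stop recording" command halts both the sensor and the transmission. A minimal sketch of that state logic; the class and method names are hypothetical, not from the claims:

```python
class TriggeredCamera:
    """Sketch of claims 1-2 and 4: recording starts when a received
    network ID matches the predetermined one, a second ID is then
    transmitted to trigger other cameras, and a stop command ends both."""

    def __init__(self, predetermined_id: str, second_id: str):
        self.predetermined_id = predetermined_id
        self.second_id = second_id
        self.recording = False
        self.transmitting = None  # network ID currently being transmitted, if any

    def on_network_id(self, network_id: str) -> None:
        # Claim 1: the image sensor is triggered only on a matching ID.
        if network_id == self.predetermined_id:
            self.recording = True
            # Claim 2: propagate the trigger to other cameras.
            self.transmitting = self.second_id

    def stop_recording(self) -> None:
        # Claim 4: stop gathering image data and stop transmitting.
        self.recording = False
        self.transmitting = None
```

Per claim 3 the first and second IDs may share the same name, so one manual activation can propagate through an arbitrary chain of such cameras.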
10,740 | 10,740 | 16,227,704 | 2,643 | In some examples, a configurator device maps a configuration attribute received from a wireless device to a credential attribute, the credential attribute to be mapped to a network policy. The configurator device sends the credential attribute to the wireless device, the credential attribute useable by the wireless device to access an access point (AP), and useable by the AP to obtain the network policy to apply to a communication of the wireless device. | 1. A method comprising:
accessing, by a configurator device, a first mapping comprising information that maps between configuration attributes and respective credential attributes; accessing, by the configurator device, a second mapping comprising information that maps between credential attributes and respective network policies; sending, by the configurator device, a first credential attribute to a wireless device, the first credential attribute mapped using the first mapping to a configuration attribute received from the wireless device, and the first credential attribute useable by the wireless device to access an access point (AP); and sending, by the configurator device to the AP, the second mapping for configuring the AP. 2. The method of claim 1, wherein the second mapping sent to the AP by the configurator device is for use by the AP in obtaining, responsive to the first credential attribute received by the AP from the wireless device, a corresponding network policy to apply to a communication of the wireless device, the corresponding network policy mapped to the first credential by the second mapping. 3. The method of claim 1, wherein the configuration attribute from the wireless device comprises a Device Provisioning Protocol (DPP) configuration attribute, and the first credential attribute comprises a DPP Connector attribute. 4. The method of claim 1, wherein the configuration attribute from the wireless device is in a configuration request received from the wireless device, and the first credential attribute sent to the wireless device is in a configuration response sent to the wireless device. 5. The method of claim 1, wherein the sending of the first credential attribute sent to the wireless device is part of a configuration process of the wireless device by the configurator device, and the sending of the second mapping is part of a configuration process of the AP by the configurator device. 6. 
The method of claim 5, wherein the configuration process of the wireless device by the configurator device is performed without an authentication server different from the configurator device. 7. The method of claim 1, wherein a network policy of the network policies is selected from among a communication filtering policy, a quality of service policy, a location-based resource access policy, a time-based resource access policy, and a connection duration policy. 8. The method of claim 1, further comprising:
sending, by the configurator device to the AP, an update of the second mapping. 9. The method of claim 1, wherein the configurator device is a first configurator device, the method further comprising:
configuring, by the first configurator device, a second configurator device to use the second mapping. 10. The method of claim 9, wherein the configuring of the second configurator device further comprises configuring the second configurator device to use a common set of attributes as the first configurator device. 11. The method of claim 9, wherein the configuring of the second configurator device further comprises configuring, by the first configurator device, the second configurator device to use the first mapping. 12. The method of claim 1, further comprising:
providing, to the wireless device, a list of authorized configurator devices, wherein the list of authorized configurator devices includes a scrambling of identities of the authorized configurator devices. 13. The method of claim 1, wherein the configuration attribute from the wireless device describes a property of the wireless device, and the first credential attribute is an attribute for use by the wireless device in gaining connectivity to the AP. 14. The method of claim 1, further comprising:
as part of a configuration process of the AP by the configurator device:
receiving, by the configurator device from the AP, a configuration request including a configuration attribute of the AP,
wherein the sending of the second mapping by the configurator device to the AP is in response to the configuration request received from the AP. 15. A configurator device comprising:
a communication transceiver to communicate with an access point (AP) and a wireless device; and at least one processor configured to:
access a first mapping comprising information that maps between configuration attributes and respective credential attributes;
access a second mapping comprising information that maps between credential attributes and respective network policies;
send a first credential attribute to the wireless device, the first credential attribute mapped using the first mapping to a configuration attribute received from the wireless device, and the first credential attribute useable by the wireless device to access an access point (AP); and
send, through the communication transceiver to the AP, the second mapping for configuring the AP. 16. The configurator device of claim 15, wherein the configuration attribute from the wireless device describes a property of the wireless device, and the first credential attribute is an attribute for use by the wireless device in gaining connectivity to the AP. 17. The configurator device of claim 15, wherein the at least one processor is configured to further:
as part of a configuration process of the AP by the configurator device:
receive, through the communication transceiver from the AP, a configuration request including a configuration attribute of the AP,
wherein the sending of the second mapping by the configurator device to the AP is in response to the configuration request received from the AP. 18. The configurator device of claim 15, wherein the second mapping is for use by the AP in obtaining a corresponding network policy to apply to a communication of the wireless device wirelessly connected to the AP, the obtaining of the corresponding network policy based on mapping, by the AP using the second mapping, the first credential attribute received by the AP from the wireless device to the corresponding network policy. 19. The configurator device of claim 15, wherein the configuration attribute from the wireless device is in a configuration request from the wireless device, and the first credential attribute sent to the wireless device is in a configuration response sent to the wireless device. 20. A non-transitory machine-readable storage medium comprising instructions that upon execution cause a configurator device to:
access a first mapping comprising information that maps between configuration attributes and respective credential attributes; access a second mapping comprising information that maps between credential attributes and respective network policies; send a first credential attribute to the wireless device, the first credential attribute mapped using the first mapping to a configuration attribute received from the wireless device, and the first credential attribute useable by the wireless device to access an access point (AP); and send, to the AP, the second mapping for configuring the AP. | In some examples, a configurator device maps a configuration attribute received from a wireless device to a credential attribute, the credential attribute to be mapped to a network policy. The configurator device sends the credential attribute to the wireless device, the credential attribute useable by the wireless device to access an access point (AP), and useable by the AP to obtain the network policy to apply to a communication of the wireless device.1. A method comprising:
accessing, by a configurator device, a first mapping comprising information that maps between configuration attributes and respective credential attributes; accessing, by the configurator device, a second mapping comprising information that maps between credential attributes and respective network policies; sending, by the configurator device, a first credential attribute to a wireless device, the first credential attribute mapped using the first mapping to a configuration attribute received from the wireless device, and the first credential attribute useable by the wireless device to access an access point (AP); and sending, by the configurator device to the AP, the second mapping for configuring the AP. 2. The method of claim 1, wherein the second mapping sent to the AP by the configurator device is for use by the AP in obtaining, responsive to the first credential attribute received by the AP from the wireless device, a corresponding network policy to apply to a communication of the wireless device, the corresponding network policy mapped to the first credential by the second mapping. 3. The method of claim 1, wherein the configuration attribute from the wireless device comprises a Device Provisioning Protocol (DPP) configuration attribute, and the first credential attribute comprises a DPP Connector attribute. 4. The method of claim 1, wherein the configuration attribute from the wireless device is in a configuration request received from the wireless device, and the first credential attribute sent to the wireless device is in a configuration response sent to the wireless device. 5. The method of claim 1, wherein the sending of the first credential attribute sent to the wireless device is part of a configuration process of the wireless device by the configurator device, and the sending of the second mapping is part of a configuration process of the AP by the configurator device. 6. 
The method of claim 5, wherein the configuration process of the wireless device by the configurator device is performed without an authentication server different from the configurator device. 7. The method of claim 1, wherein a network policy of the network policies is selected from among a communication filtering policy, a quality of service policy, a location-based resource access policy, a time-based resource access policy, and a connection duration policy. 8. The method of claim 1, further comprising:
sending, by the configurator device to the AP, an update of the second mapping. 9. The method of claim 1, wherein the configurator device is a first configurator device, the method further comprising:
configuring, by the first configurator device, a second configurator device to use the second mapping. 10. The method of claim 9, wherein the configuring of the second configurator device further comprises configuring the second configurator device to use a common set of attributes with the first configurator device. 11. The method of claim 9, wherein the configuring of the second configurator device further comprises configuring, by the first configurator device, the second configurator device to use the first mapping. 12. The method of claim 1, further comprising:
providing, to the wireless device, a list of authorized configurator devices, wherein the list of authorized configurator devices includes a scrambling of identities of the authorized configurator devices. 13. The method of claim 1, wherein the configuration attribute from the wireless device describes a property of the wireless device, and the first credential attribute is an attribute for use by the wireless device in gaining connectivity to the AP. 14. The method of claim 1, further comprising:
as part of a configuration process of the AP by the configurator device:
receiving, by the configurator device from the AP, a configuration request including a configuration attribute of the AP,
wherein the sending of the second mapping by the configurator device to the AP is in response to the configuration request received from the AP. 15. A configurator device comprising:
a communication transceiver to communicate with an access point (AP) and a wireless device; and at least one processor configured to:
access a first mapping comprising information that maps between configuration attributes and respective credential attributes;
access a second mapping comprising information that maps between credential attributes and respective network policies;
send a first credential attribute to the wireless device, the first credential attribute mapped using the first mapping to a configuration attribute received from the wireless device, and the first credential attribute useable by the wireless device to access the AP; and
send, through the communication transceiver to the AP, the second mapping for configuring the AP. 16. The configurator device of claim 15, wherein the configuration attribute from the wireless device describes a property of the wireless device, and the first credential attribute is an attribute for use by the wireless device in gaining connectivity to the AP. 17. The configurator device of claim 15, wherein the at least one processor is configured to further:
as part of a configuration process of the AP by the configurator device:
receive, through the communication transceiver from the AP, a configuration request including a configuration attribute of the AP,
wherein the sending of the second mapping by the configurator device to the AP is in response to the configuration request received from the AP. 18. The configurator device of claim 15, wherein the second mapping is for use by the AP in obtaining a corresponding network policy to apply to a communication of the wireless device wirelessly connected to the AP, the obtaining of the corresponding network policy based on mapping, by the AP using the second mapping, the first credential attribute received by the AP from the wireless device to the corresponding network policy. 19. The configurator device of claim 15, wherein the configuration attribute from the wireless device is in a configuration request from the wireless device, and the first credential attribute sent to the wireless device is in a configuration response sent to the wireless device. 20. A non-transitory machine-readable storage medium comprising instructions that upon execution cause a configurator device to:
access a first mapping comprising information that maps between configuration attributes and respective credential attributes; access a second mapping comprising information that maps between credential attributes and respective network policies; send a first credential attribute to a wireless device, the first credential attribute mapped using the first mapping to a configuration attribute received from the wireless device, and the first credential attribute useable by the wireless device to access an access point (AP); and send, to the AP, the second mapping for configuring the AP. | 2,600 |
10,741 | 10,741 | 15,676,954 | 2,625 | An electronic device displays one or more views. A first view includes a plurality of gesture recognizers. The plurality of gesture recognizers in the first view includes one or more proxy gesture recognizers and one or more non-proxy gesture recognizers. Each gesture recognizer indicates one of a plurality of predefined states. A first proxy gesture recognizer in the first view indicates a state that corresponds to a state of a respective non-proxy gesture recognizer that is not in the first view. The device delivers a respective sub-event to the respective non-proxy gesture recognizer that is not in the first view and at least a subset of the one or more non-proxy gesture recognizers in the first view. The device processes the respective sub-event in accordance with states of the first proxy gesture recognizer and at least the subset of the one or more non-proxy gesture recognizers in the first view. | 1. (canceled) 2. A method comprising:
at an electronic device with a touch-sensitive surface:
detecting an input that includes a swiping movement of a contact that originates from a location adjacent to an edge of the touch-sensitive surface; and
in response to detecting the input:
processing the input with a first gesture recognizer, associated with an operating system application, to determine whether the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
processing the input with a second gesture recognizer, associated with a first software application that is distinct from the operating system application, to determine whether the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
in accordance with a determination that the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing an operation defined by the operating system application and transitioning the second gesture recognizer into an event impossible state; and
in accordance with a determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, delaying performance of an operation defined by the first software application for the input until the first gesture recognizer indicates that the input does not match a gesture definition of the first gesture recognizer. 3. The method of claim 2, further comprising:
displaying one or more views of the first software application while detecting the input that includes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface. 4. The method of claim 2, wherein:
the first gesture recognizer and the second gesture recognizer are non-proxy gesture recognizers; the first software application is associated with a proxy gesture recognizer that has a state corresponding to a state of the first gesture recognizer; and processing the input with the second gesture recognizer includes determining the state of the proxy gesture recognizer and processing the input with the second gesture recognizer in accordance with the state of the proxy gesture recognizer. 5. The method of claim 4, wherein:
the second gesture recognizer is configured to wait for the proxy gesture recognizer to enter into a particular predefined state before performing the operation defined by the first software application for the input. 6. The method of claim 5, wherein:
the particular predefined state is one of: an event impossible state or an event canceled state. 7. The method of claim 2, further comprising:
in accordance with the determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing the operation defined by the first software application for the input after the first gesture recognizer indicates that the input does not match the gesture definition of the first gesture recognizer. 8. An electronic device, comprising:
a touch-sensitive surface; one or more processors; and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for:
detecting an input that includes a swiping movement of a contact that originates from a location adjacent to an edge of the touch-sensitive surface; and
in response to detecting the input:
processing the input with a first gesture recognizer, associated with an operating system application, to determine whether the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
processing the input with a second gesture recognizer, associated with a first software application that is distinct from the operating system application, to determine whether the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
in accordance with a determination that the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing an operation defined by the operating system application and transitioning the second gesture recognizer into an event impossible state; and
in accordance with a determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, delaying performance of an operation defined by the first software application for the input until the first gesture recognizer indicates that the input does not match a gesture definition of the first gesture recognizer. 9. The device of claim 8, wherein the one or more programs include instructions for:
displaying one or more views of the first software application while detecting the input that includes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface. 10. The device of claim 8, wherein:
the first gesture recognizer and the second gesture recognizer are non-proxy gesture recognizers; the first software application is associated with a proxy gesture recognizer that has a state corresponding to a state of the first gesture recognizer; and processing the input with the second gesture recognizer includes determining the state of the proxy gesture recognizer and processing the input with the second gesture recognizer in accordance with the state of the proxy gesture recognizer. 11. The device of claim 10, wherein:
the second gesture recognizer is configured to wait for the proxy gesture recognizer to enter into a particular predefined state before performing the operation defined by the first software application for the input. 12. The device of claim 11, wherein:
the particular predefined state is one of: an event impossible state or an event canceled state. 13. The device of claim 8, wherein the one or more programs include instructions for:
in accordance with the determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing the operation defined by the first software application for the input after the first gesture recognizer indicates that the input does not match the gesture definition of the first gesture recognizer. 14. The device of claim 8, wherein:
the operating system application is an application launcher or a settings application. 15. A non-transitory computer readable storage medium, storing one or more programs, which, when executed by one or more processors of an electronic device with a touch-sensitive surface, cause the electronic device to:
detect an input that includes a swiping movement of a contact that originates from a location adjacent to an edge of the touch-sensitive surface; and in response to detecting the input:
process the input with a first gesture recognizer, associated with an operating system application, to determine whether the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
process the input with a second gesture recognizer, associated with a first software application that is distinct from the operating system application, to determine whether the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
in accordance with a determination that the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, perform an operation defined by the operating system application and transition the second gesture recognizer into an event impossible state; and
in accordance with a determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, delay performance of an operation defined by the first software application for the input until the first gesture recognizer indicates that the input does not match a gesture definition of the first gesture recognizer. 16. The computer readable storage medium of claim 15, wherein the one or more programs, when executed by the one or more processors of the electronic device, cause the electronic device to:
display one or more views of the first software application while detecting the input that includes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface. 17. The computer readable storage medium of claim 15, wherein:
the first gesture recognizer and the second gesture recognizer are non-proxy gesture recognizers; the first software application is associated with a proxy gesture recognizer that has a state corresponding to a state of the first gesture recognizer; and processing the input with the second gesture recognizer includes determining the state of the proxy gesture recognizer and processing the input with the second gesture recognizer in accordance with the state of the proxy gesture recognizer. 18. The computer readable storage medium of claim 17, wherein:
the second gesture recognizer is configured to wait for the proxy gesture recognizer to enter into a particular predefined state before performing the operation defined by the first software application for the input. 19. The computer readable storage medium of claim 18, wherein:
the particular predefined state is one of: an event impossible state or an event canceled state. 20. The computer readable storage medium of claim 15, wherein the one or more programs, when executed by the one or more processors of the electronic device, cause the electronic device to:
in accordance with the determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, perform the operation defined by the first software application for the input after the first gesture recognizer indicates that the input does not match the gesture definition of the first gesture recognizer. 21. The computer readable storage medium of claim 15, wherein:
the operating system application is an application launcher or a settings application. | An electronic device displays one or more views. A first view includes a plurality of gesture recognizers. The plurality of gesture recognizers in the first view includes one or more proxy gesture recognizers and one or more non-proxy gesture recognizers. Each gesture recognizer indicates one of a plurality of predefined states. A first proxy gesture recognizer in the first view indicates a state that corresponds to a state of a respective non-proxy gesture recognizer that is not in the first view. The device delivers a respective sub-event to the respective non-proxy gesture recognizer that is not in the first view and at least a subset of the one or more non-proxy gesture recognizers in the first view. The device processes the respective sub-event in accordance with states of the first proxy gesture recognizer and at least the subset of the one or more non-proxy gesture recognizers in the first view.1. (canceled) 2. A method comprising:
at an electronic device with a touch-sensitive surface:
detecting an input that includes a swiping movement of a contact that originates from a location adjacent to an edge of the touch-sensitive surface; and
in response to detecting the input:
processing the input with a first gesture recognizer, associated with an operating system application, to determine whether the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
processing the input with a second gesture recognizer, associated with a first software application that is distinct from the operating system application, to determine whether the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
in accordance with a determination that the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing an operation defined by the operating system application and transitioning the second gesture recognizer into an event impossible state; and
in accordance with a determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, delaying performance of an operation defined by the first software application for the input until the first gesture recognizer indicates that the input does not match a gesture definition of the first gesture recognizer. 3. The method of claim 2, further comprising:
displaying one or more views of the first software application while detecting the input that includes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface. 4. The method of claim 2, wherein:
the first gesture recognizer and the second gesture recognizer are non-proxy gesture recognizers; the first software application is associated with a proxy gesture recognizer that has a state corresponding to a state of the first gesture recognizer; and processing the input with the second gesture recognizer includes determining the state of the proxy gesture recognizer and processing the input with the second gesture recognizer in accordance with the state of the proxy gesture recognizer. 5. The method of claim 4, wherein:
the second gesture recognizer is configured to wait for the proxy gesture recognizer to enter into a particular predefined state before performing the operation defined by the first software application for the input. 6. The method of claim 5, wherein:
the particular predefined state is one of: an event impossible state or an event canceled state. 7. The method of claim 2, further comprising:
in accordance with the determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing the operation defined by the first software application for the input after the first gesture recognizer indicates that the input does not match the gesture definition of the first gesture recognizer. 8. An electronic device, comprising:
a touch-sensitive surface; one or more processors; and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for:
detecting an input that includes a swiping movement of a contact that originates from a location adjacent to an edge of the touch-sensitive surface; and
in response to detecting the input:
processing the input with a first gesture recognizer, associated with an operating system application, to determine whether the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
processing the input with a second gesture recognizer, associated with a first software application that is distinct from the operating system application, to determine whether the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
in accordance with a determination that the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing an operation defined by the operating system application and transitioning the second gesture recognizer into an event impossible state; and
in accordance with a determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, delaying performance of an operation defined by the first software application for the input until the first gesture recognizer indicates that the input does not match a gesture definition of the first gesture recognizer. 9. The device of claim 8, wherein the one or more programs include instructions for:
displaying one or more views of the first software application while detecting the input that includes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface. 10. The device of claim 8, wherein:
the first gesture recognizer and the second gesture recognizer are non-proxy gesture recognizers; the first software application is associated with a proxy gesture recognizer that has a state corresponding to a state of the first gesture recognizer; and processing the input with the second gesture recognizer includes determining the state of the proxy gesture recognizer and processing the input with the second gesture recognizer in accordance with the state of the proxy gesture recognizer. 11. The device of claim 10, wherein:
the second gesture recognizer is configured to wait for the proxy gesture recognizer to enter into a particular predefined state before performing the operation defined by the first software application for the input. 12. The device of claim 11, wherein:
the particular predefined state is one of: an event impossible state or an event canceled state. 13. The device of claim 8, wherein the one or more programs include instructions for:
in accordance with the determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, performing the operation defined by the first software application for the input after the first gesture recognizer indicates that the input does not match the gesture definition of the first gesture recognizer. 14. The device of claim 8, wherein:
the operating system application is an application launcher or a settings application. 15. A non-transitory computer readable storage medium, storing one or more programs, which, when executed by one or more processors of an electronic device with a touch-sensitive surface, cause the electronic device to:
detect an input that includes a swiping movement of a contact that originates from a location adjacent to an edge of the touch-sensitive surface; and in response to detecting the input:
process the input with a first gesture recognizer, associated with an operating system application, to determine whether the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
process the input with a second gesture recognizer, associated with a first software application that is distinct from the operating system application, to determine whether the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface;
in accordance with a determination that the first gesture recognizer, associated with the operating system application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, perform an operation defined by the operating system application and transition the second gesture recognizer into an event impossible state; and
in accordance with a determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, delay performance of an operation defined by the first software application for the input until the first gesture recognizer indicates that the input does not match a gesture definition of the first gesture recognizer. 16. The computer readable storage medium of claim 15, wherein the one or more programs, when executed by the one or more processors of the electronic device, cause the electronic device to:
display one or more views of the first software application while detecting the input that includes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface. 17. The computer readable storage medium of claim 15, wherein:
the first gesture recognizer and the second gesture recognizer are non-proxy gesture recognizers; the first software application is associated with a proxy gesture recognizer that has a state corresponding to a state of the first gesture recognizer; and processing the input with the second gesture recognizer includes determining the state of the proxy gesture recognizer and processing the input with the second gesture recognizer in accordance with the state of the proxy gesture recognizer. 18. The computer readable storage medium of claim 17, wherein:
the second gesture recognizer is configured to wait for the proxy gesture recognizer to enter into a particular predefined state before performing the operation defined by the first software application for the input. 19. The computer readable storage medium of claim 18, wherein:
the particular predefined state is one of: an event impossible state or an event canceled state. 20. The computer readable storage medium of claim 15, wherein the one or more programs, when executed by the one or more processors of the electronic device, cause the electronic device to:
in accordance with the determination that the second gesture recognizer, associated with the first software application, recognizes the swiping movement of the contact that originates from the location adjacent to an edge of the touch-sensitive surface, perform the operation defined by the first software application for the input after the first gesture recognizer indicates that the input does not match the gesture definition of the first gesture recognizer. 21. The computer readable storage medium of claim 15, wherein:
the operating system application is an application launcher or a settings application. | 2,600 |
10,742 | 10,742 | 15,296,443 | 2,625 | An electronic device, method and computer program product are provided. The electronic device comprises a main body unit including a user input, memory to store program instructions, and a processor to execute the program instructions. A display unit is moveably coupled to the main body unit. The display unit comprises a flexible display layer having primary and secondary viewing regions formed as a monolithic structure. The secondary viewing region is foldable relative to the primary viewing region. The processor defines boundaries for the primary and secondary viewing regions. The processor displays content on the primary and secondary viewing regions within the corresponding boundaries. | 1. An electronic device, comprising:
a main body unit including a user input, a memory to store program instructions, and a processor to execute the program instructions; a display unit moveably coupled to the main body unit, the display unit comprising a flexible display layer having primary and secondary viewing regions formed as a monolithic structure, the secondary viewing region foldable relative to the primary viewing region; the processor to define boundaries for the primary and secondary viewing regions; and the processor to display content on the primary and secondary viewing regions within the corresponding boundaries. 2. The device of claim 1, wherein the flexible display layer is rotatably coupled to the main body unit proximate to a first boundary of the primary viewing region, the secondary viewing region foldable along a fold line proximate to a second boundary of the primary viewing region. 3. The device of claim 1, wherein the display unit is foldable about a primary lateral axis that extends laterally relative the user input and primary viewing region. 4. The device of claim 3, wherein the secondary viewing region is foldable, relative to the primary viewing region, about a secondary lateral axis that is oriented orthogonal to the primary lateral axis. 5. The device of claim 4, wherein the primary and secondary viewing regions are arranged in a stacked configuration with the primary and secondary lateral axes extending parallel to one another and located along bottom and top boundary of the primary viewing region. 6. The device of claim 3, wherein the primary and secondary viewing regions are arranged in a side-by-side configuration with the secondary viewing region foldable, relative to the primary viewing region, about a vertical axis that is oriented perpendicular to the primary lateral axis. 7. 
The device of claim 1, wherein the secondary viewing region is divided into first and second viewing regions that are formed as a monolithic structure with the primary viewing region, the first and second viewing regions are provided on opposite lateral sides of the primary viewing region. 8. The device of claim 1, wherein the flexible display layer comprises an intermediate region between the primary and secondary viewing regions, the intermediate region having a fold clearance area, the intermediate region enabling the secondary viewing region to be folded entirely inward until abutting against the primary viewing region and to be folded entirely outward until rear surfaces of the primary and secondary viewing regions are located proximate to one another. 9. The device of claim 1, further comprising a touch sensitive layer located over at least one of the first or secondary viewing regions of the flexible display layer, the touch sensitive layer to provide inputs to the processor. 10. A method, comprising:
providing an electronic device comprising a display unit moveably coupled to a main body unit, the display unit comprising a flexible display layer having primary and secondary viewing regions formed as a monolithic structure, the secondary viewing region foldable relative to the primary viewing region; under control of one or more processors configured with specific executable program instructions, displaying content on the primary and secondary viewing regions, respectively. 11. The method of claim 10, further comprising enabling the display unit to be foldable about a primary lateral axis that extends laterally relative the primary viewing region; and enabling the secondary viewing region to be foldable, relative to the primary viewing region, about a secondary lateral axis that is oriented orthogonal to the primary lateral axis. 12. The method of claim 10, further comprising enabling the secondary viewing region to be foldable entirely outward until rear surfaces of the primary and secondary viewing regions are located proximate to one another such that the primary and secondary viewing regions face in opposite directions. 13. The method of claim 10, further comprising:
arranging the primary and secondary viewing regions in a configuration in which the primary viewing region is folded to a closed position against the main base unit, corresponding to an intermediate folded position, while the secondary viewing region remains visible; and operating the secondary viewing region in a tablet mode when in the intermediate folded position. 14. The method of claim 10, arranging the primary and secondary viewing regions to be folded to closed positions against front and back surfaces of the main base unit. 15. The method of claim 14, wherein the primary and secondary viewing regions wrap about top and bottom edges of the main base unit when in the closed position. 16. The method of claim 14, wherein the primary and secondary viewing regions wrap about top and side edges of the main base unit when in the closed position. 17. The method of claim 10 wherein the secondary viewing region includes first and second viewing regions provided along opposite sides of the primary viewing region, the first and second viewing regions wrapping about opposite side edges of the main base unit when in the closed position. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to:
map sections of a display memory to primary and secondary viewing regions of a flexible display layer, the primary and secondary viewing regions formed as a monolithic structure, the secondary viewing region foldable relative to the primary viewing region; and write content to corresponding sections of the display memory in connection with displaying the content on the primary and secondary viewing regions. 19. The computer program product of claim 18, further comprising executable code to identify a mode of operation and map the sections of the display memory based on the mode of operation. 20. The computer program product of claim 18, further comprising executable code to activate a touch sensitive layer proximate to the secondary viewing region based on a mode of operation. | An electronic device, method and computer program product are provided. The electronic device comprises a main body unit including a user input, memory to store program instructions, and a processor to execute the program instructions. A display unit is moveably coupled to the main body unit. The display unit comprises a flexible display layer having primary and secondary viewing regions formed as a monolithic structure. The secondary viewing region is foldable relative to the primary viewing region. The processor defines boundaries for the primary and secondary viewing regions. The processor displays content on the primary and secondary viewing regions within the corresponding boundaries.1. An electronic device, comprising:
a main body unit including a user input, a memory to store program instructions, and a processor to execute the program instructions; a display unit moveably coupled to the main body unit, the display unit comprising a flexible display layer having primary and secondary viewing regions formed as a monolithic structure, the secondary viewing region foldable relative to the primary viewing region; the processor to define boundaries for the primary and secondary viewing regions; and the processor to display content on the primary and secondary viewing regions within the corresponding boundaries. 2. The device of claim 1, wherein the flexible display layer is rotatably coupled to the main body unit proximate to a first boundary of the primary viewing region, the secondary viewing region foldable along a fold line proximate to a second boundary of the primary viewing region. 3. The device of claim 1, wherein the display unit is foldable about a primary lateral axis that extends laterally relative the user input and primary viewing region. 4. The device of claim 3, wherein the secondary viewing region is foldable, relative to the primary viewing region, about a secondary lateral axis that is oriented orthogonal to the primary lateral axis. 5. The device of claim 4, wherein the primary and secondary viewing regions are arranged in a stacked configuration with the primary and secondary lateral axes extending parallel to one another and located along bottom and top boundary of the primary viewing region. 6. The device of claim 3, wherein the primary and secondary viewing regions are arranged in a side-by-side configuration with the secondary viewing region foldable, relative to the primary viewing region, about a vertical axis that is oriented perpendicular to the primary lateral axis. 7. 
The device of claim 1, wherein the secondary viewing region is divided into first and second viewing regions that are formed as a monolithic structure with the primary viewing region, the first and second viewing regions are provided on opposite lateral sides of the primary viewing region. 8. The device of claim 1, wherein the flexible display layer comprises an intermediate region between the primary and secondary viewing regions, the intermediate region having a fold clearance area, the intermediate region enabling the secondary viewing region to be folded entirely inward until abutting against the primary viewing region and to be folded entirely outward until rear surfaces of the primary and secondary viewing regions are located proximate to one another. 9. The device of claim 1, further comprising a touch sensitive layer located over at least one of the first or secondary viewing regions of the flexible display layer, the touch sensitive layer to provide inputs to the processor. 10. A method, comprising:
providing an electronic device comprising a display unit moveably coupled to a main body unit, the display unit comprising a flexible display layer having primary and secondary viewing regions formed as a monolithic structure, the secondary viewing region foldable relative to the primary viewing region; under control of one or more processors configured with specific executable program instructions, displaying content on the primary and secondary viewing regions, respectively. 11. The method of claim 10, further comprising enabling the display unit to be foldable about a primary lateral axis that extends laterally relative the primary viewing region; and enabling the secondary viewing region to be foldable, relative to the primary viewing region, about a secondary lateral axis that is oriented orthogonal to the primary lateral axis. 12. The method of claim 10, further comprising enabling the secondary viewing region to be foldable entirely outward until rear surfaces of the primary and secondary viewing regions are located proximate to one another such that the primary and secondary viewing regions face in opposite directions. 13. The method of claim 10, further comprising:
arranging the primary and secondary viewing regions in a configuration in which the primary viewing region is folded to a closed position against the main base unit, corresponding to an intermediate folded position, while the secondary viewing region remains visible; and operating the secondary viewing region in a tablet mode when in the intermediate folded position. 14. The method of claim 10, arranging the primary and secondary viewing regions to be folded to closed positions against front and back surfaces of the main base unit. 15. The method of claim 14, wherein the primary and secondary viewing regions wrap about top and bottom edges of the main base unit when in the closed position. 16. The method of claim 14, wherein the primary and secondary viewing regions wrap about top and side edges of the main base unit when in the closed position. 17. The method of claim 10 wherein the secondary viewing region includes first and second viewing regions provided along opposite sides of the primary viewing region, the first and second viewing regions wrapping about opposite side edges of the main base unit when in the closed position. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to:
map sections of a display memory to primary and secondary viewing regions of a flexible display layer, the primary and secondary viewing regions formed as a monolithic structure, the secondary viewing region foldable relative to the primary viewing region; and write content to corresponding sections of the display memory in connection with displaying the content on the primary and secondary viewing regions. 19. The computer program product of claim 18, further comprising executable code to identify a mode of operation and map the sections of the display memory based on the mode of operation. 20. The computer program product of claim 18, further comprising executable code to activate a touch sensitive layer proximate to the secondary viewing region based on a mode of operation. | 2,600 |
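The computer program product claims above (18 and 19) describe mapping sections of display memory to the primary and secondary viewing regions and remapping those sections based on a mode of operation. The following is a minimal illustrative sketch of that idea, not the patented implementation; all class, mode, and region names here are assumptions made for the example.

```python
# Illustrative sketch (not the patented implementation): sections of a display
# buffer are mapped to primary/secondary viewing regions, and the mapping is
# rebuilt when the mode of operation changes (claims 18-19). All names are
# assumptions for this example.

class DisplayMapper:
    # Region boundaries expressed as (start, length) slices of display memory.
    MODE_MAPS = {
        "laptop": {"primary": (0, 1024), "secondary": (1024, 512)},
        "tablet": {"primary": (0, 1536)},  # secondary region folded away
    }

    def __init__(self, mode="laptop"):
        self.memory = bytearray(2048)
        self.set_mode(mode)

    def set_mode(self, mode):
        # Claim 19: map the sections of the display memory based on the mode.
        self.regions = self.MODE_MAPS[mode]

    def write(self, region, content: bytes):
        # Claim 18: write content to the section mapped to the named region.
        start, length = self.regions[region]
        self.memory[start:start + len(content)] = content[:length]

mapper = DisplayMapper("laptop")
mapper.write("primary", b"hello")
```

Switching modes only swaps the region-to-memory mapping; the write path stays the same, which is the point of keeping the mapping in one table.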
10,743 | 10,743 | 15,915,177 | 2,642 | The invention provides a solution to accessing for a geographical location information-based service in a server of a machine type communication based communication system, where firstly a server broadcasts or multicasts a content request message, the content request message comprising information on requested content and information on a target geographical location; then the server receives a response message from at least one user equipment, the response message indicating that the at least one user equipment possesses the requested content and the at least one user equipment being located within the target geographical location; and finally the server acquires the requested content from the at least one user equipment. | 1. A method of accessing for a geographical location information-based service in a server of a machine type communication based communication system, the method comprising:
(A) broadcasting or multicasting a content request message, the content request message comprising information on requested content and information indicating a target geographical location; (B) receiving a response message from at least one user equipment, the response message indicating that the at least one user equipment possesses the requested content, and the at least one user equipment being located within the target geographical location; and (C) acquiring the requested content from the at least one user equipment; wherein acquiring the requested content comprises: acquiring the requested content by establishing a connection with the at least one user equipment; and wherein after (C), the method further comprises: releasing the connection with the at least one user equipment. 2. The method according to claim 1, wherein before acquiring requested content, the method further comprises:
selecting at least one user equipment from the at least one user equipment based upon a predetermined rule; and wherein acquiring the requested content further comprises: acquiring the requested content from the selected at least one user equipment. 3. The method according to claim 1, wherein after acquiring the requested content, the method further comprises:
receiving a notification message from the selected at least one user equipment, the notification message notifying the server that the selected at least one user equipment has moved out of the target geographical location; and repeating the steps A, B and C. 4. (canceled) 5. The method according to claim 1, wherein before broadcasting or multicasting the content request message, the method further comprises:
receiving the content request message from a network device, the content request message comprising the information on the target geographical location and the information on the requested content; and wherein after C, the method further comprises: sending the acquired content to the network device. 6. A method of accessing for a geographical location information-based service in a user equipment of a machine type communication based communication system, the method comprising:
(I) receiving a content request message from a server, the content request message comprising information on requested content and information indicating a target geographical location; (II) determining whether the user equipment is located within the target geographical location and determining whether the user equipment possesses the requested content; and (III) sending a response message to the server, if the user equipment is located within the target geographical location and possesses the requested content, the response message comprising information indicating that the user equipment possesses the content requested by the server; (IV) transmitting the requested content to the server once a connection between the user equipment and the server is established. 7. The method according to claim 6, wherein the response message further comprises information indicating GPS coordinates of the geographical location where the user equipment is located. 8. (canceled) 9. The method according to claim 6, wherein transmitting the content comprises:
transmitting the requested content to the server by establishing a connection with the server. 10. The method according to claim 6, further comprising:
sending a notification message to the server, if the user equipment has moved out of the target geographical location before the transmission of the requested content is completed, the notification message notifying the server that the user equipment has moved out of the target geographical location. 11. A first accessing apparatus for a geographical location information-based service in a server of a machine type communication based communication system, the apparatus comprising:
a broadcasting or multicasting device, for broadcasting or multicasting a content request message, the content request message comprising information on requested content and information indicating a target geographical location; a first receiving device, for receiving a response message from at least one user equipment, the response message indicating that the at least one user equipment possesses the requested content, and the at least one user equipment being located within the target geographical location; and an acquiring device, for establishing a connection with the at least one user equipment and acquiring the requested content from the at least one user equipment using said connection. 12. The first accessing apparatus according to claim 11, wherein the first receiving device is further configured for:
receiving a notification message from the selected at least one user equipment, the notification message notifying the server that the selected at least one user equipment has moved out of the target geographical location; and wherein the first accessing apparatus further comprises: a control device, for controlling the broadcasting or multicasting device, the receiving device and the acquiring device to repeat the foregoing processes after the receiving device receives the notification message. 13. A second accessing apparatus for a geographical location information-based service in a user equipment of a machine type communication based communication system, the apparatus comprising:
a second receiving device, for receiving a content request message from a server, the content request message comprising information on requested content and information indicating a target geographical location; a determining device, for determining whether the user equipment is located within the target geographical location and determining whether the user equipment possesses the requested content; a sending device, for sending a response message to the server, if the user equipment is located within the target geographical location and possesses the requested content, the response message comprising information indicating that the user equipment possesses the content requested by the server; and a transmitting device, for transmitting the requested content to the server once a connection between the user equipment and the server is established. 14. (canceled) 15. The second accessing apparatus according to claim 13, wherein the sending device is further configured for:
sending a notification message to the server, if the user equipment has moved out of the target geographical location before the transmission of the requested content is completed, the notification message notifying the server that the user equipment has moved out of the target geographical location. | The invention provides a solution to accessing for a geographical location information-based service in a server of a machine type communication based communication system, where firstly a server broadcasts or multicasts a content request message, the content request message comprising information on requested content and information on a target geographical location; then the server receives a response message from at least one user equipment, the response message indicating that the at least one user equipment possesses the requested content and the at least one user equipment being located within the target geographical location; and finally the server acquires the requested content from the at least one user equipment.1. A method of accessing for a geographical location information-based service in a server of a machine type communication based communication system, the method comprising:
(A) broadcasting or multicasting a content request message, the content request message comprising information on requested content and information indicating a target geographical location; (B) receiving a response message from at least one user equipment, the response message indicating that the at least one user equipment possesses the requested content, and the at least one user equipment being located within the target geographical location; and (C) acquiring the requested content from the at least one user equipment; wherein acquiring the requested content comprises: acquiring the requested content by establishing a connection with the at least one user equipment; and wherein after (C), the method further comprises: releasing the connection with the at least one user equipment. 2. The method according to claim 1, wherein before acquiring requested content, the method further comprises:
selecting at least one user equipment from the at least one user equipment based upon a predetermined rule; and wherein acquiring the requested content further comprises: acquiring the requested content from the selected at least one user equipment. 3. The method according to claim 1, wherein after acquiring the requested content, the method further comprises:
receiving a notification message from the selected at least one user equipment, the notification message notifying the server that the selected at least one user equipment has moved out of the target geographical location; and repeating the steps A, B and C. 4. (canceled) 5. The method according to claim 1, wherein before broadcasting or multicasting the content request message, the method further comprises:
receiving the content request message from a network device, the content request message comprising the information on the target geographical location and the information on the requested content; and wherein after C, the method further comprises: sending the acquired content to the network device. 6. A method of accessing for a geographical location information-based service in a user equipment of a machine type communication based communication system, the method comprising:
(I) receiving a content request message from a server, the content request message comprising information on requested content and information indicating a target geographical location; (II) determining whether the user equipment is located within the target geographical location and determining whether the user equipment possesses the requested content; and (III) sending a response message to the server, if the user equipment is located within the target geographical location and possesses the requested content, the response message comprising information indicating that the user equipment possesses the content requested by the server; (IV) transmitting the requested content to the server once a connection between the user equipment and the server is established. 7. The method according to claim 6, wherein the response message further comprises information indicating GPS coordinates of the geographical location where the user equipment is located. 8. (canceled) 9. The method according to claim 6, wherein transmitting the content comprises:
transmitting the requested content to the server by establishing a connection with the server. 10. The method according to claim 6, further comprising:
sending a notification message to the server, if the user equipment has moved out of the target geographical location before the transmission of the requested content is completed, the notification message notifying the server that the user equipment has moved out of the target geographical location. 11. A first accessing apparatus for a geographical location information-based service in a server of a machine type communication based communication system, the apparatus comprising:
a broadcasting or multicasting device, for broadcasting or multicasting a content request message, the content request message comprising information on requested content and information indicating a target geographical location; a first receiving device, for receiving a response message from at least one user equipment, the response message indicating that the at least one user equipment possesses the requested content, and the at least one user equipment being located within the target geographical location; and an acquiring device, for establishing a connection with the at least one user equipment and acquiring the requested content from the at least one user equipment using said connection. 12. The first accessing apparatus according to claim 11, wherein the first receiving device is further configured for:
receiving a notification message from the selected at least one user equipment, the notification message notifying the server that the selected at least one user equipment has moved out of the target geographical location; and wherein the first accessing apparatus further comprises: a control device, for controlling the broadcasting or multicasting device, the receiving device and the acquiring device to repeat the foregoing processes after the receiving device receives the notification message. 13. A second accessing apparatus for a geographical location information-based service in a user equipment of a machine type communication based communication system, the apparatus comprising:
a second receiving device, for receiving a content request message from a server, the content request message comprising information on requested content and information indicating a target geographical location; a determining device, for determining whether the user equipment is located within the target geographical location and determining whether the user equipment possesses the requested content; a sending device, for sending a response message to the server, if the user equipment is located within the target geographical location and possesses the requested content, the response message comprising information indicating that the user equipment possesses the content requested by the server; and a transmitting device, for transmitting the requested content to the server once a connection between the user equipment and the server is established. 14. (canceled) 15. The second accessing apparatus according to claim 13, wherein the sending device is further configured for:
sending a notification message to the server, if the user equipment has moved out of the target geographical location before the transmission of the requested content is completed, the notification message notifying the server that the user equipment has moved out of the target geographical location. | 2,600 |
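The claims in this record describe a broadcast/response/acquire flow: a server broadcasts a content request carrying a target geographical location, only user equipment inside that area that possesses the requested content responds, and the server then connects to a responder to acquire the content. Below is a minimal simulation of that flow; the function names, the bounding-box model of the target area, and the UE record layout are all assumptions, not the claimed apparatus.

```python
# Minimal simulation (assumed names, not the claimed apparatus) of the
# broadcast/response/acquire flow in claims 1-3: the server broadcasts a
# content request with a target area, equipment inside the area holding
# the content responds, and the server acquires the content from a responder.

def in_area(location, area):
    # Target geographical location modeled as a bounding box (assumption).
    (x, y), (x0, y0, x1, y1) = location, area
    return x0 <= x <= x1 and y0 <= y <= y1

def broadcast(ues, content_id, area):
    # Step (B): collect responses only from UEs located within the target
    # area that possess the requested content.
    return [ue for ue in ues
            if in_area(ue["location"], area) and content_id in ue["content"]]

def acquire(ue, content_id):
    # Step (C): establish a connection, fetch the content, then release.
    return ue["content"][content_id]

ues = [
    {"id": 1, "location": (2, 3), "content": {"map": b"tiles"}},
    {"id": 2, "location": (9, 9), "content": {"map": b"tiles"}},  # outside area
    {"id": 3, "location": (1, 1), "content": {}},                 # no content
]
responders = broadcast(ues, "map", (0, 0, 5, 5))
data = acquire(responders[0], "map")
```

The selection rule of claim 2 would slot in between `broadcast` and `acquire`, picking a subset of `responders` before any connection is established.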
10,744 | 10,744 | 15,941,960 | 2,672 | System and methods for printing a multi-media document from an image file are provided. The method includes a printing device receiving a data holding image file. A processor of the printing device parses and extracts document data for printing the multi-media document from the data holding image file. The processor processes the document data to receive instruction data for one or more printing operation components of the printing device. The method also includes sending the instruction data to one or more printer operation components of the printing device. Also, the method includes each of the one or more printer operation components performing a printer operation onto a multi-media document. | 1. A method for printing a multi-media document using a printing device, the method comprising:
the printing device receiving a data holding image file; and printing the multi-media document using document data extracted from the data holding image file. 2. The method of claim 1, wherein the data holding image file contains personalization data associated with an intended holder of the multi-media document. 3. The method of claim 1, further comprising:
a processor of the printing device parsing and extracting the document data for printing the multi-media document; the processor processing the document data to receive instruction data for one or more printing operation components of the printing device; and sending the instruction data to one or more printer operation components of the printing device, wherein printing the multi-media document using data extracted from the data holding image file includes at least one of the one or more printer operation components performing a printer operation onto a multi-media document. 4. The method of claim 3, wherein parsing and extracting the processed data includes parsing one or more of personalization data, instruction data, and security data. 5. The method of claim 4, wherein parsing the instruction data includes extracting different printer operation instruction data for a plurality of printing operations to be performed on the multi-media document. 6. The method of claim 4, wherein parsing the security data includes authenticating whether the printing device is authorized to access the document data using a key reference stored as part of the security data. 7. The method of claim 1, wherein the printing device is one of a central card issuance system and a desktop card printer. 8. The method of claim 1, wherein the printing device is a smart device, and
wherein printing the multi-media document using the document data extracted from the data holding image file includes rendering an image of the multi-media document for display on the smart device as the multi-media document would appear when physically printed. 9. A method for preparing a data holding image file for use in printing a multi-media document using a printing device, the method comprising:
receiving document data for performing one or more printing operations onto the multi-media document; receiving an image file; storing the document data into the image file to form a data holding image file; and sending the data holding image file to the printing device. 10. The method of claim 9, further comprising processing the document data for storage into the image file. 11. The method of claim 9, wherein the data holding image file is stored in a standard image format. 12. The method of claim 9, further comprising digitally signing the processed document data in the data holding image file. 13. The method of claim 9, wherein the printing device is one of a central card issuance system and a desktop card printer. 14. The method of claim 9, wherein the printing device is a smart device, and the method further comprising rendering an image of the multi-media document for display on the smart device as the multi-media document would appear when physically printed using the document data stored in the data holding image file. 15. A printing device comprising:
a network input/output that receives a data holding image file; and a plurality of printer operation components, wherein one or more of the printer operation components performs a printing operation onto the multi-media document using document data extracted from the data holding image file. 16. The printing device of claim 15, wherein the data holding image file contains personalization data associated with an intended holder of the multi-media document. 17. The printing device of claim 15, wherein the data holding image file contains document data for performing a plurality of printing operations onto the multi-media document and the device further comprises:
a processor that parses and extracts the document data from the data holding image file, processes the document data to receive instruction data for one or more of a plurality of printing operation components, and sends the instruction data to the one or more of the plurality of printing operation components. 18. The printing device of claim 17, wherein the processor processes the document data to receive the instruction data, personalization data, and security data. 19. The printing device of claim 18, wherein the processor extracts different printer operation instruction data for a plurality of printing operations to be performed on the multi-media document. 20. The printing device of claim 18, wherein the processor authenticates whether the printing device is authorized to access the document data using a key reference stored as part of the security data. | System and methods for printing a multi-media document from an image file are provided. The method includes a printing device receiving a data holding image file. A processor of the printing device parses and extracts document data for printing the multi-media document from the data holding image file. The processor processes the document data to receive instruction data for one or more printing operation components of the printing device. The method also includes sending the instruction data to one or more printer operation components of the printing device. Also, the method includes each of the one or more printer operation components performing a printer operation onto a multi-media document.1. A method for printing a multi-media document using a printing device, the method comprising:
the printing device receiving a data holding image file; and printing the multi-media document using document data extracted from the data holding image file. 2. The method of claim 1, wherein the data holding image file contains personalization data associated with an intended holder of the multi-media document. 3. The method of claim 1, further comprising:
a processor of the printing device parsing and extracting the document data for printing the multi-media document; the processor processing the document data to receive instruction data for one or more printing operation components of the printing device; and sending the instruction data to one or more printer operation components of the printing device, wherein printing the multi-media document using data extracted from the data holding image file includes at least one of the one or more printer operation components performing a printer operation onto a multi-media document. 4. The method of claim 3, wherein parsing and extracting the processed data includes parsing one or more of personalization data, instruction data, and security data. 5. The method of claim 4, wherein parsing the instruction data includes extracting different printer operation instruction data for a plurality of printing operations to be performed on the multi-media document. 6. The method of claim 4, wherein parsing the security data includes authenticating whether the printing device is authorized to access the document data using a key reference stored as part of the security data. 7. The method of claim 1, wherein the printing device is one of a central card issuance system and a desktop card printer. 8. The method of claim 1, wherein the printing device is a smart device, and
wherein printing the multi-media document using the document data extracted from the data holding image file includes rendering an image of the multi-media document for display on the smart device as the multi-media document would appear when physically printed. 9. A method for preparing a data holding image file for use in printing a multi-media document using a printing device, the method comprising:
receiving document data for performing one or more printing operations onto the multi-media document; receiving an image file; storing the document data into the image file to form a data holding image file; and sending the data holding image file to the printing device. 10. The method of claim 9, further comprising processing the document data for storage into the image file. 11. The method of claim 9, wherein the data holding image file is stored in a standard image format. 12. The method of claim 9, further comprising digitally signing the processed document data in the data holding image file. 13. The method of claim 9, wherein the printing device is one of a central card issuance system and a desktop card printer. 14. The method of claim 9, wherein the printing device is a smart device, and the method further comprising rendering an image of the multi-media document for display on the smart device as the multi-media document would appear when physically printed using the document data stored in the data holding image file. 15. A printing device comprising:
a network input/output that receives a data holding image file; and a plurality of printer operation components, wherein one or more of the printer operation components performs a printing operation onto the multi-media document using document data extracted from the data holding image file. 16. The printing device of claim 15, wherein the data holding image file contains personalization data associated with an intended holder of the multi-media document. 17. The printing device of claim 15, wherein the data holding image file contains document data for performing a plurality of printing operations onto the multi-media document and the device further comprises:
a processor that parses and extracts the document data from the data holding image file, processes the document data to receive instruction data for one or more of a plurality of printing operation components, and sends the instruction data to the one or more of the plurality of printing operation components. 18. The printing device of claim 17, wherein the processor processes the document data to receive the instruction data, personalization data, and security data. 19. The printing device of claim 18, wherein the processor extracts different printer operation instruction data for a plurality of printing operations to be performed on the multi-media document. 20. The printing device of claim 18, wherein the processor authenticates whether the printing device is authorized to access the document data using a key reference stored as part of the security data. | 2,600 |
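The printing claims above describe storing document data (personalization, printer instructions, security data) inside an image file so the file still opens as an ordinary image. A minimal sketch of that idea, assuming a simple trailing-payload format rather than the patent's actual encoding (the `DHIF` marker, the JSON layout, and the field names are all hypothetical): most image decoders ignore bytes after the end of the image stream, so a length-prefixed JSON payload can ride along and be parsed back out.

```python
import json
import struct

MAGIC = b"DHIF"  # hypothetical "data holding image file" marker, not from the patent


def embed_document_data(image_bytes: bytes, document_data: dict) -> bytes:
    """Append a length-prefixed JSON payload after the image bytes.

    Image viewers typically ignore trailing bytes, so the result still
    renders as a normal image while carrying the printing data.
    """
    payload = json.dumps(document_data).encode("utf-8")
    return image_bytes + MAGIC + struct.pack(">I", len(payload)) + payload


def extract_document_data(file_bytes: bytes) -> dict:
    """Parse and extract the document data stored by embed_document_data."""
    idx = file_bytes.rfind(MAGIC)
    if idx < 0:
        raise ValueError("no embedded document data found")
    (length,) = struct.unpack(">I", file_bytes[idx + 4 : idx + 8])
    payload = file_bytes[idx + 8 : idx + 8 + length]
    return json.loads(payload.decode("utf-8"))


if __name__ == "__main__":
    fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 32  # stand-in image bytes
    data = {
        "personalization": {"holder": "J. Doe"},          # illustrative fields
        "instructions": [{"component": "magstripe", "op": "encode"}],
        "security": {"key_ref": "k-001"},
    }
    blob = embed_document_data(fake_png, data)
    assert extract_document_data(blob) == data
```

A production format would more likely use a standard metadata container (e.g. a PNG ancillary chunk or EXIF segment) plus a digital signature over the payload, as claim 12 suggests.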
10,745 | 10,745 | 15,788,088 | 2,685 | Examples provide a method, a component, a tire-mounted TPMS module, a TPMS system and a machine readable storage or computer program for determining a duration of at least one contact patch event of a rolling tire. A method for determining a duration of at least one contact patch event of a rolling tire, comprises obtaining a sequence of acceleration measurement samples of the rolling tire from a tire-mounted acceleration sensor; and determining the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. | 1. A method for determining a duration of a contact patch event of a rolling tire, comprising:
obtaining a sequence of acceleration measurement samples of the rolling tire from a tire-mounted acceleration sensor; and determining the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. 2. The method of claim 1, wherein a slope of the sequence of acceleration measurement samples crossing the first threshold is of different sign than the slope of the acceleration measurement samples crossing the second threshold. 3. The method of claim 1, wherein at least one of the first and the second threshold corresponds to an average value of the acceleration measurement samples obtained during one or more revolutions of the rolling tire. 4. The method of claim 1, wherein the first and the second threshold are different from each other. 5. The method of claim 1, wherein the first time instance is smaller than the second time instance and wherein the first threshold has a smaller absolute value than the second threshold. 6. The method of claim 1, wherein determining the duration comprises
determining the first time instance when the acceleration measurement samples cross the first threshold, determining the second time instance when the acceleration measurement samples cross the second threshold after the first time instance, and determining the duration from a difference between the first and the second time instance. 7. The method of claim 1, wherein the duration is determined based on the number of samples between the first and the second time instance and a known sampling rate. 8. The method of claim 1, wherein the first threshold equals the second threshold, wherein determining the duration comprises:
determining a difference between the first or the second threshold and each sample of the sequence of acceleration measurement samples; accumulating the difference into an accumulated sum; setting the accumulated sum to zero whenever the accumulated sum is negative; stopping accumulating the accumulated sum when the sequence of acceleration measurement samples reaches the second time instance; and dividing the accumulated sum by the difference between the second threshold and an acceleration value corresponding to zero acceleration. 9. The method of claim 8, wherein the first time instance is updated to the time corresponding to the sample that caused the accumulated sum to be set to zero; and wherein the duration is determined from the difference between the first and second time instances after the accumulation has stopped. 10. The method of claim 1, wherein determining the duration comprises determining a weighted integral of the acceleration measurement samples between the first and the second time instance. 11. The method of claim 1,
wherein the first threshold equals the second threshold, and wherein determining the duration of the contact patch event comprises:
determining the first and the second time instance by extremizing an integral of the sequence of acceleration measurement samples; and
determining the duration of the contact patch event by dividing the value of the integral by a difference between the first or the second threshold and an acceleration value corresponding to zero acceleration. 12. The method of claim 1, wherein obtaining a sequence of acceleration measurement samples comprises obtaining, processing, and discarding a first set of measurement samples before a subsequent set is obtained. 13. The method of claim 8, wherein determining the duration comprises obtaining, processing, and discarding a first set of measurement samples before a subsequent set is obtained. 14. The method of claim 1, further comprising:
estimating a time window of the subsequent contact patch event based on the sequence of acceleration measurement samples, wherein the estimated time window comprises at least two time instances corresponding to the first and second time instance of a subsequent contact patch event; and increasing a sample rate of the sequence of acceleration measurement samples during the estimated time window with respect to a reduced sample rate outside the estimated time window. 15. The method of claim 14, wherein estimating the time window of the subsequent contact patch event of the rolling tire comprises:
determining a rotational rate of the tire; identifying a sample within the sequence of acceleration measurement samples of the rolling tire, indicative of a minimum radial acceleration; and estimating a time window of the subsequent contact patch event of the rolling tire based on the identified sample and the rotational rate of the tire. 16. The method of claim 15, further comprising validating the estimated time window, wherein the method is aborted if the time window exceeds a predetermined threshold. 17. The method of claim 1, further comprising validating the sequence of samples, wherein the method is aborted if the samples exceed a predetermined threshold. 18. The method of claim 1, further comprising validating the determination of the duration of the contact patch event, wherein the method is aborted if the duration exceeds a predetermined threshold. 19. The method of claim 18, wherein validating the determination of the duration of the contact patch event further comprises:
comparing at least two determinations of the duration of at least one contact patch event wherein each of the at least two determinations is obtained with a different method; and aborting the method if the at least two determinations differ by more than a predetermined threshold. 20. The method of claim 19, wherein the at least two determinations of the duration comprise:
a first determination obtained by:
determining the first time instance when the acceleration measurement samples cross the first threshold,
determining the second time instance when the acceleration measurement samples cross the second threshold after the first time instance, and
determining the duration from a difference between the first and the second time instance; and
a second determination obtained by:
determining a difference between the first or the second threshold and each sample of the sequence of acceleration measurement samples;
accumulating the difference into an accumulated sum;
setting the accumulated sum to zero whenever the accumulated sum is negative;
stopping the accumulation of the accumulated sum when the sequence of acceleration measurement samples reaches the second time instance;
wherein the determination of the duration of the contact patch event comprises:
dividing the accumulated sum by the difference between the first threshold and an acceleration value corresponding to zero acceleration. 21. A tire-mounted TPMS component, comprising:
a tire-mounted acceleration sensor, the acceleration sensor being configured to generate a sequence of acceleration measurement samples of the rolling tire; and an electronic control unit configured to determine the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. 22. The tire-mounted TPMS component of claim 21, further comprising:
wherein the electronic control unit is further configured to:
estimate a time window of the subsequent contact patch event based on the sequence of acceleration measurement samples, wherein the estimated time window comprises at least two time instances corresponding to the first and second time instance of a subsequent contact patch event; and
wherein the sensor is further configured to:
increase a sample rate of the sequence of acceleration measurement samples during the estimated time window with respect to a reduced sample rate outside the estimated time window. 23. A machine readable non-transitory storage including machine readable instructions to determine a duration of a contact patch event of a rolling tire, that when executed:
obtains a sequence of acceleration measurement samples of the rolling tire; and determines the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. | Examples provide a method, a component, a tire-mounted TPMS module, a TPMS system and a machine readable storage or computer program for determining a duration of at least one contact patch event of a rolling tire. A method for determining a duration of at least one contact patch event of a rolling tire, comprises obtaining a sequence of acceleration measurement samples of the rolling tire from a tire-mounted acceleration sensor; and determining the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold.1. A method for determining a duration of a contact patch event of a rolling tire, comprising:
obtaining a sequence of acceleration measurement samples of the rolling tire from a tire-mounted acceleration sensor; and determining the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. 2. The method of claim 1, wherein a slope of the sequence of acceleration measurement samples crossing the first threshold is of different sign than the slope of the acceleration measurement samples crossing the second threshold. 3. The method of claim 1, wherein at least one of the first and the second threshold corresponds to an average value of the acceleration measurement samples obtained during one or more revolutions of the rolling tire. 4. The method of claim 1, wherein the first and the second threshold are different from each other. 5. The method of claim 1, wherein the first time instance is smaller than the second time instance and wherein the first threshold has a smaller absolute value than the second threshold. 6. The method of claim 1, wherein determining the duration comprises
determining the first time instance when the acceleration measurement samples cross the first threshold, determining the second time instance when the acceleration measurement samples cross the second threshold after the first time instance, and determining the duration from a difference between the first and the second time instance. 7. The method of claim 1, wherein the duration is determined based on the number of samples between the first and the second time instance and a known sampling rate. 8. The method of claim 1, wherein the first threshold equals the second threshold, wherein determining the duration comprises:
determining a difference between the first or the second threshold and each sample of the sequence of acceleration measurement samples; accumulating the difference into an accumulated sum; setting the accumulated sum to zero whenever the accumulated sum is negative; stopping accumulating the accumulated sum when the sequence of acceleration measurement samples reaches the second time instance; and dividing the accumulated sum by the difference between the second threshold and an acceleration value corresponding to zero acceleration. 9. The method of claim 8, wherein the first time instance is updated to the time corresponding to the sample that caused the accumulated sum to be set to zero; and wherein the duration is determined from the difference between the first and second time instances after the accumulation has stopped. 10. The method of claim 1, wherein determining the duration comprises determining a weighted integral of the acceleration measurement samples between the first and the second time instance. 11. The method of claim 1,
wherein the first threshold equals the second threshold, and wherein determining the duration of the contact patch event comprises:
determining the first and the second time instance by extremizing an integral of the sequence of acceleration measurement samples; and
determining the duration of the contact patch event by dividing the value of the integral by a difference between the first or the second threshold and an acceleration value corresponding to zero acceleration. 12. The method of claim 1, wherein obtaining a sequence of acceleration measurement samples comprises obtaining, processing, and discarding a first set of measurement samples before a subsequent set is obtained. 13. The method of claim 8, wherein determining the duration comprises obtaining, processing, and discarding a first set of measurement samples before a subsequent set is obtained. 14. The method of claim 1, further comprising:
estimating a time window of the subsequent contact patch event based on the sequence of acceleration measurement samples, wherein the estimated time window comprises at least two time instances corresponding to the first and second time instance of a subsequent contact patch event; and increasing a sample rate of the sequence of acceleration measurement samples during the estimated time window with respect to a reduced sample rate outside the estimated time window. 15. The method of claim 14, wherein estimating the time window of the subsequent contact patch event of the rolling tire comprises:
determining a rotational rate of the tire; identifying a sample within the sequence of acceleration measurement samples of the rolling tire, indicative of a minimum radial acceleration; and estimating a time window of the subsequent contact patch event of the rolling tire based on the identified sample and the rotational rate of the tire. 16. The method of claim 15, further comprising validating the estimated time window, wherein the method is aborted if the time window exceeds a predetermined threshold. 17. The method of claim 1, further comprising validating the sequence of samples, wherein the method is aborted if the samples exceed a predetermined threshold. 18. The method of claim 1, further comprising validating the determination of the duration of the contact patch event, wherein the method is aborted if the duration exceeds a predetermined threshold. 19. The method of claim 18, wherein validating the determination of the duration of the contact patch event further comprises:
comparing at least two determinations of the duration of at least one contact patch event wherein each of the at least two determinations is obtained with a different method; and aborting the method if the at least two determinations differ by more than a predetermined threshold. 20. The method of claim 19, wherein the at least two determinations of the duration comprise:
a first determination obtained by:
determining the first time instance when the acceleration measurement samples cross the first threshold,
determining the second time instance when the acceleration measurement samples cross the second threshold after the first time instance, and
determining the duration from a difference between the first and the second time instance; and
a second determination obtained by:
determining a difference between the first or the second threshold and each sample of the sequence of acceleration measurement samples;
accumulating the difference into an accumulated sum;
setting the accumulated sum to zero whenever the accumulated sum is negative;
stopping the accumulation of the accumulated sum when the sequence of acceleration measurement samples reaches the second time instance;
wherein the determination of the duration of the contact patch event comprises:
dividing the accumulated sum by the difference between the first threshold and an acceleration value corresponding to zero acceleration. 21. A tire-mounted TPMS component, comprising:
a tire-mounted acceleration sensor, the acceleration sensor being configured to generate a sequence of acceleration measurement samples of the rolling tire; and an electronic control unit configured to determine the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. 22. The tire-mounted TPMS component of claim 21, further comprising:
wherein the electronic control unit is further configured to:
estimate a time window of the subsequent contact patch event based on the sequence of acceleration measurement samples, wherein the estimated time window comprises at least two time instances corresponding to the first and second time instance of a subsequent contact patch event; and
wherein the sensor is further configured to:
increase a sample rate of the sequence of acceleration measurement samples during the estimated time window with respect to a reduced sample rate outside the estimated time window. 23. A machine readable non-transitory storage including machine readable instructions to determine a duration of a contact patch event of a rolling tire, that when executed:
obtains a sequence of acceleration measurement samples of the rolling tire; and determines the duration of the contact patch event based on acceleration measurement samples of the sequence between a first time instance when the acceleration measurement samples cross a first threshold and a second time instance when the acceleration measurement samples cross a second threshold. | 2,600 |
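The method of claims 1, 6, and 7 above amounts to finding two threshold crossings in the radial-acceleration trace (the signal dips while the tread is flat against the road) and converting the sample count between them into a time using the known sampling rate. A minimal sketch under those assumptions; the sample values, thresholds, and sampling rate below are illustrative, not taken from the patent:

```python
def contact_patch_duration(samples, sample_rate_hz, first_threshold, second_threshold):
    """Estimate contact patch duration from tire acceleration samples.

    Finds the first downward crossing of first_threshold (entering the
    contact patch dip), then the next upward crossing of second_threshold
    (leaving the dip), and divides the sample count between the two
    crossings by the sampling rate. Returns None if no full event is seen.
    """
    t1 = t2 = None
    for i in range(1, len(samples)):
        if t1 is None:
            # downward crossing: previous sample above, current at or below
            if samples[i - 1] > first_threshold >= samples[i]:
                t1 = i
        else:
            # upward crossing: previous sample below, current at or above
            if samples[i - 1] < second_threshold <= samples[i]:
                t2 = i
                break
    if t1 is None or t2 is None:
        return None
    return (t2 - t1) / sample_rate_hz


if __name__ == "__main__":
    # Synthetic radial acceleration (arbitrary units): high while rolling,
    # dipping toward zero during the contact patch event.
    trace = [100, 100, 40, 5, 2, 3, 6, 45, 100, 100]
    print(contact_patch_duration(trace, sample_rate_hz=1000,
                                 first_threshold=50, second_threshold=50))
```

The accumulated-sum variant of claim 8 trades the explicit second crossing search for a running integral that is reset whenever it goes negative, which is cheaper on a tire-mounted microcontroller with little RAM.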
10,746 | 10,746 | 14,656,903 | 2,698 | A modular camera system includes a camera body having a housing, which has a rearward facing end, a forward facing end opposite the rearward facing end, and a plurality of mounting surfaces extending between the rearward facing end and the forward facing end. A photosensor and lens mount are mounted on the forward facing end of the camera body, the lens mount for removably attaching a lens assembly configured to direct photons from an area outside the camera body to the photosensor. A processor within the camera body converts a signal from the photosensor to a video signal. A display assembly removably attachable to the rearward facing end of the camera body includes a display screen configured to display the video signal received from the processor in human viewable form. Each of the plurality of mounting surfaces includes one or more fastener elements for removably and interchangeably attaching an accessory module. | 1. A modular camera system, comprising:
a camera body having a first housing, the first housing having a rearward facing end, a forward facing end opposite the rearward facing end, and a plurality of mounting surfaces extending between the rearward facing end and the forward facing end; a photosensor mounted on the forward facing end of the camera body; a lens mount attached to the forward facing end of the camera body for removably attaching a lens assembly configured to direct photons from an area outside the camera body to the photosensor; a processor within the camera body operably coupled to the photosensor and configured to convert a signal received from the photosensor to a video signal; a display assembly configured to be removably attached to the rearward facing end of the camera body, the display assembly including a display screen configured to display the video signal received from the processor in human viewable form; and each of the plurality of mounting surfaces includes one or more fastener elements for removably and interchangeably attaching an accessory module. 2. The modular camera system of claim 1, wherein at least one of said mounting surfaces includes an electrical connector configured to provide an electrical coupling between the camera body and an attached accessory module. 3. The modular camera system of claim 1, wherein the photosensor is sensitive to radiation in the short wave infrared (SWIR) region. 4. The modular camera system of claim 1, wherein the processor includes a digital signal processor for converting the signal received from the photosensor to a digital video signal. 5. The modular camera system of claim 1, further comprising:
said display assembly removably attached to the camera body; a video connector on the rearward facing end of the camera body mating with an aligned video connector on the display assembly. 6. The modular camera system of claim 1, further comprising a plurality of lens assemblies interchangeably attachable to the lens mount. 7. The modular camera system of claim 1, wherein each of the mounting surfaces includes a pair of opposing axially extending channels, each axially extending channel configured to receive a complementary slide rail on an attached accessory device. 8. The modular camera system of claim 1, further comprising:
a keypad module including a second housing configured to be removably attached to a selected one of said mounting surfaces and a plurality of keys, the keypad module operatively coupled to the camera body for controlling operation of the modular camera system. 9. The modular camera system of claim 8, further comprising:
a laser light source attached to the keypad module, the keypad module including one or more keys for controlling operation of the laser light source. 10. The modular camera system of claim 1, further comprising:
a laser module including a second housing configured to be removably attached to a selected one of said mounting surfaces and one or more laser light sources mounted within the second housing, wherein the laser module is configured to operate as one or more of a laser designator, laser pointer, and a laser illuminator. 11. The modular camera system of claim 10, wherein the photosensor is sensitive to a wavelength corresponding to a wavelength of at least one of said one or more laser light sources. 12. The modular camera system of claim 1, further comprising:
a mounting shoe module having a second housing configured to be removably attached to a selected one of the mounting surfaces; and a mounting shoe disposed on the second housing. 13. The modular camera system of claim 12, further comprising:
a first set of electrical contacts on the mounting shoe module contacting a second set of electrical contacts on at least one of said mounting surfaces when the mounting shoe module is attached to said at least one of said mounting surfaces; and a third set of electrical contacts on the mounting shoe electrically coupled to the first set of electrical contacts for electrically coupling the first set of electrical contacts to a power supply. 14. The modular camera system of claim 13, further comprising a power supply module, said power supply module having a third housing and one or more electric batteries received within the third housing, the power supply module including a receptacle for removably receiving the mounting shoe and a fourth set of electrical contacts within said receptacle and electrically coupled to said one or more electric batteries. 15. The modular camera system of claim 12, wherein the mounting shoe is a dovetail mounting shoe. 16. The modular camera system of claim 12, further comprising:
a helmet mount adapter having a receptacle for removably receiving the mounting shoe for coupling the camera body to a head worn mounting system. 17. The modular camera system of claim 16, wherein the helmet mount adapter includes a lateral adjustment mechanism for moving the camera body to a desired transverse position. 18. The modular camera system of claim 16, further comprising:
an electrical connector on the helmet mount adapter for electrically coupling the helmet mount adapter to a remotely located power supply. 19. The modular camera system of claim 12, further comprising:
a handgrip module having a first side including a receptacle for removably receiving the mounting shoe and a second side opposite the first side defining a contoured gripping surface for grasping by a user during hand held use of the modular camera system. 20. The modular camera system of claim 12, further comprising:
a rail clamp module having a first fastener and a second fastener; the first fastener including a receptacle for removably receiving the mounting shoe; and the second fastener including a rail clamp for removably attaching the rail clamp module to a weapon accessory rail interface. 21. The modular camera system of claim 20, wherein the first fastener is pivotally attached to the second fastener about a pivot axis which extends parallel to an optical axis of the camera body. 22. The modular camera system of claim 20, further comprising an optical magnifier for providing a magnified view of said display screen. 23. The modular camera system of claim 20, further comprising a laser range finder coupled to the camera body, the camera body configured to receive range information, a ballistics computation, or both, from the laser range finder. 24. The modular camera system of claim 23, further comprising:
executable instructions stored in an electronic memory associated with the processor configured to display one or more on-screen indicia on the display, the one or more on-screen indicia being positioned on the display screen to assist a user in aiming an associated weapon at a target. 25. The modular camera system of claim 1, further comprising:
a digital video recording module configured to be removably interposed between the rearward facing end of the camera body and the display assembly; a first video output connector on the rearward facing surface of the camera body configured to send the video signal to a first input video connector on the digital video recording module; and an electronic storage medium in the digital video recording module for storing a digital representation of the video signal. 26. The modular camera system of claim 1, further comprising:
a wireless communication module configured to perform one or both of (a) transmitting data representative of the video signal over a wireless communications network and (b) receiving data over the wireless communications network for display on the display screen. | A modular camera system includes a camera body having a housing, which has a rearward facing end, a forward facing end opposite the rearward facing end, and a plurality of mounting surfaces extending between the rearward facing end and the forward facing end. A photosensor and lens mount are mounted on the forward facing end of the camera body, the lens mount for removably attaching a lens assembly configured to direct photons from an area outside the camera body to the photosensor. A processor within the camera body converts a signal from the photosensor to a video signal. A display assembly removably attachable to the rearward facing end of the camera body includes a display screen configured to display the video signal received from the processor in human viewable form. Each of the plurality of mounting surfaces includes one or more fastener elements for removably and interchangeably attaching an accessory module.1. A modular camera system, comprising:
a camera body having a first housing, the first housing having a rearward facing end, a forward facing end opposite the rearward facing end, and a plurality of mounting surfaces extending between the rearward facing end and the forward facing end; a photosensor mounted on the forward facing end of the camera body; a lens mount attached to the forward facing end of the camera body for removably attaching a lens assembly configured to direct photons from an area outside the camera body to the photosensor; a processor within the camera body operably coupled to the photosensor and configured to convert a signal received from the photosensor to a video signal; a display assembly configured to be removably attached to the rearward facing end of the camera body, the display assembly including a display screen configured to display the video signal received from the processor in human viewable form; and each of the plurality of mounting surfaces includes one or more fastener elements for removably and interchangeably attaching an accessory module. 2. The modular camera system of claim 1, wherein at least one of said mounting surfaces includes an electrical connector configured to provide an electrical coupling between the camera body and an attached accessory module. 3. The modular camera system of claim 1, wherein the photosensor is sensitive to radiation in the short wave infrared (SWIR) region. 4. The modular camera system of claim 1, wherein the processor includes a digital signal processor for converting the signal received from the photosensor to a digital video signal. 5. The modular camera system of claim 1, further comprising:
said display assembly removably attached to the camera body; a video connector on the rearward facing end of the camera body mating with an aligned video connector on the display assembly. 6. The modular camera system of claim 1, further comprising a plurality of lens assemblies interchangeably attachable to the lens mount. 7. The modular camera system of claim 1, wherein each of the mounting surfaces includes a pair of opposing axially extending channels, each axially extending channel configured to receive a complementary slide rail on an attached accessory device. 8. The modular camera system of claim 1, further comprising:
a keypad module including a second housing configured to be removably attached to a selected one of said mounting surfaces and a plurality of keys, the keypad module operatively coupled to the camera body for controlling operation of the modular camera system. 9. The modular camera system of claim 8, further comprising:
a laser light source attached to the keypad module, the keypad module including one or more keys for controlling operation of the laser light source. 10. The modular camera system of claim 1, further comprising:
a laser module including a second housing configured to be removably attached to a selected one of said mounting surfaces and one or more laser light sources mounted within the second housing, wherein the laser module is configured to operate as one or more of a laser designator, laser pointer, and a laser illuminator. 11. The modular camera system of claim 10, wherein the photosensor is sensitive to a wavelength corresponding to a wavelength of at least one of said one or more laser light sources. 12. The modular camera system of claim 1, further comprising:
a mounting shoe module having a second housing configured to be removably attached to a selected one of the mounting surfaces; and a mounting shoe disposed on the second housing. 13. The modular camera system of claim 12, further comprising:
a first set of electrical contacts on the mounting shoe module contacting a second set of electrical contacts on at least one of said mounting surfaces when the mounting shoe module is attached to said at least one of said mounting surfaces; and a third set of electrical contacts on the mounting shoe electrically coupled to the first set of electrical contacts for electrically coupling the first set of electrical contacts to a power supply. 14. The modular camera system of claim 13, further comprising the power supply module, said power supply module having a third housing and one or more electric batteries received within the third housing, the power supply module including a receptacle for removably receiving the mounting shoe and a fourth set of electrical contacts within said receptacle and electrically coupled to said one or more electric batteries. 15. The modular camera system of claim 12, wherein the mounting shoe is a dovetail mounting shoe. 16. The modular camera system of claim 12, further comprising:
a helmet mount adapter having a receptacle for removably receiving the mounting shoe for coupling the camera body to a head worn mounting system. 17. The modular camera system of claim 16, wherein the helmet mount adapter includes a lateral adjustment mechanism for moving the camera body to a desired transverse position. 18. The modular camera system of claim 16, further comprising:
an electrical connector on the helmet mount adapter for electrically coupling the helmet mount adapter to a remotely located power supply. 19. The modular camera system of claim 12, further comprising:
a handgrip module having a first side including a receptacle for removably receiving the mounting shoe and a second side opposite the first side defining a contoured gripping surface for grasping by a user during hand held use of the modular camera system. 20. The modular camera system of claim 12, further comprising:
a rail clamp module having a first fastener and a second fastener; the first fastener including a receptacle for removably receiving the mounting shoe; and the second fastener including a rail clamp for removably attaching the rail clamp module to a weapon accessory rail interface. 21. The modular camera system of claim 20, wherein the first fastener is pivotally attached to the second fastener about a pivot axis which extends parallel to an optical axis of the camera body. 22. The modular camera system of claim 20, further comprising an optical magnifier for providing a magnified view of said display screen. 23. The modular camera system of claim 20, further comprising a laser range finder coupled to the camera body, the camera body configured to receive range information, a ballistics computation, or both, from the laser range finder. 24. The modular camera system of claim 23, further comprising:
executable instructions stored in an electronic memory associated with the processor configured to display one or more on-screen indicia on the display, the one or more on-screen indicia being positioned on the display screen to assist a user in aiming an associated weapon at a target. 25. The modular camera system of claim 1, further comprising:
a digital video recording module configured to be removably interposed between the rearward facing end of the camera body and the display assembly; a first video output connector on the rearward facing surface of the camera body configured to send the video signal to a first input video connector on the digital video recording module; and an electronic storage medium in the digital video recording module for storing a digital representation of the video signal. 26. The modular camera system of claim 1, further comprising:
a wireless communication module configured to perform one or both of (a) transmitting data representative of the video signal over a wireless communications network and (b) receiving data over the wireless communications network for display on the display screen. | 2,600 |
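Claims 23-24 of the modular camera system recite receiving range information or a ballistics computation from a laser range finder and positioning on-screen indicia to assist aiming at a target. The application discloses no formula; the sketch below is only a hedged illustration of the idea, using an assumed vacuum-drop ballistic model and a pinhole-camera pixel conversion (`G`, `focal_px`, and both function names are hypothetical, not from the application):

```python
# Hypothetical illustration of claims 23-24: place an aiming mark below
# screen center by an amount derived from range. The vacuum-drop model and
# pinhole projection are assumptions, not the application's method.

G = 9.81  # gravitational acceleration, m/s^2


def drop_at_range(range_m: float, muzzle_velocity_mps: float) -> float:
    """Vertical projectile drop (m) over the flight time to the target."""
    flight_time = range_m / muzzle_velocity_mps
    return 0.5 * G * flight_time ** 2


def indicia_pixel_offset(range_m: float, muzzle_velocity_mps: float,
                         focal_px: float = 1000.0) -> float:
    """Pixels below screen center at which to draw the aiming indicia."""
    drop = drop_at_range(range_m, muzzle_velocity_mps)
    # Small-angle pinhole projection: pixels ~= focal_px * (drop / range).
    return focal_px * drop / range_m
```

For a 300 m laser-ranged target and a 900 m/s projectile this places the mark roughly two pixels below center at a 1200 px focal length; a real ballistics computation would also account for drag, zero range, and sight height.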
10,747 | 10,747 | 15,607,791 | 2,674 | An electronic digital assistant (EDA) detects a user's acoustical environment and substantively varies a content of a generated auditory output to the user as a function of the detected acoustical environment. The EDA receives an indication of an acoustic environment in which an auditory output will be provided to a user. The EDA then generates an auditory output having a substantive content that is varied as a function of the indicated acoustic environment. The EDA then provides the auditory output to an electronic output transducer associated with the user for reproduction to the user in the acoustic environment. | 1. A method at an electronic digital assistant computing device for detecting a user's acoustical environment and substantively varying a content of a generated auditory output to the user as a function of the detected acoustical environment, the method comprising:
receiving, at an electronic digital assistant computing device, an indication of an acoustic environment in which an auditory output will be provided by the electronic digital assistant computing device to a user; generating, at the electronic digital assistant computing device, an auditory output having a substantive content that is varied as a function of the indication of the acoustic environment; and providing, by the electronic digital assistant computing device, the auditory output to an electronic output transducer associated with the user for reproduction to the user in the acoustic environment. 2. The method of claim 1, wherein the indication of the acoustic environment includes one or more of a noise level, a pitch, and a periodicity of noise measured at a noise level sensor associated with the user. 3. The method of claim 2, wherein the indication of the acoustic environment is a numerical value measured in decibels. 4. The method of claim 2, wherein when the noise level is below a first threshold level, the substantive content of the auditory output is shortened to decrease a time to playback the auditory output to the user. 5. The method of claim 4, wherein the substantive content is shortened by one of: using acronyms where possible instead of using underlying terms that the acronym represents, accessing a thesaurus and choosing terms having fewer syllables than other terms having more syllables and substantially a same meaning as one another, using 10-codes instead of underlying text descriptions of such 10-codes, using pronouns to refer to people, places, or things instead of proper names, using contractions instead of underlying terms with which the contractions are short for, and using abbreviations for terms instead of underlying terms with which the abbreviations are short for. 6. 
The method of claim 4, wherein the substantive content is shortened by all of: using acronyms where possible instead of using underlying terms that the acronym represents, accessing a thesaurus database and choosing terms having fewer syllables than other terms having more syllables and substantially a same meaning as one another, using 10-codes instead of underlying text descriptions of such 10-codes, using pronouns to refer to people, places, or things instead of proper names, using contractions instead of underlying terms with which the contractions are short for, and using abbreviations for terms instead of underlying terms with which the abbreviations are short for. 7. The method of claim 4, wherein the first threshold level is between 75 and 85 dB. 8. The method of claim 2, wherein when the noise level is below a first threshold level, the substantive content of the auditory output is modified by:
accessing a thesaurus database wherein synonyms having substantially a same meaning as one another are also assigned hardness ratings based on a measured change in air pressure when reproducing the synonym relative to other synonymous terms; and choosing shorter terms having fewer syllables or terms more closely matching an intended meaning, independent of, or having a higher priority than, the hardness ratings assigned to the terms. 9. The method of claim 2, wherein when the noise level is above a second threshold level, the substantive content of the auditory output is lengthened to increase a time to playback the auditory output to the user. 10. The method of claim 9, wherein the substantive content is lengthened by one of: not using acronyms where possible and instead using underlying terms that the acronym represents, accessing a thesaurus database and choosing terms having more syllables than other terms having fewer syllables and substantially a same meaning as one another, not using 10-codes and instead using text descriptions of such 10-codes, using proper names instead of pronouns to refer to people, places, or things, not using contractions and instead using underlying terms with which the contractions are short for, and not using abbreviations for terms and instead using underlying terms with which the abbreviations are short for. 11. 
The method of claim 9, wherein the substantive content is lengthened by all of: not using acronyms where possible and instead using underlying terms that the acronym represents, accessing a thesaurus and choosing terms having more syllables than other terms having fewer syllables and substantially a same meaning as one another, not using 10-codes and instead using text descriptions of such 10-codes, using proper names instead of pronouns to refer to people, places, or things, not using contractions and instead using underlying terms with which the contractions are short for, and not using abbreviations for terms and instead using underlying terms with which the abbreviations are short for. 12. The method of claim 9, wherein the second threshold level is between 95 and 105 dB. 13. The method of claim 2, wherein when the noise level is above a second threshold level, the substantive content of the auditory output is modified by:
accessing a thesaurus database wherein synonyms having substantially a same meaning as one another are also assigned hardness ratings based on a measured change in air pressure when reproducing the synonym relative to other synonymous terms; and choosing a synonymous term having a higher hardness rating assigned to the term relative to other synonymous terms independent of, or having a higher priority than, length or how closely the term matches an intended meaning. 14. The method of claim 1, wherein the electronic digital assistant computing device is a mobile computing device associated with the user and the mobile computing device further including the electronic output transducer. 15. The method of claim 1, wherein the electronic digital assistant computing device is a computing device remote from the user and that communicates with a mobile computing device associated with the user and including the electronic output transducer via a wireless radio access network. 16. The method of claim 1, wherein the electronic digital assistant computing device is a distributed computing device that includes computing components disposed at a mobile computing device associated with the user, the mobile computing device further including the electronic output transducer, and computing components disposed at a remote computing device that communicates with the mobile computing device via a wireless radio access network. 17. The method of claim 1, the method further comprising:
receiving, at the electronic digital assistant computing device, indications of a plurality of respective acoustic environments in which a plurality of users to which an auditory output will be provided by the electronic digital assistant computing device, the plurality of users forming a talkgroup of users; generating, at the electronic digital assistant computing device, an auditory output having a substantive content that is generated as a function of the indications of the acoustic environments; and providing, by the electronic digital assistant computing device, the auditory output to electronic output transducers associated with each user for reproduction to the user in the respective acoustic environments via a talkgroup session. 18. The method of claim 17, wherein the substantive content of the auditory output is generated based on an average of the indications of the plurality of respective acoustic environments across the talkgroup of users. 19. The method of claim 17, wherein the substantive content of the auditory output is generated based on a worst-case acoustic environment indication out of the plurality of respective acoustic environments across the talkgroup of users. 20. A computing device implementing an electronic digital assistant for detecting a user's acoustical environment and substantively varying a content of an auditory output to the user as a function of the detected acoustical environment, the electronic computing device comprising:
a memory storing non-transitory computer-readable instructions; a transceiver; and one or more processors configured to, in response to executing the non-transitory computer-readable instructions, perform a first set of functions comprising:
receive, via one of the transceiver and a sensor communicably coupled to the electronic digital assistant computing device, an indication of an acoustic environment in which an auditory output will be provided by the electronic digital assistant computing device to a user;
generating an auditory output having a substantive content that is varied as a function of the indication of the acoustic environment; and
providing, via one of an electronic output transducer communicably coupled to the electronic digital assistant computing device and the transceiver, the auditory output for reproduction to the user in the acoustic environment. | An electronic digital assistant (EDA) detects a user's acoustical environment and substantively varies a content of a generated auditory output to the user as a function of the detected acoustical environment. The EDA receives an indication of an acoustic environment in which an auditory output will be provided to a user. The EDA then generates an auditory output having a substantive content that is varied as a function of the indicated acoustic environment. The EDA then provides the auditory output to an electronic output transducer associated with the user for reproduction to the user in the acoustic environment.1. A method at an electronic digital assistant computing device for detecting a user's acoustical environment and substantively varying a content of a generated auditory output to the user as a function of the detected acoustical environment, the method comprising:
receiving, at an electronic digital assistant computing device, an indication of an acoustic environment in which an auditory output will be provided by the electronic digital assistant computing device to a user; generating, at the electronic digital assistant computing device, an auditory output having a substantive content that is varied as a function of the indication of the acoustic environment; and providing, by the electronic digital assistant computing device, the auditory output to an electronic output transducer associated with the user for reproduction to the user in the acoustic environment. 2. The method of claim 1, wherein the indication of the acoustic environment includes one or more of a noise level, a pitch, and a periodicity of noise measured at a noise level sensor associated with the user. 3. The method of claim 2, wherein the indication of the acoustic environment is a numerical value measured in decibels. 4. The method of claim 2, wherein when the noise level is below a first threshold level, the substantive content of the auditory output is shortened to decrease a time to playback the auditory output to the user. 5. The method of claim 4, wherein the substantive content is shortened by one of: using acronyms where possible instead of using underlying terms that the acronym represents, accessing a thesaurus and choosing terms having fewer syllables than other terms having more syllables and substantially a same meaning as one another, using 10-codes instead of underlying text descriptions of such 10-codes, using pronouns to refer to people, places, or things instead of proper names, using contractions instead of underlying terms with which the contractions are short for, and using abbreviations for terms instead of underlying terms with which the abbreviations are short for. 6. 
The method of claim 4, wherein the substantive content is shortened by all of: using acronyms where possible instead of using underlying terms that the acronym represents, accessing a thesaurus database and choosing terms having fewer syllables than other terms having more syllables and substantially a same meaning as one another, using 10-codes instead of underlying text descriptions of such 10-codes, using pronouns to refer to people, places, or things instead of proper names, using contractions instead of underlying terms with which the contractions are short for, and using abbreviations for terms instead of underlying terms with which the abbreviations are short for. 7. The method of claim 4, wherein the first threshold level is between 75 and 85 dB. 8. The method of claim 2, wherein when the noise level is below a first threshold level, the substantive content of the auditory output is modified by:
accessing a thesaurus database wherein synonyms having substantially a same meaning as one another are also assigned hardness ratings based on a measured change in air pressure when reproducing the synonym relative to other synonymous terms; and choosing shorter terms having fewer syllables or terms more closely matching an intended meaning, independent of, or having a higher priority than, the hardness ratings assigned to the terms. 9. The method of claim 2, wherein when the noise level is above a second threshold level, the substantive content of the auditory output is lengthened to increase a time to playback the auditory output to the user. 10. The method of claim 9, wherein the substantive content is lengthened by one of: not using acronyms where possible and instead using underlying terms that the acronym represents, accessing a thesaurus database and choosing terms having more syllables than other terms having fewer syllables and substantially a same meaning as one another, not using 10-codes and instead using text descriptions of such 10-codes, using proper names instead of pronouns to refer to people, places, or things, not using contractions and instead using underlying terms with which the contractions are short for, and not using abbreviations for terms and instead using underlying terms with which the abbreviations are short for. 11. 
The method of claim 9, wherein the substantive content is lengthened by all of: not using acronyms where possible and instead using underlying terms that the acronym represents, accessing a thesaurus and choosing terms having more syllables than other terms having fewer syllables and substantially a same meaning as one another, not using 10-codes and instead using text descriptions of such 10-codes, using proper names instead of pronouns to refer to people, places, or things, not using contractions and instead using underlying terms with which the contractions are short for, and not using abbreviations for terms and instead using underlying terms with which the abbreviations are short for. 12. The method of claim 9, wherein the second threshold level is between 95 and 105 dB. 13. The method of claim 2, wherein when the noise level is above a second threshold level, the substantive content of the auditory output is modified by:
accessing a thesaurus database wherein synonyms having substantially a same meaning as one another are also assigned hardness ratings based on a measured change in air pressure when reproducing the synonym relative to other synonymous terms; and choosing a synonymous term having a higher hardness rating assigned to the term relative to other synonymous terms independent of, or having a higher priority than, length or how closely the term matches an intended meaning. 14. The method of claim 1, wherein the electronic digital assistant computing device is a mobile computing device associated with the user and the mobile computing device further including the electronic output transducer. 15. The method of claim 1, wherein the electronic digital assistant computing device is a computing device remote from the user and that communicates with a mobile computing device associated with the user and including the electronic output transducer via a wireless radio access network. 16. The method of claim 1, wherein the electronic digital assistant computing device is a distributed computing device that includes computing components disposed at a mobile computing device associated with the user, the mobile computing device further including the electronic output transducer, and computing components disposed at a remote computing device that communicates with the mobile computing device via a wireless radio access network. 17. The method of claim 1, the method further comprising:
receiving, at the electronic digital assistant computing device, indications of a plurality of respective acoustic environments in which a plurality of users to which an auditory output will be provided by the electronic digital assistant computing device, the plurality of users forming a talkgroup of users; generating, at the electronic digital assistant computing device, an auditory output having a substantive content that is generated as a function of the indications of the acoustic environments; and providing, by the electronic digital assistant computing device, the auditory output to electronic output transducers associated with each user for reproduction to the user in the respective acoustic environments via a talkgroup session. 18. The method of claim 17, wherein the substantive content of the auditory output is generated based on an average of the indications of the plurality of respective acoustic environments across the talkgroup of users. 19. The method of claim 17, wherein the substantive content of the auditory output is generated based on a worst-case acoustic environment indication out of the plurality of respective acoustic environments across the talkgroup of users. 20. A computing device implementing an electronic digital assistant for detecting a user's acoustical environment and substantively varying a content of an auditory output to the user as a function of the detected acoustical environment, the electronic computing device comprising:
a memory storing non-transitory computer-readable instructions; a transceiver; and one or more processors configured to, in response to executing the non-transitory computer-readable instructions, perform a first set of functions comprising:
receive, via one of the transceiver and a sensor communicably coupled to the electronic digital assistant computing device, an indication of an acoustic environment in which an auditory output will be provided by the electronic digital assistant computing device to a user;
generating an auditory output having a substantive content that is varied as a function of the indication of the acoustic environment; and
providing, via one of an electronic output transducer communicably coupled to the electronic digital assistant computing device and the transceiver, the auditory output for reproduction to the user in the acoustic environment. | 2,600 |
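Claims 4-12 above describe shortening the spoken message below a low noise threshold (claim 7 places it between 75 and 85 dB) and lengthening it above a high one (claim 12: 95 to 105 dB), while claims 18-19 aggregate talkgroup noise readings by average or worst case. A minimal sketch of that policy follows; the substitution table is invented and the thresholds are merely example values chosen inside the claimed ranges (none of these names or values come from the application):

```python
# Hypothetical sketch of the claimed policy: pick short wording in quiet
# environments, fully spelled-out wording in loud ones. Thresholds sit
# inside the ranges recited in claims 7 and 12; the table is invented.

LOW_DB, HIGH_DB = 80.0, 100.0

# (short form, long form) pairs: acronym/contraction vs. expanded phrase.
SUBSTITUTIONS = [
    ("ASAP", "as soon as possible"),
    ("don't", "do not"),
]


def adapt_content(text: str, noise_db: float) -> str:
    """Shorten below LOW_DB, lengthen above HIGH_DB, else leave as-is."""
    if noise_db < LOW_DB:
        pick = 0          # quiet: minimize playback time
    elif noise_db > HIGH_DB:
        pick = 1          # loud: longer, more redundant wording
    else:
        return text
    for short, long_form in SUBSTITUTIONS:
        src, dst = (long_form, short) if pick == 0 else (short, long_form)
        text = text.replace(src, dst)
    return text


def talkgroup_noise(readings_db, policy="worst"):
    """Claims 18-19: aggregate member readings by worst case or average."""
    if policy == "worst":
        return max(readings_db)
    return sum(readings_db) / len(readings_db)
```

Here `adapt_content("respond as soon as possible", 70.0)` contracts to `"respond ASAP"`, while the same message above 100 dB keeps the expanded phrasing; the hardness-rated synonym selection of claims 8 and 13 would additionally need a thesaurus keyed by acoustic hardness, which is omitted here.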
10,748 | 10,748 | 15,077,573 | 2,645 | The embodiments disclosed herein can provide a user-friendly media sharing mechanism for sharing media information amongst multiple mobile devices over a communication network. In particular, the disclosed media sharing mechanism can enable a mobile device to display the media information above the lock-screen native to the operating system (e.g., provided as part of the operating system) on an on-demand basis. For example, when a user receives media information at a mobile device, the mobile device can display the media information above the lock-screen native to the operating system. | 1. A method for providing media information at a mobile device, the method comprising:
receiving, at a lock screen listener module of a mobile device, a notification from a receiver, indicating that media information has been received from a communications network; in response to receiving the notification, instructing, by the lock screen listener module, a lock screen display module to display the media information above a lock screen on a display of the mobile device; and providing interactive tools above the lock screen for editing the media information displayed above the lock screen. 2. The method of claim 1, further comprising monitoring a Multimedia Messaging Service (MMS) stack, by the lock screen listener module, to determine whether the media information has been received from the communications network. 3. The method of claim 2, further comprising intercepting a message on the MMS stack to retrieve the media information. 4. The method of claim 1, wherein instructing the lock screen display module to display the media information comprises providing the media information to the lock screen display module. 5. The method of claim 1, wherein the receiver comprises a software application running independently of the lock screen listener module and the lock screen display module. 6. (canceled) 7. The method of claim 6, further comprising sending the edited media information to another mobile device over the communications network. 8. The method of claim 6, further comprising sending the edited media information to a group of mobile devices over the communications network. 9. A mobile device configured to provide media information received from a communications network, the mobile device comprising:
a display configured to display a lock screen and media information; non-transitory memory storing computer readable instructions associated with a receiver, a lock screen listener module and a lock screen display module; one or more interfaces for receiving the media information from the communications network; and a processor in communication with the one or more interfaces and the non-transitory memory, wherein the computer readable instructions are configured to cause the processor to:
receive, at the lock screen listener module, a notification from the receiver, indicating that media information has been received from a communications network;
in response to receiving the notification, instruct, by the lock screen listener module, the lock screen display module to display the media information above the lock screen on the display; and
provide interactive tools above the lock screen for editing the media information displayed above the lock screen. 10. The mobile device of claim 9, wherein the computer readable instructions are configured to cause the processor to monitor a MMS stack, by the lock screen listener module, to determine whether media information has been received from the communication network. 11. The mobile device of claim 10, wherein the computer readable instructions are configured to cause the processor to intercept a message on the MMS stack to retrieve the media information. 12. The mobile device of claim 9, wherein the computer readable instructions are configured to cause the processor to provide the media information to the lock screen display module to instruct the lock screen display module to display the media information above the lock screen on the display. 13. The mobile device of claim 9, wherein the receiver comprises a software application running independently of the lock screen listener module and the lock screen display module. 14. The mobile device of claim 9, wherein the computer readable instructions are configured to cause the processor to instruct the lock screen display module to display the media information over the entire lock screen. 15. (canceled) 16. The mobile device of claim 15, wherein the computer readable instructions are configured to cause the processor to send the edited media information to another mobile device via the one or more interfaces over the communications network. 17. The mobile device of claim 16, wherein the computer readable instructions are configured to cause the processor to send the edited media information to a group of mobile devices via the one or more interfaces over the communications network. 18. A non-transitory computer readable medium having executable instructions for providing media information, wherein the executable instructions are operable to cause a processor of a mobile device to:
receive, at a lock screen listener module of the mobile device, a notification from a receiver, indicating that media information has been received from a communications network; in response to receiving the notification, instruct, by the lock screen listener module, a lock screen display module to display the media information above a lock screen on a display of the mobile device; and provide interactive tools for editing the media information displayed above the lock screen. 19. The computer readable medium of claim 18, further comprising executable instructions operable to cause the processor to monitor an MMS stack to determine whether the media information has been received from the communications network. 20. (canceled) | The embodiments disclosed herein can provide a user-friendly media sharing mechanism for sharing media information amongst multiple mobile devices over a communication network. In particular, the disclosed media sharing mechanism can enable a mobile device to display the media information above the lock-screen native to the operating system (e.g., provided as part of the operating system) on an on-demand basis. For example, when a user receives media information at a mobile device, the mobile device can display the media information above the lock-screen native to the operating system. 1. A method for providing media information at a mobile device, the method comprising:
receiving, at a lock screen listener module of a mobile device, a notification from a receiver, indicating that media information has been received from a communications network; in response to receiving the notification, instructing, by the lock screen listener module, a lock screen display module to display the media information above a lock screen on a display of the mobile device; and providing interactive tools above the lock screen for editing the media information displayed above the lock screen. 2. The method of claim 1, further comprising monitoring a Multimedia Messaging Service (MMS) stack, by the lock screen listener module, to determine whether the media information has been received from the communications network. 3. The method of claim 2, further comprising intercepting a message on the MMS stack to retrieve the media information. 4. The method of claim 1, wherein instructing the lock screen display module to display the media information comprises providing the media information to the lock screen display module. 5. The method of claim 1, wherein the receiver comprises a software application running independently of the lock screen listener module and the lock screen display module. 6. (canceled) 7. The method of claim 6, further comprising sending the edited media information to another mobile device over the communications network. 8. The method of claim 6, further comprising sending the edited media information to a group of mobile devices over the communications network. 9. A mobile device configured to provide media information received from a communications network, the mobile device comprising:
a display configured to display a lock screen and media information; non-transitory memory storing computer readable instructions associated with a receiver, a lock screen listener module and a lock screen display module; one or more interfaces for receiving the media information from the communications network; and a processor in communication with the one or more interfaces and the non-transitory memory, wherein the computer readable instructions are configured to cause the processor to:
receive, at the lock screen listener module, a notification from the receiver, indicating that media information has been received from a communications network;
in response to receiving the notification, instruct, by the lock screen listener module, the lock screen display module to display the media information above the lock screen on the display; and
provide interactive tools above the lock screen for editing the media information displayed above the lock screen. 10. The mobile device of claim 9, wherein the computer readable instructions are configured to cause the processor to monitor an MMS stack, by the lock screen listener module, to determine whether media information has been received from the communications network. 11. The mobile device of claim 10, wherein the computer readable instructions are configured to cause the processor to intercept a message on the MMS stack to retrieve the media information. 12. The mobile device of claim 9, wherein the computer readable instructions are configured to cause the processor to provide the media information to the lock screen display module to instruct the lock screen display module to display the media information above the lock screen on the display. 13. The mobile device of claim 9, wherein the receiver comprises a software application running independently of the lock screen listener module and the lock screen display module. 14. The mobile device of claim 9, wherein the computer readable instructions are configured to cause the processor to instruct the lock screen display module to display the media information over the entire lock screen. 15. (canceled) 16. The mobile device of claim 15, wherein the computer readable instructions are configured to cause the processor to send the edited media information to another mobile device via the one or more interfaces over the communications network. 17. The mobile device of claim 16, wherein the computer readable instructions are configured to cause the processor to send the edited media information to a group of mobile devices via the one or more interfaces over the communications network. 18. A non-transitory computer readable medium having executable instructions for providing media information, wherein the executable instructions are operable to cause a processor of a mobile device to:
receive, at a lock screen listener module of the mobile device, a notification from a receiver, indicating that media information has been received from a communications network; in response to receiving the notification, instruct, by the lock screen listener module, a lock screen display module to display the media information above a lock screen on a display of the mobile device; and provide interactive tools for editing the media information displayed above the lock screen. 19. The computer readable medium of claim 18, further comprising executable instructions operable to cause the processor to monitor an MMS stack to determine whether the media information has been received from the communications network. 20. (canceled) | 2,600 |
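The receiver / lock screen listener / display module split recited in the claims above is, in software terms, an observer pattern: the receiver notifies registered listeners, and the listener instructs the display module. A minimal sketch of that flow follows; all class and method names are illustrative, not taken from the patent.

```python
# Minimal sketch of the receiver -> lock screen listener -> display module
# notification flow described in the claims. All names are illustrative.

class LockScreenDisplayModule:
    def __init__(self):
        self.shown = []

    def display_above_lock_screen(self, media):
        # On a real device this would render media over the lock screen UI.
        self.shown.append(media)


class LockScreenListenerModule:
    """Waits for notifications from a receiver and instructs the display module."""

    def __init__(self, display_module):
        self.display_module = display_module

    def on_media_received(self, media):
        # "in response to receiving the notification, instruct ... the
        # lock screen display module to display the media information"
        self.display_module.display_above_lock_screen(media)


class Receiver:
    """Stands in for the component that monitors the MMS stack; it runs
    independently of the listener and display modules, per claim 13."""

    def __init__(self):
        self.listeners = []

    def register(self, listener):
        self.listeners.append(listener)

    def receive_from_network(self, media):
        # Notify every registered lock screen listener module.
        for listener in self.listeners:
            listener.on_media_received(media)


display = LockScreenDisplayModule()
listener = LockScreenListenerModule(display)
receiver = Receiver()
receiver.register(listener)
receiver.receive_from_network({"type": "image", "body": b"\x89PNG"})
```

The receiver holds no reference to the display module, which keeps the three components independently replaceable, matching the claim language that the receiver runs independently of the other two modules.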
10,749 | 10,749 | 14,799,935 | 2,643 | A method for operating a phased array antenna for a wireless communication system serving an area in which communications demands from a plurality of mobile communication devices change as a function of time, the method involving: for each time of a plurality of successive times, (1) obtaining information indicative of a total mobile communications demand density as a function of beam direction for that time; and (2) with the phased array antenna, electronically generating a communication beam directed in a direction for which total mobile communications demand density is high for that time relative to other beam directions. | 1. A method for operating a phased array antenna for a wireless communication system serving an area in which communications demands from a plurality of mobile communication devices change as a function of time, said method comprising:
for each time of a plurality of successive times, (1) obtaining information indicative of a total mobile communications demand density as a function of beam direction for that time; and (2) with the phased array antenna, electronically generating a communication beam directed in a direction for which total mobile communications demand density is high for that time relative to other beam directions. 2. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring the total mobile communications demand as a function of probe beam direction. 3. The method of claim 2, wherein the probe beam is a narrow beam. 4. The method of claim 2, wherein the range of directions over which the probe beam is scanned varies both in azimuth and elevation. 5. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referencing a database that provides information about expected geographical distribution of mobile communication devices as a function of time. 6. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises obtaining information about a geographical distribution of the plurality of mobile communication devices. 7. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring instantaneous spectrum efficiency as a function of probe beam direction. 8. 
The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referring to a stored source of information that indicates instantaneous spectrum efficiency as a function of beam direction. 9. The method of claim 1, wherein the communication beam is a narrow beam. 10. The method of claim 1, wherein the communication beam has a shape that is selected based on details concerning clustering of the total mobile communications demand density. 11. The method of claim 1, wherein the generated communication beam is a transmit beam. 12. The method of claim 1, wherein the generated communication beam is a receive beam. 13. The method of claim 1, further comprising:
with the phased array antenna, and for each time of the plurality of successive times, electronically generating a plurality of communication beams each directed toward a plurality of different directions for which total mobile communications demand density is high for that time relative to other beam directions, wherein the first-mentioned communication beam is among the plurality of communication beams. 14. The method of claim 1, further comprising:
with the phased array antenna, and for each time of the plurality of successive times, electronically generating a plurality of communication beams each directed toward a plurality of different directions for which total mobile communications demand density exhibits clustering, wherein the first-mentioned communication beam is among the plurality of communication beams. 15. The method of claim 14, wherein shapes of beams of the plurality of communication beams are selected to match the shapes of the clusters. 16. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring the total mobile communications demand as a function of probe beam direction. 17. The method of claim 16, wherein the probe beam is a narrow beam. 18. The method of claim 16, wherein the range of directions over which the probe beam is scanned varies both in azimuth and elevation. 19. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referencing a database that provides information about expected geographical distribution of mobile communication devices as a function of time. 20. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises obtaining information about a geographical distribution of the plurality of mobile communication devices. 21. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring instantaneous spectrum efficiency as a function of probe beam direction. 22. 
The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referring to a stored source of information that indicates instantaneous spectrum efficiency as a function of beam direction. | A method for operating a phased array antenna for a wireless communication system serving an area in which communications demands from a plurality of mobile communication devices change as a function of time, the method involving: for each time of a plurality of successive times, (1) obtaining information indicative of a total mobile communications demand density as a function of beam direction for that time; and (2) with the phased array antenna, electronically generating a communication beam directed in a direction for which total mobile communications demand density is high for that time relative to other beam directions. 1. A method for operating a phased array antenna for a wireless communication system serving an area in which communications demands from a plurality of mobile communication devices change as a function of time, said method comprising:
for each time of a plurality of successive times, (1) obtaining information indicative of a total mobile communications demand density as a function of beam direction for that time; and (2) with the phased array antenna, electronically generating a communication beam directed in a direction for which total mobile communications demand density is high for that time relative to other beam directions. 2. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring the total mobile communications demand as a function of probe beam direction. 3. The method of claim 2, wherein the probe beam is a narrow beam. 4. The method of claim 2, wherein the range of directions over which the probe beam is scanned varies both in azimuth and elevation. 5. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referencing a database that provides information about expected geographical distribution of mobile communication devices as a function of time. 6. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises obtaining information about a geographical distribution of the plurality of mobile communication devices. 7. The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring instantaneous spectrum efficiency as a function of probe beam direction. 8. 
The method of claim 1, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referring to a stored source of information that indicates instantaneous spectrum efficiency as a function of beam direction. 9. The method of claim 1, wherein the communication beam is a narrow beam. 10. The method of claim 1, wherein the communication beam has a shape that is selected based on details concerning clustering of the total mobile communications demand density. 11. The method of claim 1, wherein the generated communication beam is a transmit beam. 12. The method of claim 1, wherein the generated communication beam is a receive beam. 13. The method of claim 1, further comprising:
with the phased array antenna, and for each time of the plurality of successive times, electronically generating a plurality of communication beams each directed toward a plurality of different directions for which total mobile communications demand density is high for that time relative to other beam directions, wherein the first-mentioned communication beam is among the plurality of communication beams. 14. The method of claim 1, further comprising:
with the phased array antenna, and for each time of the plurality of successive times, electronically generating a plurality of communication beams each directed toward a plurality of different directions for which total mobile communications demand density exhibits clustering, wherein the first-mentioned communication beam is among the plurality of communication beams. 15. The method of claim 14, wherein shapes of beams of the plurality of communication beams are selected to match the shapes of the clusters. 16. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring the total mobile communications demand as a function of probe beam direction. 17. The method of claim 16, wherein the probe beam is a narrow beam. 18. The method of claim 16, wherein the range of directions over which the probe beam is scanned varies both in azimuth and elevation. 19. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referencing a database that provides information about expected geographical distribution of mobile communication devices as a function of time. 20. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises obtaining information about a geographical distribution of the plurality of mobile communication devices. 21. The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises scanning a probe beam over a range of directions and measuring instantaneous spectrum efficiency as a function of probe beam direction. 22. 
The method of claim 13, wherein obtaining information indicative of the total mobile communications demand density as a function of beam direction comprises referring to a stored source of information that indicates instantaneous spectrum efficiency as a function of beam direction. | 2,600 |
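The demand-driven beam selection in the claims above can be sketched numerically: measure demand over a scan of probe-beam directions, pick the direction where measured demand density peaks, then form the phase weights that steer the array there. The sketch below uses NumPy; the uniform linear array geometry, half-wavelength spacing, 5° scan grid, and synthetic demand curve are all invented for illustration and are not from the patent.

```python
import numpy as np

# Sketch of claim 1: choose the beam direction with the highest measured
# demand density, then steer a uniform linear array toward it.

def choose_beam_direction(directions_deg, demand_density):
    """Return the scanned direction whose measured demand is highest."""
    return directions_deg[int(np.argmax(demand_density))]

def steering_weights(n_elements, spacing_wavelengths, direction_deg):
    """Unit-norm phase weights pointing a uniform linear array at direction_deg."""
    theta = np.deg2rad(direction_deg)
    n = np.arange(n_elements)
    phase = -2j * np.pi * spacing_wavelengths * n * np.sin(theta)
    return np.exp(phase) / np.sqrt(n_elements)

directions = np.arange(-60, 61, 5)                    # probe-beam scan, degrees
demand = np.exp(-((directions - 20) / 15.0) ** 2)     # synthetic demand, peak at 20 deg
best = choose_beam_direction(directions, demand)      # -> 20
w = steering_weights(n_elements=8, spacing_wavelengths=0.5, direction_deg=best)
```

Repeating the scan-then-steer loop at each successive time gives the time-varying behavior the claims describe; generating several beams for several demand clusters (claims 13 and 14) is the same computation run once per cluster peak.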
10,750 | 10,750 | 15,989,637 | 2,651 | An audio surround processing system receives an audio source signal having at least two audio channels and generates a number of additional surround sound signals in which an amount of artificially generated ambient energy is controlled in real-time at least in part by an estimate of ambient energy that is contained in the audio source signal. The system may divide the audio source signal into two sets of components: a first set of components and a second set of components. The first set of components may be in a range of frequency that is less than a range of frequency of the second set of components. An ambience estimate control coefficient may be generated using the transformed first set of components. An overall gain may be determined using the ambience estimate control coefficient. The overall gain may be used in generation of the additional surround sound signals. | 1. An audio surround processing system comprising:
a memory; and an audio signal processor in communication with the memory and configured to:
divide a source audio signal having at least two audio channels into a first set of components in a first frequency range and a second set of components in a second frequency range,
transform the first set of components from a time domain to a frequency domain; and
estimate an ambient energy level using only the first set of components with the first set of components being in the frequency domain. 2. The audio surround processing system of claim 1, wherein the first frequency range of the first set of components is lower than the second frequency range of the second set of components. 3. The audio surround processing system of claim 1, wherein the audio signal processor is further configured to:
generate an ambience estimate control coefficient using the estimated ambient energy level; and determine a gain factor of a plurality of synthesized surround sound signals using the ambience estimate control coefficient. 4. The audio surround processing system of claim 3, where the source audio signal has a predetermined source sample rate, and the first set of components is sampled at a predetermined sample rate that is less than the source sample rate to estimate the ambient energy level and to generate the ambience estimate control coefficient and where the audio signal processor is further configured to transform the second set of components from a time domain to a frequency domain using the predetermined sample rate. 5. The audio surround processing system of claim 4, where the audio signal processor is further configured to transform the first set of components and second set of components from a time domain to a frequency domain by computation of a Short Time Fourier Transform (STFT) of the first set of components and the second set of components using the predetermined sample rate. 6. The audio surround processing system of claim 1, where the audio signal processor is further configured to extract a first center audio signal from the first set of components, extract a second center audio signal from the second set of components, and combine the first center audio signal and the second center audio signal to generate a center channel output signal. 7. The audio surround processing system of claim 1, where the audio signal processor is further configured to extract a center channel signal from the source audio signal, and a width matrix to receive the source audio signal and the center channel signal as inputs, generate at least two surround sound signals, and adjust a width of a listener perceived sound stage by adjustment and output of an adjusted source audio signal, the center channel signal and the at least two surround sound signals. 8. 
The audio surround processing system of claim 3, further comprising:
an overall gain controller to apply the gain factor to at least one synthesized surround sound signal, a magnitude of gain being controlled in accordance with the ambience estimate control coefficient, and a non-linear mapping controller configured to determine the overall gain using a nonlinear mapping function and the ambience estimate control coefficient. 9. The audio surround processing system of claim 3, where the audio signal processor is further configured to determine the ambience estimate control coefficient by time smoothing an output from an estimate of the ambient energy level in the first frequency range of the first set of components, which is lower than the second frequency range. 10. A non-transitory computer-readable medium comprising a plurality of instructions executable by a processor, the computer-readable medium comprising:
instructions to divide a source audio signal having at least two channels into a first set of components in a first frequency range and a second set of components in a second frequency range; instructions to transform the first set of components from a time domain to a frequency domain; and instructions to generate an ambience estimate control coefficient using an estimated ambient energy contained in only the first set of components, the first set of components being in the frequency domain. 11. The computer-readable medium of claim 10, further comprising instructions to determine a gain factor of a plurality of synthesized surround sound signals using the ambience estimate control coefficient. 12. The computer-readable medium of claim 11, further comprising instructions to transform the second set of components from a time domain to a frequency domain. 13. The computer-readable medium of claim 12, further comprising instructions to generate a first set of center audio data from the first set of transformed components, generate a second set of center audio data from the second set of transformed components, combine the first set of center audio data and the second set of center audio data, and transform the combined first and second sets of center audio data from a frequency domain to a time domain to generate a center output channel. 14. The computer-readable medium of claim 13, further comprising instructions to generate at least two additional surround channels using a matrix having the source audio signal and the generated center channel as inputs. 15. The computer-readable medium of claim 10, further comprising instructions to generate the ambience estimate control coefficient using a predefined parameter representing an automation level. 16. The computer-readable medium of claim 10, further comprising instructions to determine the overall gain using a nonlinear mapping function. 17. The computer-readable medium of claim 11, further comprising:
instructions to extract a center channel signal from the first set of components and the second set of components; instructions to generate a surround sound signal from the source audio signal and the extracted center channel signal; and instructions to combine the surround sound signal with at least one of the synthesized surround sound signals to generate a surround sound output signal. 18. A method for audio signal processing in an audio surround processing system, the method comprising:
dividing a source audio signal having at least two channels into a first set of components in a first frequency range and a second set of components in a second frequency range; transforming the first set of components from a time domain to a frequency domain; and generating an ambience estimate control coefficient using an estimated ambient energy contained in only the first set of components, the first set of components being in the frequency domain. 19. The method of claim 18, further comprising using a predefined parameter representing an automation level to generate the ambience estimate control coefficient. 20. The method of claim 18, further comprising determining the overall gain using a nonlinear mapping function. | An audio surround processing system receives an audio source signal having at least two audio channels and generates a number of additional surround sound signals in which an amount of artificially generated ambient energy is controlled in real-time at least in part by an estimate of ambient energy that is contained in the audio source signal. The system may divide the audio source signal into two sets of components: a first set of components and a second set of components. The first set of components may be in a range of frequency that is less than a range of frequency of the second set of components. An ambience estimate control coefficient may be generated using the transformed first set of components. An overall gain may be determined using the ambience estimate control coefficient. The overall gain may be used in generation of the additional surround sound signals. 1. An audio surround processing system comprising:
a memory; and an audio signal processor in communication with the memory and configured to:
divide a source audio signal having at least two audio channels into a first set of components in a first frequency range and a second set of components in a second frequency range,
transform the first set of components from a time domain to a frequency domain; and
estimate an ambient energy level using only the first set of components with the first set of components being in the frequency domain. 2. The audio surround processing system of claim 1, wherein the first frequency range of the first set of components is lower than the second frequency range of the second set of components. 3. The audio surround processing system of claim 1, wherein the audio signal processor is further configured to:
generate an ambience estimate control coefficient using the estimated ambient energy level; and determine a gain factor of a plurality of synthesized surround sound signals using the ambience estimate control coefficient. 4. The audio surround processing system of claim 3, where the source audio signal has a predetermined source sample rate, and the first set of components is sampled at a predetermined sample rate that is less than the source sample rate to estimate the ambient energy level and to generate the ambience estimate control coefficient and where the audio signal processor is further configured to transform the second set of components from a time domain to a frequency domain using the predetermined sample rate. 5. The audio surround processing system of claim 4, where the audio signal processor is further configured to transform the first set of components and second set of components from a time domain to a frequency domain by computation of a Short Time Fourier Transform (STFT) of the first set of components and the second set of components using the predetermined sample rate. 6. The audio surround processing system of claim 1, where the audio signal processor is further configured to extract a first center audio signal from the first set of components, extract a second center audio signal from the second set of components, and combine the first center audio signal and the second center audio signal to generate a center channel output signal. 7. The audio surround processing system of claim 1, where the audio signal processor is further configured to extract a center channel signal from the source audio signal, and a width matrix to receive the source audio signal and the center channel signal as inputs, generate at least two surround sound signals, and adjust a width of a listener perceived sound stage by adjustment and output of an adjusted source audio signal, the center channel signal and the at least two surround sound signals. 8. 
The audio surround processing system of claim 3, further comprising:
an overall gain controller to apply the gain factor to at least one synthesized surround sound signal, a magnitude of gain being controlled in accordance with the ambience estimate control coefficient, and a non-linear mapping controller configured to determine the overall gain using a nonlinear mapping function and the ambience estimate control coefficient. 9. The audio surround processing system of claim 3, where the audio signal processor is further configured to determine the ambience estimate control coefficient by time smoothing an output from an estimate of the ambient energy level in the first frequency range of the first set of components, which is lower than the second frequency range. 10. A non-transitory computer-readable medium comprising a plurality of instructions executable by a processor, the computer-readable medium comprising:
instructions to divide a source audio signal having at least two channels into a first set of components in a first frequency range and a second set of components in a second frequency range; instructions to transform the first set of components from a time domain to a frequency domain; and instructions to generate an ambience estimate control coefficient using an estimated ambient energy contained in only the first set of components, the first set of components being in the frequency domain. 11. The computer-readable medium of claim 10, further comprising instructions to determine a gain factor of a plurality of synthesized surround sound signals using the ambience estimate control coefficient. 12. The computer-readable medium of claim 11, further comprising instructions to transform the second set of components from a time domain to a frequency domain. 13. The computer-readable medium of claim 12, further comprising instructions to generate a first set of center audio data from the first set of transformed components, generate a second set of center audio data from the second set of transformed components, combine the first set of center audio data and the second set of center audio data, and transform the combined first and second sets of center audio data from a frequency domain to a time domain to generate a center output channel. 14. The computer-readable medium of claim 13, further comprising instructions to generate at least two additional surround channels using a matrix having the source audio signal and the generated center channel as inputs. 15. The computer-readable medium of claim 10, further comprising instructions to generate the ambience estimate control coefficient using a predefined parameter representing an automation level. 16. The computer-readable medium of claim 10, further comprising instructions to determine the overall gain using a nonlinear mapping function. 17. The computer-readable medium of claim 11, further comprising:
instructions to extract a center channel signal from the first set of components and the second set of components; instructions to generate a surround sound signal from the source audio signal and the extracted center channel signal; and instructions to combine the surround sound signal with at least one of the synthesized surround sound signals to generate a surround sound output signal. 18. A method for audio signal processing in an audio surround processing system, the method comprising:
dividing a source audio signal having at least two channels into a first set of components in a first frequency range and a second set of components in a second frequency range; transforming the first set of components from a time domain to a frequency domain; and generating an ambience estimate control coefficient using an estimated ambient energy contained in only the first set of components, the first set of components being in the frequency domain. 19. The method of claim 18, further comprising using a predefined parameter representing an automation level to generate the ambience estimate control coefficient. 20. The method of claim 18, further comprising determining the overall gain using a nonlinear mapping function. | 2,600 |
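The method of claims 18-20 above (band-split, frequency-domain transform, ambience estimate, nonlinear gain mapping) can be sketched as follows. This is a minimal illustration, not the patented implementation: the FFT band mask, the use of L-R difference energy as the ambient-energy estimate, the one-pole smoothing, and the tanh mapping are all assumptions chosen for clarity.

```python
import numpy as np

def ambience_gain(stereo, sample_rate, split_hz=4000.0, alpha=0.2, prev=None):
    """Sketch of the claimed ambience-estimate control coefficient.

    Hypothetical helper: splits a 2-channel block at `split_hz`,
    estimates ambient energy from only the low band (the "first set of
    components"), optionally time-smooths against the previous block's
    value, and maps the result through a nonlinear (tanh) function to a
    0..1 gain coefficient.
    """
    left, right = stereo[0], stereo[1]
    n = left.size
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    low = freqs < split_hz                     # first frequency range mask

    L = np.fft.rfft(left)[low]
    R = np.fft.rfft(right)[low]
    # Uncorrelated (ambient) energy approximated by L-R difference
    # energy in the low band, normalised by total low-band energy.
    diff_energy = np.sum(np.abs(L - R) ** 2)
    total_energy = np.sum(np.abs(L) ** 2 + np.abs(R) ** 2) + 1e-12
    raw = diff_energy / total_energy

    # One-pole time smoothing across blocks (claim 9's time smoothing).
    smoothed = raw if prev is None else alpha * raw + (1.0 - alpha) * prev

    # Nonlinear mapping to the overall gain coefficient (claims 16, 20).
    return float(np.tanh(2.0 * smoothed))
```

Feeding the same signal to both channels yields a coefficient of zero (no ambience), while independent noise in each channel drives the coefficient toward one.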
10,751 | 10,751 | 16,276,594 | 2,651 | A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device. | 1.-20. (canceled) 21. A method that provides binaural sound to a listener watching a movie in virtual reality (VR), the method comprising:
displaying, with a head mounted display (HMD) worn by the listener, a VR movie theater with a VR movie screen and a VR listener that represents the listener wearing the HMD to watch the movie that includes a character; tracking, with the HMD worn by the listener, head movements of the listener to determine a head orientation of the listener with respect to a location of the character on the VR movie screen; and convolving, with a processor in the HMD and based on the head orientation, a voice of the character to originate as the binaural sound at a sound localization point (SLP) that exists in empty space not occupied by a tangible object but at a location on the VR movie screen where the character appears on the VR movie screen such that from a point-of-view of the listener the voice of the character externally localizes where the character is on the VR movie screen as opposed to localizing to speakers located at a perimeter of the VR movie theater; and providing, with the HMD worn by the listener, the voice of the character to originate as the binaural sound at the SLP that exists in empty space not occupied by the tangible object but at the location on the VR movie screen where the character appears on the VR movie screen such that from the point-of-view of the listener the voice of the character externally localizes where the character is on the VR movie screen. 22. The method of claim 21 further comprising:
calculating a distance between the VR listener and the character on the VR movie screen; and
adjusting a loudness of the voice of the character to be commensurate with the distance between the VR listener and the character on the VR movie screen. 23. The method of claim 21 further comprising:
calculating azimuth angles between a line-of-sight of the listener and different locations where the character is on the VR movie screen as the character moves across the VR movie screen; and
convolving, based on the azimuth angles between the line-of-sight of the listener and the different locations where the character is on the VR movie screen, the voice of the character with head-related transfer functions (HRTFs). 24. The method of claim 21 further comprising:
determining a size and shape of the VR movie theater where the listener watches the movie; and
convolving the voice of the character with room impulse responses (RIRs) based on the size and the shape of the VR movie theater. 25. The method of claim 21 further comprising:
solving a problem of sound being played at a higher volume to compensate for listeners seated at distances from the VR movie screen by playing the voice of the character according to individual preferences of the listeners in the VR movie theater. 26. The method of claim 21 further comprising:
providing the movie to multiple listeners seated at different locations in the VR movie theater; and
convolving the voice of the character for each of the multiple listeners watching the movie in the VR movie theater based on the different locations where each of the multiple listeners are seated in the VR movie theater. 27. The method of claim 21, wherein the SLP where the listener hears the voice of the character follows movements of the character such that the listener continues to hear the voice of the character originate from an image of the character as the image of the character moves across the VR movie screen. 28. A non-transitory computer-readable storage medium that stores instructions that one or more electronic devices execute as a method that provides binaural sound to listeners watching a movie in virtual reality (VR), the method comprising:
displaying, with a head mounted display (HMD), a VR movie theater that includes a plurality of VR seats, a VR movie screen, and VR listeners seated in the plurality of VR seats watching the movie in the VR movie theater; displaying, with the HMD, a character in the movie at a location on the VR movie screen; processing a voice of the character so the voice of the character originates as the binaural sound to the listeners at a sound localization point (SLP) at the location on the VR movie screen where the character is located; and processing the voice of the character so the SLP of the voice of the character moves and follows the location of the character as the character moves to different locations on the VR movie screen. 29. The non-transitory computer-readable storage medium of claim 28 further comprising:
processing the voice of the character with different sets of head-related transfer functions (HRTFs) for each one of the VR listeners seated in the plurality of the VR seats. 30. The non-transitory computer-readable storage medium of claim 28 further comprising:
calculating a distance between each of the VR listeners and the character on the VR movie screen; and
adjusting a loudness of the voice of the character to vary inversely with a square of the distance between each of the VR listeners and the character on the VR movie screen. 31. The non-transitory computer-readable storage medium of claim 28 further comprising:
calculating an azimuth angle between each of the VR listeners and the location on the VR movie screen where the character is located; and
processing the voice of the character for each of the VR listeners based on the azimuth angle between each of the VR listeners and the location on the VR movie screen where the character is located. 32. The non-transitory computer-readable storage medium of claim 28 further comprising:
tracking head movements of the VR listeners with respect to the location of the character; and
changing, based on the head movements, transfer functions processing the voice of the character so the voice of the character continues to emanate from the SLP at the location on the VR movie screen where the character is located while head movements of the VR listeners change. 33. The non-transitory computer-readable storage medium of claim 28 further comprising:
distinguishing a voice of a narrator in the movie from the voice of the character by providing the voice of the narrator in stereo sound that internally localizes and the voice of the character as the binaural sound that externally localizes. 34. The non-transitory computer-readable storage medium of claim 28, wherein the voice of the character originates in empty space on a same plane as the VR movie screen. 35. The non-transitory computer-readable storage medium of claim 28 further comprising:
processing a sound in the movie so different listeners simultaneously hear the sound that externally localizes to different locations relative to the VR movie screen, wherein the different locations include on the VR movie screen and off the VR movie screen. 36. A method that provides binaural sound to a listener watching a movie in virtual reality (VR), the method comprising:
displaying, with a head mounted display (HMD), a VR movie theater that includes VR seats, a VR movie screen, and a VR listener watching the movie in the VR movie theater; tracking, with the HMD worn by the listener, head movements of the listener to determine a head orientation of the listener with respect to a location of a character on the VR movie screen; processing, with one or more processors and based on the head movements, a voice of the character as the binaural sound that originates at a sound localization point (SLP) that exists at a location on the VR movie screen where the character appears on the VR movie screen such that from a point-of-view of the listener the voice of the character externally localizes to the location where the character is on the VR movie screen; and processing, with the one or more processors and based on the head movements, the voice of the character to move and to follow movements of the character as the character moves on the VR movie screen such that the voice of the character continues to originate from the SLP that exists at the location on the VR movie screen where the character appears as the character moves. 37. The method of claim 36 further comprising:
determining a distance between the VR listener and the VR movie screen; and
adjusting a loudness of the voice of the character based on the distance between the VR listener and the VR movie screen. 38. The method of claim 36 further comprising:
processing the voice of the character with head-related transfer functions (HRTFs) to provide the voice of the character as the binaural sound; and
selecting the HRTFs based on which one of the VR seats the VR listener sits in. 39. The method of claim 36 further comprising:
processing the voice of the character with head-related transfer functions (HRTFs) in response to which one of the VR seats the VR listener sits in; and
changing the HRTFs processing the voice of the character in response to the VR listener moving to another one of the VR seats. 40. The method of claim 36 further comprising:
increasing a level of realism that the listener experiences while watching the movie by providing a sound to originate at a SLP in empty space behind the listener when the sound originates behind the character. | A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device.1.-20. (canceled) 21. A method that provides binaural sound to a listener watching a movie in virtual reality (VR), the method comprising:
displaying, with a head mounted display (HMD) worn by the listener, a VR movie theater with a VR movie screen and a VR listener that represents the listener wearing the HMD to watch the movie that includes a character; tracking, with the HMD worn by the listener, head movements of the listener to determine a head orientation of the listener with respect to a location of the character on the VR movie screen; and convolving, with a processor in the HMD and based on the head orientation, a voice of the character to originate as the binaural sound at a sound localization point (SLP) that exists in empty space not occupied by a tangible object but at a location on the VR movie screen where the character appears on the VR movie screen such that from a point-of-view of the listener the voice of the character externally localizes where the character is on the VR movie screen as opposed to localizing to speakers located at a perimeter of the VR movie theater; and providing, with the HMD worn by the listener, the voice of the character to originate as the binaural sound at the SLP that exists in empty space not occupied by the tangible object but at the location on the VR movie screen where the character appears on the VR movie screen such that from the point-of-view of the listener the voice of the character externally localizes where the character is on the VR movie screen. 22. The method of claim 21 further comprising:
calculating a distance between the VR listener and the character on the VR movie screen; and
adjusting a loudness of the voice of the character to be commensurate with the distance between the VR listener and the character on the VR movie screen. 23. The method of claim 21 further comprising:
calculating azimuth angles between a line-of-sight of the listener and different locations where the character is on the VR movie screen as the character moves across the VR movie screen; and
convolving, based on the azimuth angles between the line-of-sight of the listener and the different locations where the character is on the VR movie screen, the voice of the character with head-related transfer functions (HRTFs). 24. The method of claim 21 further comprising:
determining a size and shape of the VR movie theater where the listener watches the movie; and
convolving the voice of the character with room impulse responses (RIRs) based on the size and the shape of the VR movie theater. 25. The method of claim 21 further comprising:
solving a problem of sound being played at a higher volume to compensate for listeners seated at distances from the VR movie screen by playing the voice of the character according to individual preferences of the listeners in the VR movie theater. 26. The method of claim 21 further comprising:
providing the movie to multiple listeners seated at different locations in the VR movie theater; and
convolving the voice of the character for each of the multiple listeners watching the movie in the VR movie theater based on the different locations where each of the multiple listeners are seated in the VR movie theater. 27. The method of claim 21, wherein the SLP where the listener hears the voice of the character follows movements of the character such that the listener continues to hear the voice of the character originate from an image of the character as the image of the character moves across the VR movie screen. 28. A non-transitory computer-readable storage medium that stores instructions that one or more electronic devices execute as a method that provides binaural sound to listeners watching a movie in virtual reality (VR), the method comprising:
displaying, with a head mounted display (HMD), a VR movie theater that includes a plurality of VR seats, a VR movie screen, and VR listeners seated in the plurality of VR seats watching the movie in the VR movie theater; displaying, with the HMD, a character in the movie at a location on the VR movie screen; processing a voice of the character so the voice of the character originates as the binaural sound to the listeners at a sound localization point (SLP) at the location on the VR movie screen where the character is located; and processing the voice of the character so the SLP of the voice of the character moves and follows the location of the character as the character moves to different locations on the VR movie screen. 29. The non-transitory computer-readable storage medium of claim 28 further comprising:
processing the voice of the character with different sets of head-related transfer functions (HRTFs) for each one of the VR listeners seated in the plurality of the VR seats. 30. The non-transitory computer-readable storage medium of claim 28 further comprising:
calculating a distance between each of the VR listeners and the character on the VR movie screen; and
adjusting a loudness of the voice of the character to vary inversely with a square of the distance between each of the VR listeners and the character on the VR movie screen. 31. The non-transitory computer-readable storage medium of claim 28 further comprising:
calculating an azimuth angle between each of the VR listeners and the location on the VR movie screen where the character is located; and
processing the voice of the character for each of the VR listeners based on the azimuth angle between each of the VR listeners and the location on the VR movie screen where the character is located. 32. The non-transitory computer-readable storage medium of claim 28 further comprising:
tracking head movements of the VR listeners with respect to the location of the character; and
changing, based on the head movements, transfer functions processing the voice of the character so the voice of the character continues to emanate from the SLP at the location on the VR movie screen where the character is located while head movements of the VR listeners change. 33. The non-transitory computer-readable storage medium of claim 28 further comprising:
distinguishing a voice of a narrator in the movie from the voice of the character by providing the voice of the narrator in stereo sound that internally localizes and the voice of the character as the binaural sound that externally localizes. 34. The non-transitory computer-readable storage medium of claim 28, wherein the voice of the character originates in empty space on a same plane as the VR movie screen. 35. The non-transitory computer-readable storage medium of claim 28 further comprising:
processing a sound in the movie so different listeners simultaneously hear the sound that externally localizes to different locations relative to the VR movie screen, wherein the different locations include on the VR movie screen and off the VR movie screen. 36. A method that provides binaural sound to a listener watching a movie in virtual reality (VR), the method comprising:
displaying, with a head mounted display (HMD), a VR movie theater that includes VR seats, a VR movie screen, and a VR listener watching the movie in the VR movie theater; tracking, with the HMD worn by the listener, head movements of the listener to determine a head orientation of the listener with respect to a location of a character on the VR movie screen; processing, with one or more processors and based on the head movements, a voice of the character as the binaural sound that originates at a sound localization point (SLP) that exists at a location on the VR movie screen where the character appears on the VR movie screen such that from a point-of-view of the listener the voice of the character externally localizes to the location where the character is on the VR movie screen; and processing, with the one or more processors and based on the head movements, the voice of the character to move and to follow movements of the character as the character moves on the VR movie screen such that the voice of the character continues to originate from the SLP that exists at the location on the VR movie screen where the character appears as the character moves. 37. The method of claim 36 further comprising:
determining a distance between the VR listener and the VR movie screen; and
adjusting a loudness of the voice of the character based on the distance between the VR listener and the VR movie screen. 38. The method of claim 36 further comprising:
processing the voice of the character with head-related transfer functions (HRTFs) to provide the voice of the character as the binaural sound; and
selecting the HRTFs based on which one of the VR seats the VR listener sits in. 39. The method of claim 36 further comprising:
processing the voice of the character with head-related transfer functions (HRTFs) in response to which one of the VR seats the VR listener sits in; and
changing the HRTFs processing the voice of the character in response to the VR listener moving to another one of the VR seats. 40. The method of claim 36 further comprising:
increasing a level of realism that the listener experiences while watching the movie by providing a sound to originate at a SLP in empty space behind the listener when the sound originates behind the character. | 2,600 |
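Claims 23, 30, and 36-39 above combine three computations: the azimuth between the listener's line of sight and the character's on-screen position, a loudness that varies inversely with the square of distance, and per-seat binaural rendering. A geometric sketch under assumed conventions (positions as (x, z) metres on the theater's horizontal plane, yaw 0 facing down +z; the equal-power pan is a crude stand-in for true HRTF convolution, which these helper names do not implement):

```python
import math

def azimuth_deg(listener_pos, listener_yaw_deg, source_pos):
    # Bearing of the source relative to the listener's line of sight,
    # wrapped to [-180, 180). Positions are (x, z); yaw 0 looks down +z.
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    bearing = math.degrees(math.atan2(dx, dz))
    return (bearing - listener_yaw_deg + 180.0) % 360.0 - 180.0

def inverse_square_gain(listener_pos, source_pos, ref_dist=1.0):
    # Claim 30: loudness varies inversely with the square of the
    # listener-to-character distance (clamped at a reference distance).
    d = math.dist(listener_pos, source_pos)
    return (ref_dist / max(d, ref_dist)) ** 2

def pan_gains(azimuth):
    # Equal-power left/right gains -- a placeholder for convolving the
    # voice with the seat-specific HRTF pair of claims 38-39.
    theta = math.radians(max(-90.0, min(90.0, azimuth)))
    left = math.sqrt((1.0 - math.sin(theta)) / 2.0)
    right = math.sqrt((1.0 + math.sin(theta)) / 2.0)
    return left, right
```

A listener seated two metres in front of the screen center and facing it hears a character at screen center at 0 degrees azimuth with equal left/right gains; as the character moves to the listener's right, the azimuth and the right-channel gain both grow.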
10,752 | 10,752 | 15,720,795 | 2,684 | Systems and methods for securing an object in a vehicle are provided. One embodiment of a method includes receiving data related to a biometric identifier of a user, receiving data related to a physiological state of the user, and determining whether the data associated with the physiological state of the user corresponds to an undesired physiological state. In response to receiving a correct biometric identifier and determining that the physiological state of the user does not correspond to the undesired physiological state, some embodiments may be configured to grant access to a storage area of a biometric lockbox. Similarly, in response to at least one of the following: not receiving the correct biometric identifier or determining that the physiological state of the user corresponds to the undesired physiological state of the user, some embodiments may be configured to deny access to the storage area of the biometric lockbox. | 1. A method for securing an object in a vehicle comprising:
sampling a physiological state of a user at various times to classify a predetermined undesired physiological state; receiving data related to a biometric identifier of the user; receiving data related to a current physiological state of the user; determining whether the biometric identifier of the user corresponds with an authorized user; determining whether the data associated with the current physiological state of the authenticated user corresponds to the predetermined undesired physiological state; in response to determining that the received data related to the biometric identifier corresponds with the authorized user and determining that the current physiological state of the user does not correspond to the predetermined undesired physiological state, granting access to a storage area of a biometric lockbox; and in response to at least one of the following: determining that the received data related to the biometric identifier does not correspond with an authorized user or determining that the current physiological state of the user corresponds to the predetermined undesired physiological state of the user, denying access to the storage area of the biometric lockbox. 2. The method of claim 1, wherein the physiological state is determined from at least one of the following: a retina sensor, a heartrate sensor, a voice sensor, or a thermometer. 3. The method of claim 1, wherein the biometric identifier is received via at least one of the following: a fingerprint sensor, a retina sensor, a facial recognition sensor, a hand vein sensor, an iris sensor, a voice sensor, or an ear sensor. 4. The method of claim 1, wherein the biometric lockbox is located in at least one of the following: a center console of the vehicle, a glove box of the vehicle, or a dashboard area of the vehicle. 5. The method of claim 1, further comprising granting access to a compartment of the storage area upon receiving the correct biometric identifier. 6. 
The method of claim 1, further comprising determining a location of the vehicle and, in response to determining that the vehicle is located at an undesirable location, denying access to the storage area. 7. (canceled) 8. A system for securing an object in a vehicle comprising:
a biometric lockbox that includes a storage area for receiving an object, and a locking mechanism that includes a lock, a biometric sensor for detecting a biometric identifier to authenticate a user and a physiological sensor for detecting a physiological state of an authenticated user to determine whether the authenticated user is in a mental and physical condition to access the storage area; and a computing device coupled to the biometric lockbox that includes logic that, when executed, causes the system to perform at least the following:
sample the physiological state of the user at various times to classify a predetermined desired physiological state;
receive data related to the biometric identifier of the user;
receive data related to the physiological state of the user; and
in response to determining that a correct biometric identifier and the predetermined desired physiological state of the user were received, deactivate the lock to grant access to the storage area. 9. The system of claim 8, wherein the physiological sensor includes at least one of the following: a retina sensor, a heartrate sensor, or a thermometer. 10. The system of claim 8, wherein the biometric sensor includes at least one of the following: a fingerprint sensor, a retina sensor, a facial recognition sensor, a hand vein sensor, an iris sensor, a voice sensor, or an ear sensor. 11. The system of claim 8, wherein the biometric lockbox is located in at least one of the following: a center console of the vehicle, a glove box of the vehicle, or a dashboard area of the vehicle. 12. (canceled) 13. The system of claim 8, wherein the storage area includes a compartment to which access is granted upon receiving the correct biometric identifier. 14. The system of claim 8, wherein in response to a determination that at least one of the following: the physiological state of the user does not correspond to the predetermined desired physiological state or not receiving the correct biometric identifier, the logic causes the system to deny access to the storage area. 15. A biometric lockbox for securing an object in a vehicle comprising:
a storage area for receiving an object; a locking mechanism that includes a biometric sensor for receiving a biometric identifier of a user for authenticating the user and a physiological sensor for detecting a physiological state of an authenticated user to determine whether the authenticated user is in a physical and mental state to access the object; and a computing device that includes logic that, when executed, causes the biometric lockbox to perform at least the following: sample the physiological state of the user at various times to classify a predetermined undesired physiological state; receive data related to the biometric identifier of the user; receive data related to the physiological state of the user; determine whether the data associated with the physiological state of the user corresponds to the predetermined undesired physiological state; and in response to receiving a correct biometric identifier and determining that the physiological state of the user does not correspond to the predetermined undesired physiological state, grant access to the storage area. 16. The biometric lockbox of claim 15, wherein in response to a determination that at least one of the following: the physiological state of the user corresponds to the predetermined undesired physiological state or not receiving the correct biometric identifier, the logic causes the biometric lockbox to deny access to the storage area. 17. The biometric lockbox of claim 16, wherein the physiological sensor includes at least one of the following: a retina sensor, a heartrate sensor, or a thermometer. 18. The biometric lockbox of claim 15, wherein the biometric sensor includes at least one of the following: a fingerprint sensor, a retina sensor, a facial recognition sensor, a hand vein sensor, an iris sensor, a voice sensor, or an ear sensor. 19. (canceled) 20. 
The biometric lockbox of claim 15, wherein the biometric lockbox is located in one of the following: a center console of the vehicle and a glove box of the vehicle. 21. The biometric lockbox of claim 15, wherein the storage area includes a compartment to which access is granted upon receiving the correct biometric identifier. | Systems and methods for securing an object in a vehicle are provided. One embodiment of a method includes receiving data related to a biometric identifier of a user, receiving data related to a physiological state of the user, and determining whether the data associated with the physiological state of the user corresponds to an undesired physiological state. In response to receiving a correct biometric identifier and determining that the physiological state of the user does not correspond to the undesired physiological state, some embodiments may be configured to grant access to a storage area of a biometric lockbox. Similarly, in response to at least one of the following: not receiving the correct biometric identifier or determining that the physiological state of the user corresponds to the undesired physiological state of the user, some embodiments may be configured to deny access to the storage area of the biometric lockbox.1. A method for securing an object in a vehicle comprising:
sampling a physiological state of a user at various times to classify a predetermined undesired physiological state; receiving data related to a biometric identifier of the user; receiving data related to a current physiological state of the user; determining whether the biometric identifier of the user corresponds with an authorized user; determining whether the data associated with the current physiological state of the authenticated user corresponds to the predetermined undesired physiological state; in response to determining that the received data related to the biometric identifier corresponds with the authorized user and determining that the current physiological state of the user does not correspond to the predetermined undesired physiological state, granting access to a storage area of a biometric lockbox; and in response to at least one of the following: determining that the received data related to the biometric identifier does not correspond with an authorized user or determining that the current physiological state of the user corresponds to the predetermined undesired physiological state of the user, denying access to the storage area of the biometric lockbox. 2. The method of claim 1, wherein the physiological state is determined from at least one of the following: a retina sensor, a heartrate sensor, a voice sensor, or a thermometer. 3. The method of claim 1, wherein the biometric identifier is received via at least one of the following: a fingerprint sensor, a retina sensor, a facial recognition sensor, a hand vein sensor, an iris sensor, a voice sensor, or an ear sensor. 4. The method of claim 1, wherein the biometric lockbox is located in at least one of the following: a center console of the vehicle, a glove box of the vehicle, or a dashboard area of the vehicle. 5. The method of claim 1, further comprising granting access to a compartment of the storage area upon receiving the correct biometric identifier. 6. 
The method of claim 1, further comprising determining a location of the vehicle and, in response to determining that the vehicle is located at an undesirable location, denying access to the storage area. 7. (canceled) 8. A system for securing an object in a vehicle comprising:
a biometric lockbox that includes a storage area for receiving an object, and a locking mechanism that includes a lock, a biometric sensor for detecting a biometric identifier to authenticate a user and a physiological sensor for detecting a physiological state of an authenticated user to determine whether the authenticated user is in a mental and physical condition to access the storage area; and a computing device coupled to the biometric lockbox that includes logic that, when executed, causes the system to perform at least the following:
sample the physiological state of the user at various times to classify a predetermined desired physiological state;
receive data related to the biometric identifier of the user;
receive data related to the physiological state of the user; and
in response to determining that a correct biometric identifier and the predetermined desired physiological state of the user were received, deactivate the lock to grant access to the storage area. 9. The system of claim 8, wherein the physiological sensor includes at least one of the following: a retina sensor, a heartrate sensor, or a thermometer. 10. The system of claim 8, wherein the biometric sensor includes at least one of the following: a fingerprint sensor, a retina sensor, a facial recognition sensor, a hand vein sensor, an iris sensor, a voice sensor, or an ear sensor. 11. The system of claim 8, wherein the biometric lockbox is located in at least one of the following: a center console of the vehicle, a glove box of the vehicle, or a dashboard area of the vehicle. 12. (canceled) 13. The system of claim 8, wherein the storage area includes a compartment to which access is granted upon receiving the correct biometric identifier. 14. The system of claim 8, wherein in response to a determination that at least one of the following: the physiological state of the user does not correspond to the predetermined desired physiological state or not receiving the correct biometric identifier, the logic causes the system to deny access to the storage area. 15. A biometric lockbox for securing an object in a vehicle comprising:
a storage area for receiving an object; a locking mechanism that includes a biometric sensor for receiving a biometric identifier of a user for authenticating the user and a physiological sensor for detecting a physiological state of an authenticated user to determine whether the authenticated user is in a physical and mental state to access the object; and a computing device that includes logic that, when executed, causes the biometric lockbox to perform at least the following: sample the physiological state of the user at various times to classify a predetermined undesired physiological state; receive data related to the biometric identifier of the user; receive data related to the physiological state of the user; determine whether the data associated with the physiological state of the user corresponds to the predetermined undesired physiological state; and in response to receiving a correct biometric identifier and determining that the physiological state of the user does not correspond to the predetermined undesired physiological state, grant access to the storage area. 16. The biometric lockbox of claim 15, wherein in response to a determination that at least one of the following: the physiological state of the user corresponds to the predetermined undesired physiological state or not receiving the correct biometric identifier, the logic causes the biometric lockbox to deny access to the storage area. 17. The biometric lockbox of claim 16, wherein the physiological sensor includes at least one of the following: a retina sensor, a heartrate sensor, or a thermometer. 18. The biometric lockbox of claim 15, wherein the biometric sensor includes at least one of the following: a fingerprint sensor, a retina sensor, a facial recognition sensor, a hand vein sensor, an iris sensor, a voice sensor, or an ear sensor. 19. (canceled) 20. 
The biometric lockbox of claim 15, wherein the biometric lockbox is located in one of the following: a center console of the vehicle and a glove box of the vehicle. 21. The biometric lockbox of claim 15, wherein the storage area includes a compartment to which access is granted upon receiving the correct biometric identifier. | 2,600 |
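The access logic claimed above (grant only when the biometric identifier matches an authorized user AND the sampled physiological state is not the predetermined undesired state) can be illustrated with a minimal Python sketch. This is not part of the patent record; the threshold rule (mean plus k standard deviations over the user's baseline samples) and all names are hypothetical stand-ins for the claimed classification step.

```python
import statistics

def undesired_threshold(baseline_samples, k=2.0):
    """Hypothetical classifier: readings above mean + k*stdev of the user's
    sampled baseline are treated as the predetermined 'undesired' state."""
    mu = statistics.mean(baseline_samples)
    sd = statistics.pstdev(baseline_samples)
    return mu + k * sd

def grant_access(biometric_match, current_reading, threshold):
    # Claim-1 gating: unlock only if the user is authorized AND the current
    # physiological state does not correspond to the undesired state.
    return biometric_match and current_reading <= threshold

baseline = [60, 62, 58, 61, 59]            # sampled resting heart rates (bpm)
threshold = undesired_threshold(baseline)
print(grant_access(True, 61, threshold))   # authorized, calm -> True
print(grant_access(True, 120, threshold))  # authorized, elevated -> False
print(grant_access(False, 61, threshold))  # wrong biometric -> False
```

Either failing condition denies access, mirroring the "at least one of the following" denial branch of claim 1.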
10,753 | 10,753 | 15,821,270 | 2,633 | Techniques are disclosed related to calibrating and operating a multiple input multiple output (MIMO) radio system. Some embodiments comprise a method wherein a single calibration signal is used to calibrate a MIMO radio system by performing each of time synchronization, phase synchronization, and frequency response correction for multiple receivers. In different embodiments, the calibration may be achieved by deriving either a fractionally spaced frequency domain equalizer, or a time domain equalizer. | 1. A method for calibrating a plurality of receivers in a multiple input multiple output (MIMO) communication system, the method comprising:
operating the MIMO communication system in a calibration mode comprising:
by respective ones of the plurality of receivers:
receiving a respective calibration sequence, wherein each received calibration sequence is a different channel-modified version of a common calibration sequence from a source;
deriving a respective equalizer based on the received calibration sequence;
aligning a time and a phase associated with at least one of the plurality of receivers, and correcting a non-uniform frequency response of at least one of the plurality of receivers;
wherein the aligning of the time and the phase and the correcting the non-uniform frequency response is based on the derived equalizers; and
wherein the aligning of the time and the phase and the correcting the non-uniform frequency response at least partially corrects for differences in channels and time delays experienced by different ones of the plurality of receivers. 2. The method of claim 1, wherein deriving the equalizer based on the received calibration sequence comprises performing a channel estimation calculation using the received calibration sequence and a known calibration sequence. 3. The method of claim 2, wherein performing the channel estimation calculation comprises performing a cross-correlation calculation between the received calibration sequence and the known calibration sequence. 4. The method of claim 2, wherein the equalizers are derived at least in part based on an inhomogeneity in a spectral decomposition of the received calibration sequence determined from the channel estimation calculation, and wherein correcting the non-uniform frequency response based on the derived equalizers comprises correcting for the inhomogeneity in the spectral decomposition. 5. The method of claim 1, the method further comprising:
subsequent to said calibration mode:
using a switch at each of the plurality of receivers to switch to an operation mode;
wherein, while in the operation mode, respective receivers of the plurality of receivers are configured to receive signals from respective antennas using the alignment and correction. 6. The method of claim 5, wherein the method is configured to be performed in real time by:
automatically repeating the calibration mode at preset intervals; alternating between calibration mode and operation mode according to the preset intervals; wherein a radio protocol is designed to have pre-scheduled gaps in data transmission such that the pre-scheduled gaps coincide with the repeating of the calibration mode. 7. The method of claim 6, wherein operating in operation mode comprises using calibration results from a most recent calibration. 8. The method of claim 1, the method further comprising:
aligning a time and a phase associated with at least a second one of the plurality of receivers, and correcting a non-uniform frequency response of at least the second one of the plurality of receivers, based on the derived equalizers, wherein the non-uniform frequency response of the second one of the plurality of receivers is different from the non-uniform frequency response of the first one of the plurality of receivers. 9. The method of claim 1, wherein the common calibration sequence is a Constant Amplitude Zero Autocorrelation (CAZAC) sequence. 10. A multiple-input multiple-output (MIMO) radio device comprising a plurality of receivers coupled to one or more processing elements, wherein the plurality of receivers and the one or more processing elements are configured to:
operate the MIMO radio device in a calibration mode, wherein while in calibration mode the MIMO radio device is configured to:
by each one of the plurality of receivers:
receive a calibration sequence, wherein each received calibration sequence is a different channel-modified version of a common calibration sequence from a source;
derive an equalizer based on the received calibration sequence;
align a time and a phase associated with at least one of the plurality of receivers, and correct a non-uniform frequency response of at least one of the plurality of receivers;
wherein the aligning of the time and the phase and the correcting the non-uniform frequency response is based on the derived equalizers; and
wherein the aligning of the time and the phase and the correcting the non-uniform frequency response at least partially corrects for differences in channels and time delays experienced by different ones of the plurality of receivers. 11. The MIMO radio device of claim 10, wherein deriving the equalizer based on the received calibration sequence comprises performing a channel estimation calculation using the received calibration sequence and a known calibration sequence. 12. The MIMO radio device of claim 11, wherein performing the channel estimation calculation comprises performing a cross-correlation calculation between the received calibration sequence and the known calibration sequence. 13. The MIMO radio device of claim 11, wherein the equalizers are derived at least in part based on an inhomogeneity in a spectral decomposition of the received calibration sequence determined from the channel estimation calculation, and wherein correcting the non-uniform frequency response based on the derived equalizers comprises correcting for the inhomogeneity in the spectral decomposition. 14. The MIMO radio device of claim 10, wherein the derived equalizers are fractionally spaced frequency domain equalizers. 15. The MIMO radio device of claim 10, wherein the derived equalizers are time domain equalizers. 16. A non-transitory computer-readable memory medium comprising program instructions executable by a processor to calibrate a plurality of receivers in a multiple-input multiple-output (MIMO) communication system, wherein the program instructions are executable to:
derive an equalizer for each of the plurality of receivers, wherein the equalizers for each of the plurality of receivers are based on a channel-modified version of a common calibration sequence received by each of the plurality of receivers; align a time and a phase associated with at least one of the plurality of receivers, and correct a non-uniform frequency response of at least one of the plurality of receivers, wherein the aligning of the time and the phase and the correcting the non-uniform frequency response is based on the derived equalizers. 17. The non-transitory computer-readable memory medium of claim 16, wherein the program instructions are further executable to:
use a shared start trigger to initiate reception of the calibration sequence by the plurality of receivers. 18. The non-transitory computer-readable memory medium of claim 17, wherein the program instructions are executable to configure the MIMO communication system to perform calibration in real-time without missing data packets received from antennas during the calibration. 19. The non-transitory computer-readable memory medium of claim 16, wherein the plurality of receivers share a single local oscillator, wherein sharing the single local oscillator enables the MIMO communication system to remain calibrated for an extended duration of time. 20. The non-transitory computer-readable memory medium of claim 16, wherein the common calibration sequence is a Constant Amplitude Zero Autocorrelation (CAZAC) sequence. | 2,600 |
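The channel-estimation step recited in claims 2-3 (cross-correlating the received calibration sequence against a known copy) can be sketched in a few lines. This is an editorial illustration, not the patentee's implementation: it assumes a single-tap channel (pure delay plus phase rotation) and a Zadoff-Chu-style CAZAC sequence, and all function names are invented.

```python
import cmath

def cazac(n):
    # Zadoff-Chu-style CAZAC sequence (even length n): constant amplitude,
    # near-zero autocorrelation sidelobes -- the sequence of claims 9 and 20.
    return [cmath.exp(1j * cmath.pi * k * k / n) for k in range(n)]

def correlate_peak(rx, ref):
    # Cross-correlate the received samples against the known calibration
    # sequence and return the lag and complex gain at the peak (claims 2-3).
    n = len(ref)
    best_lag, best_gain = 0, 0j
    for lag in range(len(rx) - n + 1):
        c = sum(rx[lag + k] * ref[k].conjugate() for k in range(n)) / n
        if abs(c) > abs(best_gain):
            best_lag, best_gain = lag, c
    return best_lag, best_gain

def calibrate(rx, ref):
    # Per-receiver time offset and phase rotation, derived from one common
    # calibration sequence; running this per receiver aligns time and phase.
    lag, gain = correlate_peak(rx, ref)
    return lag, cmath.phase(gain)

# Simulate one receiver: the common sequence arrives 3 samples late and
# rotated by 30 degrees (a flat single-tap channel, for simplicity).
ref = cazac(16)
rot = cmath.exp(1j * cmath.pi / 6)
rx = [0j] * 3 + [s * rot for s in ref] + [0j] * 3
lag, phase = calibrate(rx, ref)
print(lag, round(phase, 4))  # → 3 0.5236
```

A real equalizer (claims 14-15) would additionally invert the estimated frequency response rather than just a single delay and phase, but the correlation peak shown here is the common starting point.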
10,754 | 10,754 | 15,706,169 | 2,616 | An electronic device includes one or more processors, a tag reader, a display, and a wireless communication circuit. The one or more processors can identify an item in an environment of the electronic device. The tag reader can read item information from a tag corresponding to the item. The wireless communication circuit can retrieve one or more overlays across a network from the item information. The one or more processors can present an overlay selected from the one or more overlays on the display. | 1. A method in an electronic device, the method comprising:
identifying, with one or more processors of the electronic device, an item; reading, with a tag reader operable with the one or more processors, item information from a tag associated with the item; retrieving, with a wireless communication device operable with the one or more processors, one or more overlays corresponding to the item information; and presenting, with the one or more processors, an overlay selected from the one or more overlays on a display of the electronic device. 2. The method of claim 1, further comprising capturing, with an imager operable with the one or more processors, an image of the item. 3. The method of claim 2, further comprising also presenting the image of the item on the display, wherein a presentation of the overlay overlaps the image of the item on the display. 4. The method of claim 1, wherein the one or more overlays comprises a plurality of overlays, further comprising:
receiving, with a user interface operable with the one or more processors, user input; and replacing, with the one or more processors, the overlay with another overlay selected from the plurality of overlays. 5. The method of claim 1, wherein the one or more overlays comprises a plurality of overlays, further comprising selecting the overlay from the plurality of overlays. 6. The method of claim 5, further comprising identifying one or more user behaviors stored within a memory of the electronic device, wherein the selecting the overlay from the plurality of overlays is a function of the one or more user behaviors. 7. The method of claim 5, further comprising identifying one or more user preferences stored within a memory of the electronic device, wherein the selecting the overlay from the plurality of overlays is a function of the one or more user preferences. 8. The method of claim 1, further comprising determining, with one or more sensors operable with the one or more processors, a distance of the electronic device from the item. 9. The method of claim 8, wherein the presenting comprises resizing the overlay as a function of the distance of the electronic device from the item. 10. The method of claim 1, wherein the overlay comprises a label for a product. 11. An electronic device, comprising:
one or more processors; a tag reader operable with the one or more processors; a display operable with the one or more processors; and a wireless communication circuit operable with the one or more processors;
the one or more processors identifying an item in an environment of the electronic device;
the tag reader reading item information from a tag corresponding to the item;
the wireless communication circuit retrieving one or more overlays from the item information; and
the one or more processors presenting an overlay selected from the one or more overlays on the display. 12. The electronic device of claim 11, further comprising an imager operable with the one or more processors, the imager capturing an image of the item, the one or more processors presenting the overlay by superimposing the overlay on the image of the item. 13. The electronic device of claim 12, the one or more processors further presenting alignment indicia on the display when capturing the image of the item. 14. The electronic device of claim 13, further comprising a beam steerer operable with the tag reader, the beam steerer directing electronic signals from the tag reader as a function of a location of the alignment indicia in the image of the item. 15. The electronic device of claim 12, further comprising one or more sensors operable with the one or more processors, the one or more processors determining a distance of the electronic device from the item and resizing one or both of the overlay or the image of the item as a function of the distance. 16. The electronic device of claim 11, the one or more overlays comprising a plurality of overlays, the one or more processors receiving user input from the display and presenting another overlay selected from the plurality of overlays in response to the user input. 17. The electronic device of claim 11, the one or more overlays comprising a plurality of overlays, the one or more processors selecting the overlay from the plurality of overlays as a function of one of one or more user preferences. 18. A method in an electronic device, the method comprising:
capturing, with an imager operable with one or more processors, one or more images of an item; reading, with a tag reader operable with the one or more processors, item information from a tag of the item; retrieving, with a wireless communication circuit, an overlay based upon the item information; and superimposing, on a display with the one or more processors, the overlay on the one or more images of the item. 19. The method of claim 18, wherein the superimposing comprises positioning the overlay within a perimeter boundary defined by the item in the one or more images. 20. The method of claim 18, further comprising changing the overlay from a first overlay to a second overlay. | 2,600 |
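The tag-read, overlay-retrieval, and distance-based resizing flow of claims 1, 8-9, and 18 can be mimicked in a short Python sketch. This is editorial illustration only: the in-memory dictionary stands in for the network lookup, the inverse-distance scaling law is an assumption (the claims only say the resizing is "a function of the distance"), and every name is hypothetical.

```python
# Hypothetical stand-in for the network-side overlay store of claims 1 and 18.
OVERLAY_SERVER = {"tag:cereal-001": ["nutrition-label.png", "price-badge.png"]}

def read_tag(tag_payload):
    # A real tag reader (e.g. RFID/NFC) would demodulate this from the tag.
    return tag_payload

def retrieve_overlays(item_info):
    # "retrieving, with a wireless communication circuit, one or more
    # overlays corresponding to the item information"
    return OVERLAY_SERVER.get(item_info, [])

def overlay_scale(base_size_px, ref_distance_m, distance_m):
    # Claims 8-9: resize the overlay as a function of the measured distance;
    # a simple inverse-distance law is assumed here.
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return base_size_px * ref_distance_m / distance_m

info = read_tag("tag:cereal-001")
overlays = retrieve_overlays(info)
print(overlays[0])                   # first overlay selected for display
print(overlay_scale(100, 1.0, 2.0))  # twice as far -> half the size: 50.0
```

Selecting among the retrieved overlays by user preference or behavior (claims 6-7 and 17) would simply replace the `overlays[0]` choice with a ranking step.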
10,755 | 10,755 | 15,613,631 | 2,685 | A method operates a counter device for a fluid which can be connected via a connecting device to a meter point having at least one line through which the fluid flows. A control device is configured, when the counter device is connected, via at least one operating parameter specific for the meter point and/or a meter point environment. When the counter device is taken into operation, the operating parameters are called up by a communication device on the counter device side communicating wirelessly by radio via a communication link with a passive communication device on the meter point side. The operating parameters are stored in a storage device of the passive communication device. | 1. A method for operating a counter device for a fluid which can be connected via a connecting device to a meter point having at least one line through which the fluid flows, which comprises the steps of:
providing the counter device with a control device, the control device being configured upon receiving at least one operating parameter specific for the meter point and/or a meter point environment; putting the counter device into an operational mode by obtaining operating parameters by means of a communication device on a counter device side communicating wirelessly by radio via a communication link with a passive communication device on a meter point side, the passive communication device having a storage device in which the operating parameters are stored; and using the operating parameters for configuring the control device. 2. The method according to claim 1, which further comprises:
providing the passive communication device on the meter point side as a radio frequency identification transponder and/or a near field communication device and a communication range of the communication link is less than 20 cm; and integrating the passive communication device on the meter point side in a nameplate of the meter point. 3. The method according to claim 1, wherein the communication link uses at least one of alternating magnetic fields or a frequency in a range from 13 to 14 MHz. 4. The method according to claim 1, which further comprises selecting the operating parameters from the group consisting of identification information items of the meter point, a starting count, and communication data being used for establishing a communication link with a data collection device for consumption data of the counter device. 5. The method according to claim 1, wherein during installation of the meter point, mounting the passive communication device on the meter point side at the meter point, wherein the operating parameters are stored in the passive communication device. 6. The method according to claim 1, wherein the counter device, during establishment of the communication link, transmits energy needed for operation of the passive communication device on the meter point side wirelessly to the passive communication device on the meter point side. 7. The method according to claim 1, wherein in a case of a presence of an updating signal in the counter device, transmitting altered, updated operating parameters via the communication link to the passive communication device on the meter point side which stores the altered, updated operating parameters in the storage device. 8. 
The method according to claim 7, wherein in a case of an exchange of the counter device an update of the operating parameters occurs in the passive communication device on the meter point side by an old counter device, initiated by a user, and, after installation of a new counter device, the new counter device, after the altered, updated operating parameters have been called up, is configured by the altered, updated operating parameters. 9. The method according to claim 2, wherein the communication link is less than 10 cm. 10. The method according to claim 1, wherein the communication link uses at least one of alternating magnetic fields or a frequency of 13.56 MHz. 11. A counter device, comprising:
a communication device disposed on a counter device side for wireless communication by radio with a passive communication device on a meter point side; and a control device configured for activating said communication device on the counter device side for calling up operating parameters from the passive communication device on the meter point side and for configuration of the counter device according to the operating parameters called up. 12. A counter system, comprising:
a passive communication device disposed on a meter point side and having a storage device for storing operating parameters; and a counter device having a counter communications device disposed on a counter device side for wireless communication by radio with said passive communication device on the meter point side, said counter device having a control device configured for activating said counter communication device on the counter device side for calling up the operating parameters from said passive communication device on the meter point side for configuring said counter device according to the operating parameters called up. | 2,600 |
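The commissioning flow claimed above (a counter device that powers a passive meter-point tag, calls up stored operating parameters, configures itself from them, and can write updated parameters back per claim 7) can be sketched in Python. This is a minimal illustrative model, not the patented implementation: `PassiveTag`, `CounterDevice`, and the parameter names (`meter_id`, `starting_count`, `collector`) are hypothetical stand-ins, and the 13.56 MHz radio link is simulated by a plain dictionary.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the passive NFC/RFID tag at the meter point.
# A real tag would be read over a short-range 13.56 MHz link; here its
# storage device is modeled as a dict.
@dataclass
class PassiveTag:
    storage: dict = field(default_factory=dict)

    def read(self) -> dict:
        # The counter device transmits energy wirelessly, then reads memory.
        return dict(self.storage)

    def write(self, params: dict) -> None:
        # Claim 7: altered, updated parameters are stored back on the tag.
        self.storage.update(params)

@dataclass
class CounterDevice:
    config: dict = field(default_factory=dict)

    def commission(self, tag: PassiveTag) -> None:
        # On being put into operation, call up the operating parameters
        # from the passive device and configure the control device.
        self.config = tag.read()

    def update_tag(self, tag: PassiveTag, changed: dict) -> None:
        # Push altered parameters both into local config and onto the tag,
        # so a replacement counter device can later commission from them.
        self.config.update(changed)
        tag.write(changed)

tag = PassiveTag({"meter_id": "MP-0042", "starting_count": 1250,
                  "collector": "dc://hub-7"})
meter = CounterDevice()
meter.commission(tag)
meter.update_tag(tag, {"starting_count": 1300})
```

Keeping the parameters on the meter-point tag rather than in the counter device is what makes the claimed counter-exchange scenario (claim 8) work: the new device simply re-reads the tag.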
10,756 | 10,756 | 15,799,404 | 2,626 | Imaging systems can often gather higher quality information about a field of view than the unaided human eye. For example, telescopes may magnify very distant objects, microscopes may magnify very small objects, and high frame-rate cameras may capture fast motion. The present disclosure includes devices and methods that provide real-time vision enhancement without the delay of replaying from storage media. The disclosed devices and methods may include a live view user interface with two or more interactive features or effects that may be controllable in real-time. Specifically, the disclosed devices and methods may include a live view display and image and other information enhancements, which utilize in-line computation and constant control. | 1. A method comprising:
based on image data from a camera, displaying, on a display of a computing device, a live view representation that shows (i) a first feature at a first portion of the live view representation and (ii) a second feature at a second portion of the live view representation; receiving, via an interface of the computing device, control input indicative of a swap effect, wherein the swap effect comprises causing (i) the first feature to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second feature to be shown at the first portion of the live view representation rather than at the second portion of the live view representation; and in response to receiving control input indicative of the swap effect, producing the swap effect in the live view representation in real-time. 2. The method of claim 1,
wherein the first feature comprises a first body part shown at the first portion of the live view representation, wherein the second feature comprises a second body part shown at the second portion of the live view representation, and wherein producing the swap effect causes (i) the first body part to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second body part to be shown at the first portion of the live view representation rather than at the second portion of the live view representation. 3. The method of claim 2, wherein the first body part is a first face, and wherein the second body part is a second face. 4. The method of claim 1, wherein the swap effect comprises a face swap effect. 5. The method of claim 1,
wherein the first feature comprises a first clothing item shown at the first portion of the live view representation, wherein the second feature comprises a second clothing item shown at the second portion of the live view representation, and wherein producing the swap effect causes (i) the first clothing item to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second clothing item to be shown at the first portion of the live view representation rather than at the second portion of the live view representation. 6. The method of claim 1, wherein receiving control input indicative of a swap effect comprises receiving control input indicative of a selected icon from a plurality of icons, and wherein the selected icon corresponds to the swap effect. 7. The method of claim 6, wherein the interface of the computing device comprises a touch-based control interface. 8. The method of claim 7, wherein the touch-based control interface comprises one or more of the following: (i) a touch-sensitive surface and (ii) a button. 9. The method of claim 1, further comprising:
receiving, via the interface of the computing device, control input indicative of a further effect that is different from the swap effect; and in response to receiving control input indicative of the further effect, producing the further effect in the live view representation in real-time. 10. The method of claim 9, wherein the swap effect and the further effect are produced concurrently in the live view representation in real-time. 11. The method of claim 9, wherein the further effect comprises at least one of: (i) a slow-motion effect, (ii) a speed-up effect, (iii) a bokeh effect, or (iv) a high dynamic range effect. 12. The method of claim 1, wherein producing the swap effect comprises producing the swap effect while receiving, via the interface, control input indicative of the swap effect, the method further comprising:
detecting that control input indicative of the swap effect is no longer being received via the interface; and in response to detecting that control input indicative of the swap effect is no longer being received via the interface, stopping the producing of the swap effect in the live view representation in real-time. 13. A system comprising:
a computing device including a camera, a display, and an interface; and a control system configured to: based on image data from the camera, display, on the display, a live view representation that shows (i) a first feature at a first portion of the live view representation and (ii) a second feature at a second portion of the live view representation; receive, via the interface, control input indicative of a swap effect, wherein the swap effect comprises causing (i) the first feature to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second feature to be shown at the first portion of the live view representation rather than at the second portion of the live view representation; and in response to receiving control input indicative of the swap effect, produce the swap effect in the live view representation in real-time. 14. The system of claim 13,
wherein the first feature comprises a first body part shown at the first portion of the live view representation, wherein the second feature comprises a second body part shown at the second portion of the live view representation, and wherein producing the swap effect causes (i) the first body part to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second body part to be shown at the first portion of the live view representation rather than at the second portion of the live view representation. 15. The system of claim 14, wherein the first body part is a first face, and wherein the second body part is a second face. 16. The system of claim 13, wherein the swap effect comprises a face swap effect. 17. The system of claim 13,
wherein the first feature comprises a first clothing item shown at the first portion of the live view representation, wherein the second feature comprises a second clothing item shown at the second portion of the live view representation, and wherein producing the swap effect causes (i) the first clothing item to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second clothing item to be shown at the first portion of the live view representation rather than at the second portion of the live view representation. 18. A non-transitory computer readable medium having stored therein instructions executable by one or more processors to cause a computing device to perform functions comprising:
based on image data from a camera, displaying, on a display of the computing device, a live view representation that shows (i) a first feature at a first portion of the live view representation and (ii) a second feature at a second portion of the live view representation; receiving, via an interface of the computing device, control input indicative of a swap effect, wherein the swap effect comprises causing (i) the first feature to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second feature to be shown at the first portion of the live view representation rather than at the second portion of the live view representation; and in response to receiving control input indicative of the swap effect, producing the swap effect in the live view representation in real-time. 19. The non-transitory computer readable medium of claim 18,
wherein the first feature comprises a first body part shown at the first portion of the live view representation, wherein the second feature comprises a second body part shown at the second portion of the live view representation, and wherein producing the swap effect causes (i) the first body part to be shown at the second portion of the live view representation rather than at the first portion of the live view representation and (ii) the second body part to be shown at the first portion of the live view representation rather than at the second portion of the live view representation. 20. The non-transitory computer readable medium of claim 19, wherein the first body part is a first face, and wherein the second body part is a second face. | 2,600 |
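The core of the claimed swap effect, showing the feature from a first portion of the live view at a second portion and vice versa, can be sketched with NumPy array slicing. This is a toy sketch under stated assumptions: the two portions are equal-sized axis-aligned boxes and the "features" are flat pixel blocks; a real face-swap pipeline would first detect and align the regions.

```python
import numpy as np

def swap_regions(frame, box_a, box_b):
    """Produce the swap effect: the content of box_a is shown at box_b and
    vice versa. Boxes are (row, col, height, width) and must match in size."""
    r1, c1, h, w = box_a
    r2, c2, h2, w2 = box_b
    assert (h, w) == (h2, w2), "regions must match in size for a direct swap"
    out = frame.copy()
    # Read both regions from the original frame, then write them crosswise.
    out[r1:r1 + h, c1:c1 + w] = frame[r2:r2 + h, c2:c2 + w]
    out[r2:r2 + h, c2:c2 + w] = frame[r1:r1 + h, c1:c1 + w]
    return out

# Toy 4x4 grayscale "live view" frame with two distinct 2x2 features.
frame = np.zeros((4, 4), dtype=np.uint8)
frame[0:2, 0:2] = 10    # first feature at the first portion
frame[2:4, 2:4] = 200   # second feature at the second portion
swapped = swap_regions(frame, (0, 0, 2, 2), (2, 2, 2, 2))
```

Because `swap_regions` returns a new array per frame, it can run inside the camera's live-view loop, applied only while the swap control input is being received (claim 12) and stopped by simply passing frames through untouched.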
10,757 | 10,757 | 16,040,381 | 2,699 | An object management system, an aircraft design system, and a method for managing an object. A three-dimensional environment with a model of an object and an avatar representing a human operator from a viewpoint relative to the avatar is displayed on a display system. A motion of the human operator is detected. An interaction between the avatar and the model of the object is identified in real time using information about motions of the human operator that are detected in real time. The interaction changes a group of dimensions in the model of the object. Further, the interaction between the avatar and the model of the object in the three-dimensional environment is displayed on the display system, enabling design changes in the model of the object made by the human operator. | 1-25. (canceled) 26. An object management system comprising:
a model manager having:
at least one storage device for storing program code; and
at least one processor for processing the program code to:
create an avatar representing a human operator;
place the avatar in a three-dimensional environment with a model of an object;
display the three-dimensional environment with the model of the object and the avatar from a viewpoint relative to the avatar on a display system;
identify an interaction between the avatar and the model of the object in real time using information about motions of the human operator detected in real time from a motion capture system;
determine whether the interaction constitutes a design change in the model of the object; and
implement the design change in the model of the object in response to determining that the interaction constitutes a design change in the model of the object thereby enabling the human operator to make design changes in the model of the object using the avatar; and
wherein the object is an aircraft, and wherein the interaction tests a usability of controls in the aircraft. 27. The object management system of claim 26, wherein the model manager updates a file storing the model of the object such that the file reflects a change in a group of dimensions corresponding to the design change in the model of the object in response to determining that the interaction constitutes a design change in the model of the object. 28. The object management system of claim 26, wherein the interaction is selected from one of moving a portion of the model of the object or displacing the portion of the model of the object. 29. The object management system of claim 26, wherein the avatar has dimensions of a person that performs ergonomic testing of the object. 30. The object management system of claim 26, wherein the model manager:
receives live information from a live environment in which the object is located; identifies an effect of the live information on the model of the object based on the live information; and displays the effect on the model of the object in the three-dimensional environment. 31. The object management system of claim 30, wherein the live information includes at least one of modulation data, temperature, acceleration, velocity, translation, vibration data, force, or acoustic data. 32. The object management system of claim 26 further comprising:
a manufacturing system that manufactures objects; and
a control system that controls operation of the manufacturing system using the model. 33. The object management system of claim 26, wherein the three-dimensional environment is selected from one of a virtual reality environment and an augmented reality environment. 34. The object management system of claim 26, wherein the display system is selected from at least one of a display device, a computer monitor, glasses, a head-mounted display device, a tablet computer, a mobile phone, a projector, a heads-up display, a holographic display system, or a virtual retinal display. 35. The object management system of claim 26, wherein the model is selected from one of a computer-aided design model, a finite element method model, and a computer-aided model. 36. The object management system of claim 26, wherein the interaction constitutes a design change in the model of the object if the interaction displaces a portion of the model of the object that is not designed to be movable. 37. A method for managing an object, the method comprising:
displaying a three-dimensional environment with a model of the object and an avatar representing a human operator from a viewpoint relative to the avatar on a display system; detecting a motion of the human operator; identifying an interaction between the avatar and the model of the object in real time using information about motions of the human operator that are detected in real time; determining whether the interaction constitutes a design change in the model of the object; and implementing the design change in the model of the object in response to determining that the interaction constitutes a design change in the model of the object thereby enabling design changes in the model of the object made by the human operator; and wherein the object is an aircraft, and wherein the interaction tests a usability of controls in the aircraft. 38. The method of claim 37, wherein the interaction changes the group of dimensions in the model of the object in response to determining that the interaction constitutes a design change in the model of the object and further comprises:
updating a file storing the model of the object such that the file reflects a change in the group of dimensions in the model of the object. 39. The method of claim 37, wherein the interaction is selected from one of moving a portion of the model of the object or displacing the portion of the model of the object. 40. The method of claim 37, wherein the avatar has dimensions of a person that performs ergonomic testing of the object. 41. The method of claim 37 further comprising:
receiving live information from an environment in which the object is located;
identifying a change in the model of the object from applying the live information to the model of the object; and
displaying the change in the model of the object in the three-dimensional environment. 42. The method of claim 41, wherein the live information includes at least one of modulation data, temperature, acceleration, velocity, translation, vibration data, force, or acoustic data. 43. The method of claim 37 further comprising:
manufacturing objects in a manufacturing system using the model of the object. 44. The method of claim 37, wherein the three-dimensional environment is selected from one of a virtual reality environment and an augmented reality environment. 45. The method of claim 37, wherein the display system is selected from at least one of a display device, a computer monitor, glasses, a head-mounted display device, a tablet computer, a mobile phone, a projector, a heads-up display, a holographic display system, or a virtual retinal display. 46. The method of claim 37, wherein the model is selected from one of a computer-aided design model, a finite element method model, and a computer-aided model. 47. The method of claim 37, wherein the interaction constitutes a design change in the model of the object if the interaction displaces a portion of the model of the object that is not designed to be movable. | | 2,600 |
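Claims 36 and 47 in the record above reduce the "does this interaction count as a design change?" decision to a simple membership test: an interaction is a design change when it displaces a portion of the model that is not designed to be movable. A minimal sketch of that predicate, purely as an annotation for readers of this dataset (function and field names are hypothetical, not taken from the patent):

```python
# Hypothetical illustration of the design-change test in claims 36/47:
# an interaction is a design change iff it displaces a part of the model
# that is not designed to be movable.

def is_design_change(displaced_part: str, movable_parts: set) -> bool:
    """Return True when the displaced part is not designed to be movable."""
    return displaced_part not in movable_parts

# Example: displacing a fixed bulkhead is a design change; moving a
# control yoke (designed to be movable) is not.
print(is_design_change("bulkhead", {"yoke", "seat"}))  # → True
print(is_design_change("yoke", {"yoke", "seat"}))      # → False
```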
10,758 | 10,758 | 15,970,729 | 2,646 | A device and method for altitude-based interference mitigation in an aerial vehicle. The device includes a transceiver, an altimeter, and an electronic processor coupled to the transceiver and the altimeter. The electronic processor is configured to determine, via the altimeter, an altitude level of the aerial vehicle, compare the altitude level to an altitude threshold, and, in response to the altitude level exceeding the altitude threshold, control a radio frequency characteristic of the transceiver to mitigate signal interference based on the altitude level. | 1. A device for altitude-based interference mitigation of an aerial vehicle, the device comprising:
a transceiver; an altimeter; and an electronic processor coupled to the transceiver and the altimeter, the electronic processor configured to
determine, via the altimeter, an altitude level of the aerial vehicle,
compare the altitude level to an altitude threshold, and
in response to the altitude level exceeding the altitude threshold, control a radio frequency characteristic of the transceiver to mitigate signal interference of a data transmission based on the altitude level by
determining whether a signal quality of the data transmission exceeds an error threshold,
determining an attenuation factor when the signal quality of the data transmission exceeds the error threshold,
applying the attenuation factor to a receiver gain of the device, reducing the receiver gain,
while the signal quality of the data transmission exceeds the error threshold, incrementally increasing the attenuation factor until the signal quality no longer exceeds the error threshold or until a maximum attenuation factor is applied, and
tuning, when the signal quality continues to exceed the error threshold after the maximum attenuation factor is applied, the transceiver to a frequency offset from a center frequency to which the transceiver was originally tuned,
wherein the attenuation factor is a magnitude, and wherein the maximum attenuation factor is based on a strength of radio frequency communication between the device and a radio base station. 2. (canceled) 3. The device of claim 1, wherein the electronic processor determines the attenuation factor based on at least one selected from the group consisting of an error rate, the altitude level, and a range quality of a resulting signal. 4. The device of claim 1, wherein the electronic processor is further configured to disable site roaming while the attenuation factor is being determined. 5. The device of claim 1, wherein the error threshold is determined based on at least one selected from the group consisting of a bit error rate threshold, a cyclic redundancy check threshold, and a channel content mismatch. 6. (canceled) 7. The device of claim 1, wherein controlling the radio frequency characteristic further includes
determining a transmission power range based on the altitude level; determining whether a transmission power gain of the transceiver is within the power range; adjusting, in response to the transmission power gain being outside the power range, the transmission power gain to a second transmission power gain, wherein the second power gain is within a predetermined power range; and transmitting a data transmission from the transceiver at the second transmission power gain. 8. The device of claim 1, wherein the electronic processor is further configured to affect the radio frequency characteristic of the transceiver to mitigate signal interference based on the altitude level until at least one condition is met, the at least one condition selected from the group consisting of
a signal quality of a second data transmission received exceeds an error threshold; a received signal strength indicator level of the second data transmission fails to exceed a received signal strength indicator threshold; the device ends a first communication link with a first site and establishes a second communication link with a second site; and the altitude level no longer exceeds the altitude threshold. 9. The device of claim 1, wherein controlling the radio frequency characteristic further includes adjusting a transmission power gain of the transceiver. 10. The device of claim 1, wherein the electronic processor is further configured to
determine, via the altimeter, a second altitude level, compare the second altitude level to an altitude threshold, and in response to the second altitude level being below the altitude threshold, reverse the control of the radio frequency characteristic. 11. A method for altitude-based interference mitigation of a communication device of an aerial vehicle, the method comprising:
determining, by an altimeter, an altitude level of the aerial vehicle; comparing, by an electronic processor, the altitude level to an altitude threshold; and in response to the altitude level exceeding the altitude threshold, controlling, by the electronic processor, a radio frequency characteristic of a transceiver of the communication device to mitigate signal interference of a data transmission based on the altitude level by
determining whether a signal quality of the data transmission exceeds an error threshold,
determining an attenuation factor when the signal quality of the data transmission exceeds the error threshold,
applying the attenuation factor to a receiver gain of the device, reducing the receiver gain,
while the signal quality of the data transmission exceeds the error threshold, incrementally increasing the attenuation factor until the signal quality no longer exceeds the error threshold or until a maximum attenuation factor is applied, and
tuning, when the signal quality continues to exceed the error threshold after the maximum attenuation factor is applied, the transceiver to a frequency offset from a center frequency to which the transceiver was originally tuned,
wherein the attenuation factor is a magnitude, and wherein the maximum attenuation factor is based on a strength of radio frequency communication between the device and a radio base station. 12. (canceled) 13. The method of claim 11, wherein the attenuation factor is determined based on at least one selected from the group consisting of an error rate, the altitude level, and a range quality of a resulting signal. 14. The method of claim 11, the method further comprising disabling, via the electronic processor, site roaming while the attenuation factor is being determined. 15. The method of claim 11, wherein the error threshold is determined based on at least one selected from the group consisting of a bit error rate threshold, a cyclic redundancy check threshold, and a channel content mismatch. 16. (canceled) 17. The method of claim 11, wherein controlling the radio frequency characteristic further includes
determining a transmission power gain range based on the altitude level; determining whether a transmission power gain of the transceiver is within the power range; adjusting, in response to the transmission power gain being outside the power range, the transmission power gain to a second transmission power gain, wherein the second power gain is within a predetermined power range; and transmitting a data transmission from the transceiver at the second transmission power gain. 18. The method of claim 11, wherein affecting the radio frequency characteristic of the transceiver to mitigate signal interference based on the altitude level until at least one condition is met, the at least one condition selected from the group consisting of
a signal quality of a second data transmission received exceeds an error threshold; a received signal strength indicator level of the second data transmission fails to exceed a received signal strength indicator threshold; the communication device ends a first communication link with a first site and establishes a second communication link with a second site; and the altitude level no longer exceeds the altitude threshold. 19. The method of claim 11, wherein controlling the radio frequency characteristic further includes adjusting a transmission power gain of the transceiver. 20. The method of claim 11, wherein the method further includes
determining, via the altimeter, a second altitude level, comparing, via the electronic processor, the second altitude level to an altitude threshold, and
in response to the second altitude level being below the altitude threshold, reversing, via the electronic processor, the control of the radio frequency characteristic. | | 2,600 |
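Claims 1 and 11 of the record above describe an iterative mitigation loop: apply an attenuation factor to the receiver gain, increase it step by step while the error threshold is still exceeded, and fall back to retuning the transceiver to an offset frequency once the maximum attenuation has been reached. A minimal sketch of that loop, assuming a caller-supplied error metric; all names, units, and step sizes are hypothetical, not from the patent:

```python
# Hypothetical sketch of the incremental-attenuation loop in claims 1/11.
# `error_at(atten_db)` is an assumed callable returning the signal-error
# metric observed with `atten_db` of receiver-gain attenuation applied.

def mitigate_interference(error_at, error_threshold, max_atten_db, step_db=1.0):
    """Raise attenuation until the error metric no longer exceeds the
    threshold; return (attenuation_applied, retune_needed). retune_needed
    is True when even the maximum attenuation fails, i.e. the transceiver
    should be tuned to a frequency offset from its original center."""
    atten_db = 0.0
    while error_at(atten_db) > error_threshold:
        if atten_db >= max_atten_db:
            return atten_db, True  # retune to an offset frequency
        atten_db = min(atten_db + step_db, max_atten_db)
    return atten_db, False

# Example: with an error that falls 1 unit per dB of attenuation, 5 dB
# suffices; with an error that never improves, retuning is signaled.
print(mitigate_interference(lambda a: 10 - a, 5, 20))  # → (5.0, False)
print(mitigate_interference(lambda a: 100, 5, 3))      # → (3.0, True)
```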
10,759 | 10,759 | 15,534,420 | 2,685 | A system includes a downhole tool configured to transmit uplink data. The system also includes a surface controller configured to receive the uplink data and to transmit downlink data to the downhole tool. The system also includes a plurality of acoustic telemetry modules deployed downhole, wherein each of the modules selectively operates in a first communication mode in which its transducers simultaneously convey uplink data and downlink data, and in a second communication mode in which its transducers simultaneously convey only uplink data or only downlink data. | 1. A system that comprises:
a downhole tool configured to transmit uplink data; a surface controller configured to receive the uplink data and to transmit downlink data to the downhole tool; and a plurality of acoustic telemetry modules deployed downhole, wherein each of the modules selectively operates in a first communication mode in which its transducers simultaneously convey uplink data and downlink data, and in a second communication mode in which its transducers simultaneously convey only uplink data or only downlink data. 2. The system of claim 1, wherein the second communication mode provides an increased uplink data rate or an increased downlink data rate relative to the first communication mode. 3. The system of claim 1, wherein the second communication mode provides an increased uplink data redundancy or an increased downlink data redundancy relative to the first communication mode. 4. The system of claim 1, wherein each of the modules is configured to convey uplink data or downlink data using both compressional waves and shear waves. 5. The system of claim 1, wherein each of the modules comprises transducers positioned on different sides of a ring or tubular. 6. The system of claim 5, wherein each of the modules further comprises acoustic dampening material surrounding at least one of its transducers. 7. The system of claim 5, wherein each of the modules further comprises acoustic dampening material integrated with the ring or tubular, and positioned between adjacent transducers. 8. The system of claim 1, wherein each of the modules attaches to an interior of a drill string or casing. 9. The system of claim 1, wherein each of the modules attaches to an exterior of a drill string or casing. 10. The system of claim 1, wherein each of the modules is integrated with a drill string or casing collar. 11. The system of claim 1, further comprising short hop telemetry modules between the downhole tool and the surface controller. 12. 
The system of claim 1, wherein each of the modules is configured to use a drill string or casing as an acoustic channel for conveying the uplink data or downlink data. 13. The system of claim 1, wherein the downhole tool is part of a bottomhole assembly (BHA), wherein the uplink data corresponds to measurement-while-drilling (MWD) or logging-while-drilling (LWD) data, and wherein the downlink data corresponds to steering commands for the BHA. 14. A method that comprises:
deploying a tool downhole; deploying a plurality of acoustic telemetry modules downhole, wherein each of the modules supports a first communication mode in which its transducers simultaneously convey uplink data and downlink data and a second communication mode in which its transducers simultaneously convey only uplink data or only downlink data; using the plurality of acoustic telemetry modules to convey uplink data or downlink data between the tool and a surface controller. 15. The method of claim 14, further comprising switching between the first communication mode and the second communication mode based on a trigger event. 16. The method of claim 14, further comprising selecting a switching schedule for the first communication mode and the second communication mode. 17. The method of claim 16, further comprising adjusting the switching schedule for the first communication mode and the second communication mode. 18. The method of claim 14, wherein using the plurality of acoustic telemetry modules to convey uplink data or downlink data comprises using a drill string or casing as an acoustic channel and providing acoustic dampening between or around acoustic transducers of each module. 19. The method of claim 14, wherein using the plurality of acoustic telemetry modules to convey uplink data or downlink data involves use of both compressional waves and shear waves. 20. The method of claim 14, further comprising attaching each module along a drill string or casing. | A system includes a downhole tool configured to transmit uplink data. The system also includes a surface controller configured to receive the uplink data and to transmit downlink data to the downhole tool. 
The system also includes a plurality of acoustic telemetry modules deployed downhole, wherein each of the modules selectively operates in a first communication mode in which its transducers simultaneously convey uplink data and downlink data, and in a second communication mode in which its transducers simultaneously convey only uplink data or only downlink data.1. A system that comprises:
a downhole tool configured to transmit uplink data; a surface controller configured to receive the uplink data and to transmit downlink data to the downhole tool; and a plurality of acoustic telemetry modules deployed downhole, wherein each of the modules selectively operates in a first communication mode in which its transducers simultaneously convey uplink data and downlink data, and in a second communication mode in which its transducers simultaneously convey only uplink data or only downlink data. 2. The system of claim 1, wherein the second communication mode provides an increased uplink data rate or an increased downlink data rate relative to the first communication mode. 3. The system of claim 1, wherein the second communication mode provides an increased uplink data redundancy or an increased downlink data redundancy relative to the first communication mode. 4. The system of claim 1, wherein each of the modules is configured to convey uplink data or downlink data using both compressional waves and shear waves. 5. The system of claim 1, wherein each of the modules comprises transducers positioned on different sides of a ring or tubular. 6. The system of claim 5, wherein each of the modules further comprises acoustic dampening material surrounding at least one of its transducers. 7. The system of claim 5, wherein each of the modules further comprises acoustic dampening material integrated with the ring or tubular, and positioned between adjacent transducers. 8. The system of 1, wherein each of the modules attach to an interior of a drill string or casing. 9. The system of 1, wherein each of the modules attach to an exterior of a drill string or casing. 10. The system of 1, wherein each of the modules is integrated with a drill string or casing collar. 11. The system of 1, further comprising short hop telemetry modules between the downhole tool and the surface controller. 12. 
The system of claim 1, wherein each of the modules is configured to use a drill string or casing as an acoustic channel for conveying the uplink data or downlink data. 13. The system of claim 1, wherein the downhole tool is part of a bottomhole assembly (BHA), wherein the uplink data corresponds to measurement-while-drilling (MWD) or logging-while-drilling (LWD) data, and wherein the downlink data corresponds to steering commands for the BHA. 14. A method that comprises:
deploying a tool downhole; deploying a plurality of acoustic telemetry modules downhole, wherein each of the modules supports a first communication mode in which its transducers simultaneously convey uplink data and downlink data and a second communication mode in which its transducers simultaneously convey only uplink data or only downlink data; using the plurality of acoustic telemetry modules to convey uplink data or downlink data between the tool and a surface controller. 15. The method of claim 14, further comprising switching between the first communication mode and the second communication mode based on a trigger event. 16. The method of claim 14, further comprising selecting a switching schedule for the first communication mode and the second communication mode. 17. The method of claim 16, further comprising adjusting the switching schedule for the first communication mode and the second communication mode. 18. The method of claim 14, wherein using the plurality of acoustic telemetry modules to convey uplink data or downlink data comprises using a drill string or casing as an acoustic channel and providing acoustic dampening between or around acoustic transducers of each module. 19. The method of claim 14, wherein using the plurality of acoustic telemetry modules to convey uplink data or downlink data involves use of both compressional waves and shear waves. 20. The method of claim 14, further comprising attaching each module along a drill string or casing. | 2,600 |
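The dual-mode behavior recited in the telemetry claims above (a first mode where transducers carry uplink and downlink data simultaneously, a second mode carrying only one direction, with switching on a trigger event per claim 15) can be sketched as follows. This is a hypothetical illustration; the class, method, and event names are not from the patent, which discloses no implementation.

```python
class AcousticTelemetryModule:
    """Hypothetical sketch of the dual-mode module in claims 1 and 14-17.

    DUPLEX:  transducers convey uplink and downlink data simultaneously.
    SIMPLEX: transducers convey only uplink data or only downlink data,
             trading direction count for rate or redundancy (claims 2-3).
    """
    DUPLEX = "duplex"
    SIMPLEX = "simplex"

    def __init__(self):
        self.mode = self.DUPLEX          # first communication mode
        self.simplex_direction = None    # "uplink" or "downlink" when SIMPLEX

    def on_trigger(self, event, direction="uplink"):
        # Claim 15: switch between the two modes based on a trigger event.
        # The event names here are illustrative assumptions.
        if event == "need_high_rate":
            self.mode = self.SIMPLEX
            self.simplex_direction = direction
        elif event == "resume_duplex":
            self.mode = self.DUPLEX
            self.simplex_direction = None

    def directions_carried(self):
        # Which data directions the module's transducers currently convey.
        if self.mode == self.DUPLEX:
            return ("uplink", "downlink")
        return (self.simplex_direction,)


m = AcousticTelemetryModule()
assert m.directions_carried() == ("uplink", "downlink")
m.on_trigger("need_high_rate", direction="downlink")
print(m.mode, m.directions_carried())  # simplex ('downlink',)
```

A switching schedule (claims 16-17) would simply call `on_trigger` at preset times instead of on events.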
10,760 | 10,760 | 15,875,960 | 2,652 | A prosthetic support, including a structure configured to apply a clamping force to a head of a recipient while extending about a back of at least one of a head or neck of the recipient such that output generated by a device supported by the structure is directed into skin of the recipient at a location behind an ear canal of the recipient that covers the mastoid bone of the recipient. | 1. A prosthetic support, comprising:
a structure configured to apply a clamping force to a head of a recipient while extending about a back of at least one of a head or neck of the recipient such that output generated by a device supported by the structure is directed into skin of the recipient at a location behind an ear canal of the recipient that covers the temporal bone of the recipient. 2. The prosthetic support of claim 1, wherein:
the structure is elastically flexibly biased such that the clamping force results from deformation of the structure due to interference of the structure with the head of the recipient relative to a geometry of the structure in a relaxed state. 3. The prosthetic support of claim 2, wherein:
the structure is generally “U” shaped, “C” shaped or segmented circle shaped in the relaxed state. 4. The prosthetic support of claim 1, further comprising:
a skin interface portion that is at least one of part of or directly attached to the structure, the skin interface portion being located on the support such that it abuts skin of the recipient at a location behind an ear canal of the recipient covering the temporal bone of the recipient when the clamping force is applied to the head of the recipient. 5. The prosthetic support of claim 4, wherein:
at least a substantial amount of the clamping force applied to the head of the recipient is applied through the skin interface portion. 6. The prosthetic support of claim 4, wherein:
the output includes vibrations generated by a bone conduction device; the skin interface portion is configured to transcutaneously transmit the vibrations into the skin of the recipient, from which the vibrations enter the temporal bone of the recipient, in amounts to provoke a hearing percept in the recipient, the vibrations traveling through the skin interface portion. 7. The prosthetic support of claim 1, further comprising:
a device attachment portion attached to or part of the structure, wherein the attachment portion is configured to removably attach the device to the prosthetic support, wherein the device is a medical device. 8. The prosthetic support of claim 7, wherein:
the medical device is a bone conduction device including a coupling apparatus; and the device attachment portion is configured to removably attach the coupling apparatus of the bone conduction device to the prosthetic support. 9. The prosthetic support of claim 8, wherein:
the attachment portion is configured to removably snap couple the coupling apparatus to the support. 10. The prosthetic support of claim 1, wherein the device is a bone conduction device that generates vibrations and the output includes the generated vibrations, the prosthetic support further comprising:
a skin interface portion that is part of or directly attached to the structure such that it abuts skin covering the temporal bone at a location behind an ear canal of the recipient, wherein a substantial amount of the clamping is applied via the skin interface portion, wherein the prosthetic support is configured to conduct the generated vibrations from the bone conduction device, through the skin interface portion and into the skin of the recipient, from which the vibrations enter the temporal bone of the recipient, in amounts to provoke a hearing percept in the recipient. 11. The prosthetic support of claim 10, further comprising:
a device attachment portion attached to or part of the structure, wherein the attachment portion is configured to removably attach the bone conduction device to the prosthetic support, wherein the skin interface portion and the device attachment portion are part of a modular assembly movably attached to the structure. 12. The prosthetic support of claim 7, further comprising:
a second device attachment portion at least one of attached to or part of the structure, wherein the second device attachment portion is configured to removably attach a second device to the prosthetic support, wherein the second device attachment portion is located on the structure substantially symmetrical with respect to the device attachment portion, and
wherein the second device is a second medical device. 13. The prosthetic support of claim 12, wherein:
the medical device is a first bone conduction device including a first coupling apparatus; the device attachment portion is configured to removably attach the first coupling apparatus of the first bone conduction device to the support; the second medical device is a second bone conduction device including a second coupling apparatus; and the second device attachment portion is configured to removably attach the second coupling apparatus of the second bone conduction device to the support. 14. The prosthetic support of claim 4, wherein:
the support is configured to enable a recipient to adjust a location of the skin interface portion relative to the structure. 15. The prosthetic support of claim 14, wherein:
the skin interface portion is part of a module that is releasably secured to a given location of the structure, the module including a device attachment portion configured to removably attach the device to the prosthetic support; the support is configured to enable the module to be released from securement to the given location of the structure and moved to and releasably secured to another location of the structure, thereby enabling the recipient to adjust a location of the skin interface portion relative to the structure; and the device is a medical device. 16. The prosthetic support of claim 14, wherein:
the skin interface portion is slidably attached to the structure; and the support is configured with at least one of a friction fit or an interference fit between the skin interface portion and the structure that enables the skin interface portion to slide along the structure upon application of sufficient force to the skin interface portion to overcome the fit, thereby enabling the recipient to adjust a location of the skin interface portion relative to the structure. 17. The prosthetic support of claim 1, wherein:
the structure includes a plurality of sub-structures, at least one of which is spatially adjustable relative to the other. 18. The prosthetic support of claim 17, wherein:
the plurality of sub-structures include at least a first sub-structure and a second sub-structure; and the first sub-structure is configured to telescope towards and away from the second sub-structure to enable a location of a component of the prosthetic support linked to the first sub-structure to be moved relative to the second sub-structure. 19. The prosthetic support of claim 4, wherein the structure comprises:
a first sub-structure on which the skin interface portion is mounted; and a second sub-structure, wherein the first sub-structure is configured to telescope towards and away from the second sub-structure to enable a location of the skin interface portion to be moved relative to the second sub-structure. 20. The prosthetic support of claim 4, wherein:
the device is a bone conduction device including a coupling apparatus; and the skin interface portion comprises:
an attachment portion configured to attach the coupling apparatus of the bone conduction device to the skin interface portion;
a body portion configured to connect the skin interface portion to the structure; and
an adhesive portion configured to adhesively removably retain the skin interface portion to skin of the recipient. 21. The prosthetic support of claim 1, wherein the output is configured to at least one of stimulate an organ or stimulate a device implanted in the recipient. 22. A medical device, comprising:
the prosthetic support of claim 1; and the device, wherein the device includes a telecoil, wherein the clamping force is sufficient to hold the telecoil in proximity to the skin to place the telecoil into transcutaneous electromagnetic communication with an implanted medical device implanted beneath the skin of the recipient. 23. A headset for a bone conduction device, comprising:
a headset configured to support at least one bone conduction device and configured to provide a clamping force reactive against a head of the recipient sufficient to transmit vibrations from the bone conduction device into skin of the recipient at a location where the skin covers the temporal bone behind an ear canal of the recipient, wherein the headset is configured such that a center of gravity of the headset, during normal use, is located behind and, with respect to a vertical direction, at least one of about level with or below the location. 24. The headset of claim 23, wherein:
the headset is configured to maintain a position of the headset, relative to the recipient, such that the vibrations are transmitted into the skin of the recipient without a component extending about a top of the head. 25. The headset of claim 24, wherein:
the headset is configured to maintain the position while the recipient is subjected to 5 G-forces in the vertical direction. 26. The headset of claim 23, wherein:
the headset is configured to snap couple to the bone conduction device supported by the headset. 27. The headset of claim 23, wherein:
the headset includes a structure configured to extend about a back of at least one of a head or neck of the recipient; an attachment portion configured to removably attach the at least one bone conduction device to the support; and the headset is configured such that the attachment portion articulates relative to the structure. 28. The headset of claim 23, wherein:
the headset is configured to substantially vibrationally isolate the headset from the vibrations. 29. A hearing prosthesis, comprising:
a bone conduction device; and means for supporting the bone conduction device such that vibrations from the bone conduction device are transferred into skin of a recipient of the bone conduction device covering the temporal bone at a location behind an ear canal of the recipient, wherein the means is completely external to the recipient. 30. The hearing prosthesis of claim 29, wherein:
the bone conduction device is removably snap-coupled to the means for supporting the bone conduction device. | A prosthetic support, including a structure configured to apply a clamping force to a head of a recipient while extending about a back of at least one of a head or neck of the recipient such that output generated by a device supported by the structure is directed into skin of the recipient at a location behind an ear canal of the recipient that covers the mastoid bone of the recipient.1. A prosthetic support, comprising:
a structure configured to apply a clamping force to a head of a recipient while extending about a back of at least one of a head or neck of the recipient such that output generated by a device supported by the structure is directed into skin of the recipient at a location behind an ear canal of the recipient that covers the temporal bone of the recipient. 2. The prosthetic support of claim 1, wherein:
the structure is elastically flexibly biased such that the clamping force results from deformation of the structure due to interference of the structure with the head of the recipient relative to a geometry of the structure in a relaxed state. 3. The prosthetic support of claim 2, wherein:
the structure is generally “U” shaped, “C” shaped or segmented circle shaped in the relaxed state. 4. The prosthetic support of claim 1, further comprising:
a skin interface portion that is at least one of part of or directly attached to the structure, the skin interface portion being located on the support such that it abuts skin of the recipient at a location behind an ear canal of the recipient covering the temporal bone of the recipient when the clamping force is applied to the head of the recipient. 5. The prosthetic support of claim 4, wherein:
at least a substantial amount of the clamping force applied to the head of the recipient is applied through the skin interface portion. 6. The prosthetic support of claim 4, wherein:
the output includes vibrations generated by a bone conduction device; the skin interface portion is configured to transcutaneously transmit the vibrations into the skin of the recipient, from which the vibrations enter the temporal bone of the recipient, in amounts to provoke a hearing percept in the recipient, the vibrations traveling through the skin interface portion. 7. The prosthetic support of claim 1, further comprising:
a device attachment portion attached to or part of the structure, wherein the attachment portion is configured to removably attach the device to the prosthetic support, wherein the device is a medical device. 8. The prosthetic support of claim 7, wherein:
the medical device is a bone conduction device including a coupling apparatus; and the device attachment portion is configured to removably attach the coupling apparatus of the bone conduction device to the prosthetic support. 9. The prosthetic support of claim 8, wherein:
the attachment portion is configured to removably snap couple the coupling apparatus to the support. 10. The prosthetic support of claim 1, wherein the device is a bone conduction device that generates vibrations and the output includes the generated vibrations, the prosthetic support further comprising:
a skin interface portion that is part of or directly attached to the structure such that it abuts skin covering the temporal bone at a location behind an ear canal of the recipient, wherein a substantial amount of the clamping is applied via the skin interface portion, wherein the prosthetic support is configured to conduct the generated vibrations from the bone conduction device, through the skin interface portion and into the skin of the recipient, from which the vibrations enter the temporal bone of the recipient, in amounts to provoke a hearing percept in the recipient. 11. The prosthetic support of claim 10, further comprising:
a device attachment portion attached to or part of the structure, wherein the attachment portion is configured to removably attach the bone conduction device to the prosthetic support, wherein the skin interface portion and the device attachment portion are part of a modular assembly movably attached to the structure. 12. The prosthetic support of claim 7, further comprising:
a second device attachment portion at least one of attached to or part of the structure, wherein the second device attachment portion is configured to removably attach a second device to the prosthetic support, wherein the second device attachment portion is located on the structure substantially symmetrical with respect to the device attachment portion, and
wherein the second device is a second medical device. 13. The prosthetic support of claim 12, wherein:
the medical device is a first bone conduction device including a first coupling apparatus; the device attachment portion is configured to removably attach the first coupling apparatus of the first bone conduction device to the support; the second medical device is a second bone conduction device including a second coupling apparatus; and the second device attachment portion is configured to removably attach the second coupling apparatus of the second bone conduction device to the support. 14. The prosthetic support of claim 4, wherein:
the support is configured to enable a recipient to adjust a location of the skin interface portion relative to the structure. 15. The prosthetic support of claim 14, wherein:
the skin interface portion is part of a module that is releasably secured to a given location of the structure, the module including a device attachment portion configured to removably attach the device to the prosthetic support; the support is configured to enable the module to be released from securement to the given location of the structure and moved to and releasably secured to another location of the structure, thereby enabling the recipient to adjust a location of the skin interface portion relative to the structure; and the device is a medical device. 16. The prosthetic support of claim 14, wherein:
the skin interface portion is slidably attached to the structure; and the support is configured with at least one of a friction fit or an interference fit between the skin interface portion and the structure that enables the skin interface portion to slide along the structure upon application of sufficient force to the skin interface portion to overcome the fit, thereby enabling the recipient to adjust a location of the skin interface portion relative to the structure. 17. The prosthetic support of claim 1, wherein:
the structure includes a plurality of sub-structures, at least one of which is spatially adjustable relative to the other. 18. The prosthetic support of claim 17, wherein:
the plurality of sub-structures include at least a first sub-structure and a second sub-structure; and the first sub-structure is configured to telescope towards and away from the second sub-structure to enable a location of a component of the prosthetic support linked to the first sub-structure to be moved relative to the second sub-structure. 19. The prosthetic support of claim 4, wherein the structure comprises:
a first sub-structure on which the skin interface portion is mounted; and a second sub-structure, wherein the first sub-structure is configured to telescope towards and away from the second sub-structure to enable a location of the skin interface portion to be moved relative to the second sub-structure. 20. The prosthetic support of claim 4, wherein:
the device is a bone conduction device including a coupling apparatus; and the skin interface portion comprises:
an attachment portion configured to attach the coupling apparatus of the bone conduction device to the skin interface portion;
a body portion configured to connect the skin interface portion to the structure; and
an adhesive portion configured to adhesively removably retain the skin interface portion to skin of the recipient. 21. The prosthetic support of claim 1, wherein the output is configured to at least one of stimulate an organ or stimulate a device implanted in the recipient. 22. A medical device, comprising:
the prosthetic support of claim 1; and the device, wherein the device includes a telecoil, wherein the clamping force is sufficient to hold the telecoil in proximity to the skin to place the telecoil into transcutaneous electromagnetic communication with an implanted medical device implanted beneath the skin of the recipient. 23. A headset for a bone conduction device, comprising:
a headset configured to support at least one bone conduction device and configured to provide a clamping force reactive against a head of the recipient sufficient to transmit vibrations from the bone conduction device into skin of the recipient at a location where the skin covers the temporal bone behind an ear canal of the recipient, wherein the headset is configured such that a center of gravity of the headset, during normal use, is located behind and, with respect to a vertical direction, at least one of about level with or below the location. 24. The headset of claim 23, wherein:
the headset is configured to maintain a position of the headset, relative to the recipient, such that the vibrations are transmitted into the skin of the recipient without a component extending about a top of the head. 25. The headset of claim 24, wherein:
the headset is configured to maintain the position while the recipient is subjected to 5 G-forces in the vertical direction. 26. The headset of claim 23, wherein:
the headset is configured to snap couple to the bone conduction device supported by the headset. 27. The headset of claim 23, wherein:
the headset includes a structure configured to extend about a back of at least one of a head or neck of the recipient; an attachment portion configured to removably attach the at least one bone conduction device to the support; and the headset is configured such that the attachment portion articulates relative to the structure. 28. The headset of claim 23, wherein:
the headset is configured to substantially vibrationally isolate the headset from the vibrations. 29. A hearing prosthesis, comprising:
a bone conduction device; and means for supporting the bone conduction device such that vibrations from the bone conduction device are transferred into skin of a recipient of the bone conduction device covering the temporal bone at a location behind an ear canal of the recipient, wherein the means is completely external to the recipient. 30. The hearing prosthesis of claim 29, wherein:
the bone conduction device is removably snap-coupled to the means for supporting the bone conduction device. | 2,600 |
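Claim 2 of the prosthetic-support record above describes an elastically biased structure whose clamping force results from deformation against the head relative to its relaxed geometry. A minimal way to model that is a linear spring (Hooke's law); this simplification, and every number and name below, is an assumption for illustration only, as the patent states no force law.

```python
def clamping_force(stiffness_n_per_mm, relaxed_gap_mm, head_width_mm):
    """Hypothetical linear-spring model of claim 2: the relaxed 'U'/'C'-shaped
    structure (claim 3) is spread to fit a head wider than its relaxed gap,
    and the resulting elastic deflection produces the clamping force."""
    deflection = max(0.0, head_width_mm - relaxed_gap_mm)
    return stiffness_n_per_mm * deflection

# e.g. a 0.05 N/mm band relaxed at 130 mm, worn on a 150 mm head -> 1.0 N
print(clamping_force(0.05, 130.0, 150.0))
```

If the head is narrower than the relaxed gap there is no interference, so the model returns zero force.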
10,761 | 10,761 | 15,804,718 | 2,651 | In general, techniques are described for performing layered intermediate compression for higher order ambisonic (HOA) audio data. A device comprising a memory and a processor may be configured to perform the techniques. The memory may store HOA coefficients of the HOA audio data. The processor may decompose the HOA coefficients into a predominant sound component and a corresponding spatial component. The spatial component may be representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain. The processor may specify, in a bitstream conforming to an intermediate compression format, a subset of the HOA coefficients that represent an ambient component. The processor may also specify, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. | 1. A device configured to compress higher order ambisonic audio data representative of a soundfield, the device comprising:
a memory configured to store higher order ambisonic coefficients of the higher order ambisonic audio data; and one or more processors configured to: decompose the higher order ambisonic coefficients into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; specify, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and specify, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. 2. The device of claim 1, wherein the one or more processors are configured to specify, in the bitstream, the subset of the higher order ambisonic coefficients associated with spherical basis functions having an order from zero through two. 3. The device of claim 1,
wherein the predominant sound component comprises a first predominant sound component, wherein the spatial component comprises a first spatial component, wherein the one or more processors are configured to: decompose the higher order ambisonic coefficients into a plurality of predominant sound components that include the first predominant sound component and a corresponding plurality of spatial components that include the first spatial component, specify, in the bitstream, all elements of each of four of the plurality of spatial components, the four of the plurality of spatial components including the first spatial component; and specify, in the bitstream, four of the plurality of predominant sound components corresponding to the four of the plurality of spatial components. 4. The device of claim 3, wherein the one or more processors are configured to:
specify all elements of each of the four of the plurality of spatial components in a single side information channel of the bitstream; specify each of the four of the plurality of predominant sound components in a separate foreground channel of the bitstream; and specify each of the subset of the higher order ambisonic coefficients in a separate ambient channel of the bitstream. 5. The device of claim 1, wherein the one or more processors are further configured to specify, in the bitstream and without applying decorrelation to the subset of the higher order ambisonic coefficients, the subset of the higher order ambisonic coefficients. 6. The device of claim 1, wherein the intermediate compression format comprises a mezzanine compression format. 7. The device of claim 1, wherein the intermediate compression format comprises a mezzanine compression format used for communication of audio data for broadcast networks. 8. The device of claim 1,
wherein the device comprises a microphone array configured to capture spatial audio data, and wherein the one or more processors are further configured to convert the spatial audio data into the higher order ambisonic audio data. 9. The device of claim 1, wherein the one or more processors are configured to:
receive the higher order ambisonic audio data; and output the bitstream to an emission encoder, the emission encoder configured to transcode the bitstream based on a target bitrate. 10. The device of claim 1, further comprising a microphone configured to capture spatial audio data representative of the higher order ambisonic audio data, and convert the spatial audio data to the higher order ambisonic audio data. 11. The device of claim 1, wherein the device comprises a robotic device. 12. The device of claim 1, wherein the device comprises a flying device. 13. A method to compress higher order ambisonic audio data representative of a soundfield, the method comprising:
decomposing higher order ambisonic coefficients representative of a soundfield into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; specifying, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and specifying, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. 14. The method of claim 13, wherein specifying the subset of the higher order ambisonic coefficients comprises specifying, in the bitstream, the subset of the higher order ambisonic coefficients associated with spherical basis functions having an order from zero through two. 15. The method of claim 13,
wherein the predominant sound component comprises a first predominant sound component, wherein the spatial component comprises a first spatial component, wherein decomposing the higher order ambisonic coefficients comprises decomposing the higher order ambisonic coefficients into a plurality of predominant sound components that include the first predominant sound component and a corresponding plurality of spatial components that include the first spatial component, wherein specifying all of the elements of the spatial component comprises specifying, in the bitstream, all elements of each of four of the plurality of spatial components, the four of the plurality of spatial components including the first spatial component, and wherein the method further comprises specifying, in the bitstream, four of the plurality of predominant sound components corresponding to the four of the plurality of spatial components. 16. The method of claim 15,
wherein specifying all of the elements of each of the four of the plurality of spatial components comprises specifying all of the elements of each of the four of the plurality of spatial components in a single side information channel of the bitstream, wherein specifying the four of the plurality of predominant sound components comprises specifying each of the four of the plurality of predominant sound components in a separate foreground channel of the bitstream, and wherein specifying the subset of the higher order ambisonic coefficients comprises specifying each of the subset of the higher order ambisonic coefficients in a separate ambient channel of the bitstream. 17. The method of claim 13, further comprising specifying, in the bitstream and without applying decorrelation to the subset of the higher order ambisonic coefficients, the subset of the higher order ambisonic coefficients. 18. The method of claim 13, wherein the intermediate compression format comprises a mezzanine compression format. 19. The method of claim 13, wherein the intermediate compression format comprises a mezzanine compression format used for communication of audio data for broadcast networks. 20. The method of claim 13, further comprising:
capturing, by a microphone array, spatial audio data, and converting the spatial audio data into the higher order ambisonic audio data. 21. The method of claim 13, further comprising:
receiving the higher order ambisonic audio data; and outputting the bitstream to an emission encoder, the emission encoder configured to transcode the bitstream based on a target bitrate, wherein the device comprises a mobile communication handset. 22. The method of claim 13, further comprising:
capturing spatial audio data representative of the higher order ambisonic audio data; and converting the spatial audio data to the higher order ambisonic audio data, wherein the device comprises a flying device. 23. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to:
decompose higher order ambisonic coefficients representative of a soundfield into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; specify, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and specify, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. 24. The non-transitory computer-readable storage medium of claim 23, further storing instructions that, when executed, cause the one or more processors to specify, in the bitstream, the subset of the higher order ambisonic coefficients associated with spherical basis functions having an order from zero through two. 25. The non-transitory computer-readable storage medium of claim 23, further storing instructions that, when executed, cause the one or more processors to specify, in the bitstream and without applying decorrelation to the subset of the higher order ambisonic coefficients, the subset of the higher order ambisonic coefficients. 26. A device configured to compress higher order ambisonic audio data representative of a soundfield, the device comprising:
means for decomposing higher order ambisonic coefficients representative of a soundfield into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; means for specifying, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and means for specifying, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. | In general, techniques are described for performing layered intermediate compression for higher order ambisonic (HOA) audio data. A device comprising a memory and a processor may be configured to perform the techniques. The memory may store HOA coefficients of the HOA audio data. The processor may decompose the HOA coefficients into a predominant sound component and a corresponding spatial component. The spatial component may be representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain. The processor may specify, in a bitstream conforming to an intermediate compression format, a subset of the HOA coefficients that represent an ambient component. The processor may also specify, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component.
1. A device configured to compress higher order ambisonic audio data representative of a soundfield, the device comprising:
a memory configured to store higher order ambisonic coefficients of the higher order ambisonic audio data; and one or more processors configured to: decompose the higher order ambisonic coefficients into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; specify, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and specify, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. 2. The device of claim 1, wherein the one or more processors are configured to specify, in the bitstream, the subset of the higher order ambisonic coefficients associated with spherical basis functions having an order from zero through two. 3. The device of claim 1,
wherein the predominant sound component comprises a first predominant sound component, wherein the spatial component comprises a first spatial component, wherein the one or more processors are configured to: decompose the higher order ambisonic coefficients into a plurality of predominant sound components that include the first predominant sound component and a corresponding plurality of spatial components that include the first spatial component, specify, in the bitstream, all elements of each of four of the plurality of spatial components, the four of the plurality of spatial components including the first spatial component; and specify, in the bitstream, four of the plurality of predominant sound components corresponding to the four of the plurality of spatial components. 4. The device of claim 3, wherein the one or more processors are configured to:
specify all elements of each of the four of the plurality of spatial components in a single side information channel of the bitstream; specify each of the four of the plurality of predominant sound components in a separate foreground channel of the bitstream; and specify each of the subset of the higher order ambisonic coefficients in a separate ambient channel of the bitstream. 5. The device of claim 1, wherein the one or more processors are further configured to specify, in the bitstream and without applying decorrelation to the subset of the higher order ambisonic coefficients, the subset of the higher order ambisonic coefficients. 6. The device of claim 1, wherein the intermediate compression format comprises a mezzanine compression format. 7. The device of claim 1, wherein the intermediate compression format comprises a mezzanine compression format used for communication of audio data for broadcast networks. 8. The device of claim 1,
wherein the device comprises a microphone array configured to capture spatial audio data, and wherein the one or more processors are further configured to convert the spatial audio data into the higher order ambisonic audio data. 9. The device of claim 1, wherein the one or more processors are configured to:
receive the higher order ambisonic audio data; and output the bitstream to an emission encoder, the emission encoder configured to transcode the bitstream based on a target bitrate. 10. The device of claim 1, further comprising a microphone configured to capture spatial audio data representative of the higher order ambisonic audio data, and convert the spatial audio data to the higher order ambisonic audio data. 11. The device of claim 1, wherein the device comprises a robotic device. 12. The device of claim 1, wherein the device comprises a flying device. 13. A method to compress higher order ambisonic audio data representative of a soundfield, the method comprising:
decomposing higher order ambisonic coefficients representative of a soundfield into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; specifying, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and specifying, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. 14. The method of claim 13, wherein specifying the subset of the higher order ambisonic coefficients comprises specifying, in the bitstream, the subset of the higher order ambisonic coefficients associated with spherical basis functions having an order from zero through two. 15. The method of claim 13,
wherein the predominant sound component comprises a first predominant sound component, wherein the spatial component comprises a first spatial component, wherein decomposing the higher order ambisonic coefficients comprises decomposing the higher order ambisonic coefficients into a plurality of predominant sound components that include the first predominant sound component and a corresponding plurality of spatial components that include the first spatial component, wherein specifying all of the elements of the spatial component comprises specifying, in the bitstream, all elements of each of four of the plurality of spatial components, the four of the plurality of spatial components including the first spatial component, and wherein the method further comprises specifying, in the bitstream, four of the plurality of predominant sound components corresponding to the four of the plurality of spatial components. 16. The method of claim 15,
wherein specifying all of the elements of each of the four of the plurality of spatial components comprises specifying all of the elements of each of the four of the plurality of spatial components in a single side information channel of the bitstream, wherein specifying the four of the plurality of predominant sound components comprises specifying each of the four of the plurality of predominant sound components in a separate foreground channel of the bitstream, and wherein specifying the subset of the higher order ambisonic coefficients comprises specifying each of the subset of the higher order ambisonic coefficients in a separate ambient channel of the bitstream. 17. The method of claim 13, further comprising specifying, in the bitstream and without applying decorrelation to the subset of the higher order ambisonic coefficients, the subset of the higher order ambisonic coefficients. 18. The method of claim 13, wherein the intermediate compression format comprises a mezzanine compression format. 19. The method of claim 13, wherein the intermediate compression format comprises a mezzanine compression format used for communication of audio data for broadcast network. 20. The method of claim 13, further comprising:
capturing, by a microphone array, spatial audio data, and converting the spatial audio data into the higher order ambisonic audio data. 21. The method of claim 13, further comprising:
receiving the higher order ambisonic audio data; and outputting the bitstream to an emission encoder, the emission encoder configured to transcode the bitstream based on a target bitrate, wherein the device comprises a mobile communication handset. 22. The method of claim 13, further comprising:
capturing spatial audio data representative of the higher order ambisonic audio data; and converting the spatial audio data to the higher order ambisonic audio data, wherein the device comprises a flying device. 23. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to:
decompose higher order ambisonic coefficients representative of a soundfield into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; specify, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and specify, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. 24. The non-transitory computer-readable storage medium of claim 23, further storing instructions that, when executed, cause the one or more processors to specify, in the bitstream, the subset of the higher order ambisonic coefficients associated with spherical basis functions having an order from zero through two. 25. The non-transitory computer-readable storage medium of claim 23, further storing instructions that, when executed, cause the one or more processors to specify, in the bitstream and without applying decorrelation to the subset of the higher order ambisonic coefficients, the subset of the higher order ambisonic coefficients. 26. A device configured to compress higher order ambisonic audio data representative of a soundfield, the device comprising:
means for decomposing higher order ambisonic coefficients representative of a soundfield into a predominant sound component and a corresponding spatial component, the corresponding spatial component representative of the directions, shape, and width of the predominant sound component, and defined in the spherical harmonic domain; means for specifying, in a bitstream conforming to an intermediate compression format, a subset of the higher order ambisonic coefficients that represent an ambient component of the soundfield; and means for specifying, in the bitstream and irrespective of a determination of a minimum number of ambient channels and a number of elements to specify in the bitstream for the spatial component, all elements of the spatial component. | 2,600 |
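The HOA claims above repeatedly recite decomposing higher order ambisonic coefficients into predominant sound components and corresponding spatial components defined in the spherical harmonic domain, plus an ambient subset of coefficients whose spherical basis functions have order zero through two. A minimal numerical sketch of that kind of decomposition follows; the claims do not name a factorization, so the singular value decomposition used here, as well as the sizes (4th-order HOA, 4 foreground components as in claim 3, a 256-sample frame) and all variable names, are illustrative assumptions, not the patent's normative method:

```python
import numpy as np

ORDER = 4                       # assumed HOA order: (4 + 1)**2 = 25 coefficient channels
N_COEFFS = (ORDER + 1) ** 2
FRAME = 256                     # samples per frame (illustrative)
N_FG = 4                        # four predominant sound components, as in claim 3

rng = np.random.default_rng(0)
hoa = rng.standard_normal((N_COEFFS, FRAME))   # stand-in HOA coefficient frame

# Factor the frame: hoa = U @ diag(s) @ Vt.  Each predominant sound component is
# a scaled row of Vt (an audio signal); its corresponding spatial component is a
# column of U, defined in the spherical harmonic domain.
u, s, vt = np.linalg.svd(hoa, full_matrices=False)

predominant = s[:N_FG, None] * vt[:N_FG]   # 4 foreground signals (one per foreground channel)
spatial = u[:, :N_FG]                      # all 25 elements of each spatial component are kept

# Ambient component: the subset of coefficients whose spherical basis functions
# have order 0 through 2, i.e. the first (2 + 1)**2 = 9 channels.
ambient = hoa[: (2 + 1) ** 2]

# The foreground contribution to the frame is spatial @ predominant; adding the
# remaining (discarded) singular components reconstructs the frame exactly.
foreground_frame = spatial @ predominant
```

A real encoder would additionally quantize the spatial components and pack the foreground, side-information, and ambient channels into the bitstream; the sketch stops at the decomposition itself.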
10,762 | 10,762 | 16,248,177 | 2,632 | The present disclosure describes methods, devices, and systems that provide a codebook indication operation. In one example, a codebook indication method includes: receiving, by a terminal device, transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in the terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, and the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, wherein M is an integer larger than N, and N is larger than K; and determining a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. | 1. A method of wireless communications, comprising:
receiving, by a terminal device, transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in the terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, M is an integer larger than N, and N is larger than K; and determining a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. 2. The method according to claim 1, wherein M is 64, N is 32, and K is 16. 3. The method according to claim 2, wherein at least first 12 indexes in the codebook subset configuration related to incoherent are associated with same transmission layers and precoding matrices as first 12 indexes in the codebook subset configuration related to partial coherent; and indexes in the codebook subset configuration related to partial coherent are associated with same transmission layers and precoding matrices with first 32 indexes in the codebook subset configuration related to fully coherent. 4. The method according to claim 1, wherein each index is associated with a transmission rank indication (TRI) and a transmission precoding matrix indication (TPMI). 5. The method according to claim 1, wherein length of the transmission parameter indication information is 4 bits, 5 bits or 6 bits depending on which one of the three codebook subset configurations is used in the terminal device. 6. 
The method according to claim 5, wherein the terminal device receives coherence capability indication information indicating a codebook subset configuration used in the terminal device from the base station. 7. The method according to claim 1, wherein contents of the three codebook subset configurations are represented by the following table:
| Field index | Fully coherent transmission | Field index | Partial coherent transmission | Field index | Incoherent transmission |
|---|---|---|---|---|---|
| 0 | One layer: TPMI = 0 | 0 | One layer: TPMI = 0 | 0 | One layer: TPMI = 0 |
| 1 | One layer: TPMI = 1 | 1 | One layer: TPMI = 1 | 1 | One layer: TPMI = 1 |
| 2 | One layer: TPMI = 2 | 2 | One layer: TPMI = 2 | 2 | One layer: TPMI = 2 |
| 3 | One layer: TPMI = 3 | 3 | One layer: TPMI = 3 | 3 | One layer: TPMI = 3 |
| 4 | Two layers: TPMI = 0 | 4 | Two layers: TPMI = 0 | 4 | Two layers: TPMI = 0 |
| 5 | Two layers: TPMI = 1 | 5 | Two layers: TPMI = 1 | 5 | Two layers: TPMI = 1 |
| 6 | Two layers: TPMI = 2 | 6 | Two layers: TPMI = 2 | 6 | Two layers: TPMI = 2 |
| 7 | Two layers: TPMI = 3 | 7 | Two layers: TPMI = 3 | 7 | Two layers: TPMI = 3 |
| 8 | Two layers: TPMI = 4 | 8 | Two layers: TPMI = 4 | 8 | Two layers: TPMI = 4 |
| 9 | Two layers: TPMI = 5 | 9 | Two layers: TPMI = 5 | 9 | Two layers: TPMI = 5 |
| 10 | Three layers: TPMI = 0 | 10 | Three layers: TPMI = 0 | 10 | Three layers: TPMI = 0 |
| 11 | Four layers: TPMI = 0 | 11 | Four layers: TPMI = 0 | 11 | Four layers: TPMI = 0 |
| 12 | One layer: TPMI = 4 | 12 | One layer: TPMI = 4 | | |
| 13 | One layer: TPMI = 5 | 13 | One layer: TPMI = 5 | | |
| 14 | One layer: TPMI = 6 | 14 | One layer: TPMI = 6 | | |
| 15 | One layer: TPMI = 7 | 15 | One layer: TPMI = 7 | | |
| 16 | One layer: TPMI = 8 | 16 | One layer: TPMI = 8 | | |
| 17 | One layer: TPMI = 9 | 17 | One layer: TPMI = 9 | | |
| 18 | One layer: TPMI = 10 | 18 | One layer: TPMI = 10 | | |
| 19 | One layer: TPMI = 11 | 19 | One layer: TPMI = 11 | | |
| 20 | Two layers: TPMI = 6 | 20 | Two layers: TPMI = 6 | | |
| 21 | Two layers: TPMI = 7 | 21 | Two layers: TPMI = 7 | | |
| 22 | Two layers: TPMI = 8 | 22 | Two layers: TPMI = 8 | | |
| 23 | Two layers: TPMI = 9 | 23 | Two layers: TPMI = 9 | | |
| 24 | Two layers: TPMI = 10 | 24 | Two layers: TPMI = 10 | | |
| 25 | Two layers: TPMI = 11 | 25 | Two layers: TPMI = 11 | | |
| 26 | Two layers: TPMI = 12 | 26 | Two layers: TPMI = 12 | | |
| 27 | Two layers: TPMI = 13 | 27 | Two layers: TPMI = 13 | | |
| 28 | Three layers: TPMI = 1 | 28 | Three layers: TPMI = 1 | | |
| 29 | Three layers: TPMI = 2 | 29 | Three layers: TPMI = 2 | | |
| 30 | Four layers: TPMI = 1 | 30 | Four layers: TPMI = 1 | | |
| 31 | Four layers: TPMI = 2 | 31 | Four layers: TPMI = 2 | | |
| 32 | One layer: TPMI = 12 | | | | |
| 33 | One layer: TPMI = 13 | | | | |
| 34 | One layer: TPMI = 14 | | | | |
| 35 | One layer: TPMI = 15 | | | | |
| 36 | One layer: TPMI = 16 | | | | |
| 37 | One layer: TPMI = 17 | | | | |
| 38 | One layer: TPMI = 18 | | | | |
| 39 | One layer: TPMI = 19 | | | | |
| 40 | One layer: TPMI = 20 | | | | |
| 41 | One layer: TPMI = 21 | | | | |
| 42 | One layer: TPMI = 22 | | | | |
| 43 | One layer: TPMI = 23 | | | | |
| 44 | One layer: TPMI = 24 | | | | |
| 45 | One layer: TPMI = 25 | | | | |
| 46 | One layer: TPMI = 26 | | | | |
| 47 | One layer: TPMI = 27 | | | | |
| 48 | Two layers: TPMI = 14 | | | | |
| 49 | Two layers: TPMI = 15 | | | | |
| 50 | Two layers: TPMI = 16 | | | | |
| 51 | Two layers: TPMI = 17 | | | | |
| 52 | Two layers: TPMI = 18 | | | | |
| 53 | Two layers: TPMI = 19 | | | | |
| 54 | Two layers: TPMI = 20 | | | | |
| 55 | Two layers: TPMI = 21 | | | | |
| 56 | Three layers: TPMI = 3 | | | | |
| 57 | Three layers: TPMI = 4 | | | | |
| 58 | Three layers: TPMI = 5 | | | | |
| 59 | Three layers: TPMI = 6 | | | | |
| 60 | Four layers: TPMI = 3 | | | | |
| 61 | Four layers: TPMI = 4 | | | | |
8. A terminal device, comprising:
a receiver, configured to receive transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in the terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, M is an integer larger than N, and N is larger than K; and a processor coupled to the receiver and configured to determine a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. 9. The terminal device according to claim 8, wherein M is 64, N is 32, and K is 16. 10. The terminal device according to claim 9, wherein at least first 12 indexes in the codebook subset configuration related to incoherent are associated with same transmission layers and precoding matrices as first 12 indexes in the codebook subset configuration related to partial coherent; and indexes in the codebook subset configuration related to partial coherent are associated with same transmission layers and precoding matrices with first 32 indexes in the codebook subset configuration related to fully coherent. 11. The terminal device according to claim 8, wherein each index is associated with a transmission rank indication (TRI) and a transmission precoding matrix indication (TPMI). 12. The terminal device according to claim 8, wherein length of the transmission parameter indication information is 4 bits, 5 bits, or 6 bits depending on which one of the three codebook subset configurations is used in the terminal device. 13. The terminal device according to claim 12, wherein
the receiver is further configured to receive coherence capability indication information indicating a codebook subset configuration used in the terminal device from the base station. 14. The terminal device according to claim 8, wherein contents of the three codebook subset configurations are represented by the following table:
| Field index | Fully coherent transmission | Field index | Partial coherent transmission | Field index | Incoherent transmission |
|---|---|---|---|---|---|
| 0 | One layer: TPMI = 0 | 0 | One layer: TPMI = 0 | 0 | One layer: TPMI = 0 |
| 1 | One layer: TPMI = 1 | 1 | One layer: TPMI = 1 | 1 | One layer: TPMI = 1 |
| 2 | One layer: TPMI = 2 | 2 | One layer: TPMI = 2 | 2 | One layer: TPMI = 2 |
| 3 | One layer: TPMI = 3 | 3 | One layer: TPMI = 3 | 3 | One layer: TPMI = 3 |
| 4 | Two layers: TPMI = 0 | 4 | Two layers: TPMI = 0 | 4 | Two layers: TPMI = 0 |
| 5 | Two layers: TPMI = 1 | 5 | Two layers: TPMI = 1 | 5 | Two layers: TPMI = 1 |
| 6 | Two layers: TPMI = 2 | 6 | Two layers: TPMI = 2 | 6 | Two layers: TPMI = 2 |
| 7 | Two layers: TPMI = 3 | 7 | Two layers: TPMI = 3 | 7 | Two layers: TPMI = 3 |
| 8 | Two layers: TPMI = 4 | 8 | Two layers: TPMI = 4 | 8 | Two layers: TPMI = 4 |
| 9 | Two layers: TPMI = 5 | 9 | Two layers: TPMI = 5 | 9 | Two layers: TPMI = 5 |
| 10 | Three layers: TPMI = 0 | 10 | Three layers: TPMI = 0 | 10 | Three layers: TPMI = 0 |
| 11 | Four layers: TPMI = 0 | 11 | Four layers: TPMI = 0 | 11 | Four layers: TPMI = 0 |
| 12 | One layer: TPMI = 4 | 12 | One layer: TPMI = 4 | | |
| 13 | One layer: TPMI = 5 | 13 | One layer: TPMI = 5 | | |
| 14 | One layer: TPMI = 6 | 14 | One layer: TPMI = 6 | | |
| 15 | One layer: TPMI = 7 | 15 | One layer: TPMI = 7 | | |
| 16 | One layer: TPMI = 8 | 16 | One layer: TPMI = 8 | | |
| 17 | One layer: TPMI = 9 | 17 | One layer: TPMI = 9 | | |
| 18 | One layer: TPMI = 10 | 18 | One layer: TPMI = 10 | | |
| 19 | One layer: TPMI = 11 | 19 | One layer: TPMI = 11 | | |
| 20 | Two layers: TPMI = 6 | 20 | Two layers: TPMI = 6 | | |
| 21 | Two layers: TPMI = 7 | 21 | Two layers: TPMI = 7 | | |
| 22 | Two layers: TPMI = 8 | 22 | Two layers: TPMI = 8 | | |
| 23 | Two layers: TPMI = 9 | 23 | Two layers: TPMI = 9 | | |
| 24 | Two layers: TPMI = 10 | 24 | Two layers: TPMI = 10 | | |
| 25 | Two layers: TPMI = 11 | 25 | Two layers: TPMI = 11 | | |
| 26 | Two layers: TPMI = 12 | 26 | Two layers: TPMI = 12 | | |
| 27 | Two layers: TPMI = 13 | 27 | Two layers: TPMI = 13 | | |
| 28 | Three layers: TPMI = 1 | 28 | Three layers: TPMI = 1 | | |
| 29 | Three layers: TPMI = 2 | 29 | Three layers: TPMI = 2 | | |
| 30 | Four layers: TPMI = 1 | 30 | Four layers: TPMI = 1 | | |
| 31 | Four layers: TPMI = 2 | 31 | Four layers: TPMI = 2 | | |
| 32 | One layer: TPMI = 12 | | | | |
| 33 | One layer: TPMI = 13 | | | | |
| 34 | One layer: TPMI = 14 | | | | |
| 35 | One layer: TPMI = 15 | | | | |
| 36 | One layer: TPMI = 16 | | | | |
| 37 | One layer: TPMI = 17 | | | | |
| 38 | One layer: TPMI = 18 | | | | |
| 39 | One layer: TPMI = 19 | | | | |
| 40 | One layer: TPMI = 20 | | | | |
| 41 | One layer: TPMI = 21 | | | | |
| 42 | One layer: TPMI = 22 | | | | |
| 43 | One layer: TPMI = 23 | | | | |
| 44 | One layer: TPMI = 24 | | | | |
| 45 | One layer: TPMI = 25 | | | | |
| 46 | One layer: TPMI = 26 | | | | |
| 47 | One layer: TPMI = 27 | | | | |
| 48 | Two layers: TPMI = 14 | | | | |
| 49 | Two layers: TPMI = 15 | | | | |
| 50 | Two layers: TPMI = 16 | | | | |
| 51 | Two layers: TPMI = 17 | | | | |
| 52 | Two layers: TPMI = 18 | | | | |
| 53 | Two layers: TPMI = 19 | | | | |
| 54 | Two layers: TPMI = 20 | | | | |
| 55 | Two layers: TPMI = 21 | | | | |
| 56 | Three layers: TPMI = 3 | | | | |
| 57 | Three layers: TPMI = 4 | | | | |
| 58 | Three layers: TPMI = 5 | | | | |
| 59 | Three layers: TPMI = 6 | | | | |
| 60 | Four layers: TPMI = 3 | | | | |
| 61 | Four layers: TPMI = 4 | | | | |
15. A non-transitory computer-readable storage medium having instructions recorded thereon which, when executed by a processor, cause the processor to perform operations comprising:
receiving transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in a terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, M is an integer larger than N, and N is larger than K; and
determining a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. 16. The computer-readable storage medium according to claim 15, wherein M is 64, N is 32, and K is 16. 17. The computer-readable storage medium according to claim 16, wherein at least first 12 indexes in the codebook subset configuration related to incoherent are associated with same transmission layers and precoding matrices as first 12 indexes in the codebook subset configuration related to partial coherent; and indexes in the codebook subset configuration related to partial coherent are associated with same transmission layers and precoding matrices with first 32 indexes in the codebook subset configuration related to fully coherent. 18. The computer-readable storage medium according to claim 15, wherein each index is associated with a transmission rank indication (TRI) and a transmission precoding matrix indication (TPMI). 19. The computer-readable storage medium according to claim 15, wherein length of the transmission parameter indication information is 4 bits, 5 bits, or 6 bits depending on which one of the three codebook subset configurations is used. | The present disclosure describes methods, device, system that provide a codebook indication operation. 
In one example, a codebook indication method includes: receiving, by a terminal device, transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in the terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, and the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, wherein M is an integer larger than N, and N is larger than K; and determining a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information.
1. A method of wireless communications, comprising:
receiving, by a terminal device, transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in the terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, M is an integer larger than N, and N is larger than K; and determining a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. 2. The method according to claim 1, wherein M is 64, N is 32, and K is 16. 3. The method according to claim 2, wherein at least first 12 indexes in the codebook subset configuration related to incoherent are associated with same transmission layers and precoding matrices as first 12 indexes in the codebook subset configuration related to partial coherent; and indexes in the codebook subset configuration related to partial coherent are associated with same transmission layers and precoding matrices with first 32 indexes in the codebook subset configuration related to fully coherent. 4. The method according to claim 1, wherein each index is associated with a transmission rank indication (TRI) and a transmission precoding matrix indication (TPMI). 5. The method according to claim 1, wherein length of the transmission parameter indication information is 4 bits, 5 bits or 6 bits depending on which one of the three codebook subset configurations is used in the terminal device. 6. 
The method according to claim 5, wherein the terminal device receives coherence capability indication information indicating a codebook subset configuration used in the terminal device from the base station. 7. The method according to claim 1, wherein contents of the three codebook subset configurations are represented by the following table:
| Field index | Fully coherent transmission | Field index | Partial coherent transmission | Field index | Incoherent transmission |
|---|---|---|---|---|---|
| 0 | One layer: TPMI = 0 | 0 | One layer: TPMI = 0 | 0 | One layer: TPMI = 0 |
| 1 | One layer: TPMI = 1 | 1 | One layer: TPMI = 1 | 1 | One layer: TPMI = 1 |
| 2 | One layer: TPMI = 2 | 2 | One layer: TPMI = 2 | 2 | One layer: TPMI = 2 |
| 3 | One layer: TPMI = 3 | 3 | One layer: TPMI = 3 | 3 | One layer: TPMI = 3 |
| 4 | Two layers: TPMI = 0 | 4 | Two layers: TPMI = 0 | 4 | Two layers: TPMI = 0 |
| 5 | Two layers: TPMI = 1 | 5 | Two layers: TPMI = 1 | 5 | Two layers: TPMI = 1 |
| 6 | Two layers: TPMI = 2 | 6 | Two layers: TPMI = 2 | 6 | Two layers: TPMI = 2 |
| 7 | Two layers: TPMI = 3 | 7 | Two layers: TPMI = 3 | 7 | Two layers: TPMI = 3 |
| 8 | Two layers: TPMI = 4 | 8 | Two layers: TPMI = 4 | 8 | Two layers: TPMI = 4 |
| 9 | Two layers: TPMI = 5 | 9 | Two layers: TPMI = 5 | 9 | Two layers: TPMI = 5 |
| 10 | Three layers: TPMI = 0 | 10 | Three layers: TPMI = 0 | 10 | Three layers: TPMI = 0 |
| 11 | Four layers: TPMI = 0 | 11 | Four layers: TPMI = 0 | 11 | Four layers: TPMI = 0 |
| 12 | One layer: TPMI = 4 | 12 | One layer: TPMI = 4 | | |
| 13 | One layer: TPMI = 5 | 13 | One layer: TPMI = 5 | | |
| 14 | One layer: TPMI = 6 | 14 | One layer: TPMI = 6 | | |
| 15 | One layer: TPMI = 7 | 15 | One layer: TPMI = 7 | | |
| 16 | One layer: TPMI = 8 | 16 | One layer: TPMI = 8 | | |
| 17 | One layer: TPMI = 9 | 17 | One layer: TPMI = 9 | | |
| 18 | One layer: TPMI = 10 | 18 | One layer: TPMI = 10 | | |
| 19 | One layer: TPMI = 11 | 19 | One layer: TPMI = 11 | | |
| 20 | Two layers: TPMI = 6 | 20 | Two layers: TPMI = 6 | | |
| 21 | Two layers: TPMI = 7 | 21 | Two layers: TPMI = 7 | | |
| 22 | Two layers: TPMI = 8 | 22 | Two layers: TPMI = 8 | | |
| 23 | Two layers: TPMI = 9 | 23 | Two layers: TPMI = 9 | | |
| 24 | Two layers: TPMI = 10 | 24 | Two layers: TPMI = 10 | | |
| 25 | Two layers: TPMI = 11 | 25 | Two layers: TPMI = 11 | | |
| 26 | Two layers: TPMI = 12 | 26 | Two layers: TPMI = 12 | | |
| 27 | Two layers: TPMI = 13 | 27 | Two layers: TPMI = 13 | | |
| 28 | Three layers: TPMI = 1 | 28 | Three layers: TPMI = 1 | | |
| 29 | Three layers: TPMI = 2 | 29 | Three layers: TPMI = 2 | | |
| 30 | Four layers: TPMI = 1 | 30 | Four layers: TPMI = 1 | | |
| 31 | Four layers: TPMI = 2 | 31 | Four layers: TPMI = 2 | | |
| 32 | One layer: TPMI = 12 | | | | |
| 33 | One layer: TPMI = 13 | | | | |
| 34 | One layer: TPMI = 14 | | | | |
| 35 | One layer: TPMI = 15 | | | | |
| 36 | One layer: TPMI = 16 | | | | |
| 37 | One layer: TPMI = 17 | | | | |
| 38 | One layer: TPMI = 18 | | | | |
| 39 | One layer: TPMI = 19 | | | | |
| 40 | One layer: TPMI = 20 | | | | |
| 41 | One layer: TPMI = 21 | | | | |
| 42 | One layer: TPMI = 22 | | | | |
| 43 | One layer: TPMI = 23 | | | | |
| 44 | One layer: TPMI = 24 | | | | |
| 45 | One layer: TPMI = 25 | | | | |
| 46 | One layer: TPMI = 26 | | | | |
| 47 | One layer: TPMI = 27 | | | | |
| 48 | Two layers: TPMI = 14 | | | | |
| 49 | Two layers: TPMI = 15 | | | | |
| 50 | Two layers: TPMI = 16 | | | | |
| 51 | Two layers: TPMI = 17 | | | | |
| 52 | Two layers: TPMI = 18 | | | | |
| 53 | Two layers: TPMI = 19 | | | | |
| 54 | Two layers: TPMI = 20 | | | | |
| 55 | Two layers: TPMI = 21 | | | | |
| 56 | Three layers: TPMI = 3 | | | | |
| 57 | Three layers: TPMI = 4 | | | | |
| 58 | Three layers: TPMI = 5 | | | | |
| 59 | Three layers: TPMI = 6 | | | | |
| 60 | Four layers: TPMI = 3 | | | | |
| 61 | Four layers: TPMI = 4 | | | | |
8. A terminal device, comprising:
a receiver, configured to receive transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in the terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, M is an integer larger than N, and N is larger than K; and a processor coupled to the receiver and configured to determine a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. 9. The terminal device according to claim 8, wherein M is 64, N is 32, and K is 16. 10. The terminal device according to claim 9, wherein at least first 12 indexes in the codebook subset configuration related to incoherent are associated with same transmission layers and precoding matrices as first 12 indexes in the codebook subset configuration related to partial coherent; and indexes in the codebook subset configuration related to partial coherent are associated with same transmission layers and precoding matrices with first 32 indexes in the codebook subset configuration related to fully coherent. 11. The terminal device according to claim 8, wherein each index is associated with a transmission rank indication (TRI) and a transmission precoding matrix indication (TPMI). 12. The terminal device according to claim 8, wherein length of the transmission parameter indication information is 4 bits, 5 bits, or 6 bits depending on which one of the three codebook subset configurations is used in the terminal device. 13. The terminal device according to claim 12, wherein
the receiver is further configured to receive coherence capability indication information indicating a codebook subset configuration used in the terminal device from the base station. 14. The terminal device according to claim 8, wherein contents of the three codebook subset configurations are represented by the following table:
Field index | transmission, per codebook subset configuration (fully coherent, partial coherent, incoherent):
Indexes 0-11 (identical in the fully coherent, partial coherent, and incoherent configurations):
0: One layer: TPMI = 0
1: One layer: TPMI = 1
2: One layer: TPMI = 2
3: One layer: TPMI = 3
4: Two layers: TPMI = 0
5: Two layers: TPMI = 1
6: Two layers: TPMI = 2
7: Two layers: TPMI = 3
8: Two layers: TPMI = 4
9: Two layers: TPMI = 5
10: Three layers: TPMI = 0
11: Four layers: TPMI = 0
Indexes 12-31 (fully coherent and partial coherent configurations only):
12: One layer: TPMI = 4
13: One layer: TPMI = 5
14: One layer: TPMI = 6
15: One layer: TPMI = 7
16: One layer: TPMI = 8
17: One layer: TPMI = 9
18: One layer: TPMI = 10
19: One layer: TPMI = 11
20: Two layers: TPMI = 6
21: Two layers: TPMI = 7
22: Two layers: TPMI = 8
23: Two layers: TPMI = 9
24: Two layers: TPMI = 10
25: Two layers: TPMI = 11
26: Two layers: TPMI = 12
27: Two layers: TPMI = 13
28: Three layers: TPMI = 1
29: Three layers: TPMI = 2
30: Four layers: TPMI = 1
31: Four layers: TPMI = 2
Indexes 32-61 (fully coherent configuration only):
32: One layer: TPMI = 12
33: One layer: TPMI = 13
34: One layer: TPMI = 14
35: One layer: TPMI = 15
36: One layer: TPMI = 16
37: One layer: TPMI = 17
38: One layer: TPMI = 18
39: One layer: TPMI = 19
40: One layer: TPMI = 20
41: One layer: TPMI = 21
42: One layer: TPMI = 22
43: One layer: TPMI = 23
44: One layer: TPMI = 24
45: One layer: TPMI = 25
46: One layer: TPMI = 26
47: One layer: TPMI = 27
48: Two layers: TPMI = 14
49: Two layers: TPMI = 15
50: Two layers: TPMI = 16
51: Two layers: TPMI = 17
52: Two layers: TPMI = 18
53: Two layers: TPMI = 19
54: Two layers: TPMI = 20
55: Two layers: TPMI = 21
56: Three layers: TPMI = 3
57: Three layers: TPMI = 4
58: Three layers: TPMI = 5
59: Three layers: TPMI = 6
60: Four layers: TPMI = 3
61: Four layers: TPMI = 4
15. A non-transitory computer-readable storage medium having instructions recorded thereon which, when executed by a processor, cause the processor to perform operations comprising:
receiving transmission parameter indication information indicating an index of one codebook subset configuration of three codebook subset configurations in a terminal device from a base station, wherein the three codebook subset configurations in the terminal device are related to fully coherent, partial coherent, and incoherent respectively, the codebook subset configuration related to fully coherent includes M indexes, the codebook subset configuration related to partial coherent includes N indexes, and the codebook subset configuration related to incoherent includes K indexes, M is an integer larger than N, and N is larger than K; and
determining a transmission layer and precoding matrix associated with the index according to the transmission parameter indication information. 16. The computer-readable storage medium according to claim 15, wherein M is 64, N is 32, and K is 16. 17. The computer-readable storage medium according to claim 16, wherein at least first 12 indexes in the codebook subset configuration related to incoherent are associated with same transmission layers and precoding matrices as first 12 indexes in the codebook subset configuration related to partial coherent; and indexes in the codebook subset configuration related to partial coherent are associated with same transmission layers and precoding matrices with first 32 indexes in the codebook subset configuration related to fully coherent. 18. The computer-readable storage medium according to claim 15, wherein each index is associated with a transmission rank indication (TRI) and a transmission precoding matrix indication (TPMI). 19. The computer-readable storage medium according to claim 15, wherein length of the transmission parameter indication information is 4 bits, 5 bits, or 6 bits depending on which one of the three codebook subset configurations is used. | 2,600 |
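The codebook subset structure in claims 8-12 above lends itself to a small worked example. The sketch below is illustrative only (the names `SUBSET_SIZES`, `COMMON_ROWS`, `field_width_bits`, and `decode` are invented): it encodes the subset sizes of claim 9 (M = 64, N = 32, K = 16), the table rows shared by all three configurations (indexes 0-11), and the 4/5/6-bit field lengths of claim 12, computed as ceil(log2(size)).

```python
import math

# Subset sizes per claim 9: fully coherent M=64, partial coherent N=32,
# incoherent K=16. (Hypothetical names; not part of the claims.)
SUBSET_SIZES = {"fully_coherent": 64, "partial_coherent": 32, "incoherent": 16}

# Table rows common to all three codebook subset configurations
# (indexes 0-11): index -> (transmission layers, TPMI).
COMMON_ROWS = {
    0: (1, 0), 1: (1, 1), 2: (1, 2), 3: (1, 3),
    4: (2, 0), 5: (2, 1), 6: (2, 2), 7: (2, 3),
    8: (2, 4), 9: (2, 5), 10: (3, 0), 11: (4, 0),
}

def field_width_bits(subset: str) -> int:
    """Bits needed to address every index of the configured subset
    (claim 12: 4, 5, or 6 bits depending on the subset in use)."""
    return math.ceil(math.log2(SUBSET_SIZES[subset]))

def decode(index: int) -> tuple:
    """Return (layers, TPMI) for a table index common to all subsets."""
    return COMMON_ROWS[index]
```

This matches claim 12's statement that the indication field is 4, 5, or 6 bits wide: log2(16) = 4, log2(32) = 5, log2(64) = 6.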
10,763 | 10,763 | 16,478,774 | 2,613 | At least one visual output unit, wearable on the head, displays virtual elements from a prescribed virtual observation position. A control device evaluates sensor data characterizing a movement and/or spatial location of a motor vehicle and actuates the visual output unit such that at least some of the virtual elements displayed by the visual output unit move relative to the virtual observation position in accordance with the movement of the motor vehicle and/or at least some of the virtual elements displayed by the visual output unit are arranged relative to the virtual observation position in accordance with the spatial location of the motor vehicle. | 1-9. (canceled) 10. An entertainment system for a motor vehicle, comprising
at least one visual output unit configured to be worn on a head of a user and to display virtual elements from a prescribed virtual observation position; and a control device configured to evaluate first sensor data characterizing at least one of a movement and a spatial location of the motor vehicle and to actuate the at least one visual output unit so that at least some of the virtual elements are displayed by the visual output device with at least one of movement relative to the virtual observation position in accordance with the movement of the motor vehicle and arrangement of at least some of the virtual elements relative to the virtual observation position in accordance with the spatial location of the motor vehicle. 11. The entertainment system as claimed in claim 10, wherein the control device is configured to evaluate second sensor data characterizing a state of the user of the virtual output unit, and to actuate the visual output unit as a function of the second sensor data. 12. The entertainment system as claimed in claim 11, wherein the control device is configured to evaluate preference data characterizing personal preferences of the user of the visual output unit, and to actuate the visual output unit as a function of the preference data. 13. The entertainment system as claimed in claim 12, further comprising sensors separate from the vehicle and configured to sense the at least one of the movement of the motor vehicle and the spatial location of the motor vehicle, and the state of the user of the visual output unit. 14. The entertainment system as claimed in claim 12, further comprising sensors integrated in the motor vehicle and configured to sense the at least one of the movement of the motor vehicle and the spatial location of the motor vehicle, and the state of the user of the visual output unit. 15. The entertainment system as claimed in claim 14, wherein the motor vehicle has a vehicle-side diagnostic socket,
wherein the control device is integrated into the visual output unit, and wherein the entertainment system further comprises an interface compatible with the vehicle-side diagnostic socket and configured to transmit the first and second sensor data to the control device. 16. The entertainment system as claimed in claim 11, wherein the visual output unit includes one of virtual reality glasses and augmented reality glasses. 17. The entertainment system as claimed in claim 11, wherein the control device is configured to
select music files as a function of at least one of the movement of the motor vehicle, the state of the user and the personal preferences of the user of the visual output unit, and actuate a loudspeaker for outputting the music files. 18. The entertainment system as claimed in claim 11, further comprising sensors separate from the vehicle and configured to sense the at least one of the movement of the motor vehicle and the spatial location of the motor vehicle, and the state of the user of the visual output unit. 19. The entertainment system as claimed in claim 11, further comprising sensors integrated in the motor vehicle side configured to sense the at least one of the movement of the motor vehicle and the spatial location of the motor vehicle, and the state of the user of the visual output unit. 20. The entertainment system as claimed in claim 19, wherein the motor vehicle has a vehicle-side diagnostic socket,
wherein the control device is integrated into the visual output unit, and wherein the entertainment system further comprises an interface compatible with the vehicle-side diagnostic socket and configured to transmit the first and second sensor data to the control device. 21. A method for operating an entertainment system, comprising:
displaying virtual elements from a prescribed virtual observation position by at least one visual output unit configured to be worn on a head of a user; evaluating first sensor data characterizing at least one of a movement and a spatial location of a motor vehicle by a control device, and actuating the visual output unit so that at least some of the virtual elements are displayed by the visual output unit with at least one of movement relative to the virtual observation position in accordance with the movement of the motor vehicle and arrangement of at least some of the virtual elements relative to the virtual observation position in accordance with the spatial location of the motor vehicle. 22. The method as claimed in claim 21, further comprising:
evaluating second sensor data characterizing a state of the user of the virtual output unit; and actuating the visual output unit as a function of the second sensor data. 23. The method as claimed in claim 22, further comprising:
evaluating preference data characterizing personal preferences of the user of the visual output unit; and actuating the visual output unit as a function of the preference data. 24. The method as claimed in claim 23, further comprising:
selecting music files as a function of at least one of the movement of the motor vehicle, the state of the user and the personal preferences of the user of the visual output unit; and actuating a loudspeaker for outputting the music files. 25. The method as claimed in claim 21, further comprising:
evaluating preference data characterizing personal preferences of the user of the visual output unit; and actuating the visual output unit as a function of the preference data. 26. The method as claimed in claim 21, further comprising:
selecting music files as a function of at least one of the movement of the motor vehicle, a state of the user and personal preferences of the user of the visual output unit; and actuating a loudspeaker for outputting the music files. | 2,600 |
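As an illustration of the actuation step in claim 10, the following sketch (all names invented, and the rendering model an assumption) moves virtual elements relative to the virtual observation position in accordance with the vehicle's sensed movement. Here, world-fixed elements are shifted opposite to the vehicle's displacement so they appear stationary to the wearer of the head-worn display; other mappings of motion to element movement would also satisfy the claim language.

```python
from dataclasses import dataclass

@dataclass
class VirtualElement:
    """Position of a displayed virtual element relative to the
    virtual observation position (hypothetical representation)."""
    x: float
    y: float
    z: float

def apply_vehicle_motion(elements, displacement):
    """Shift world-fixed elements opposite to the vehicle's own
    displacement (dx, dy, dz) so they hold their apparent position."""
    dx, dy, dz = displacement
    for e in elements:
        e.x -= dx
        e.y -= dy
        e.z -= dz
    return elements
```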
10,764 | 10,764 | 16,395,916 | 2,683 | The present disclosure relates to a sensor network, Machine Type Communication (MTC), Machine-to-Machine (M2M) communication, and technology for Internet of Things (IoT). The present disclosure may be applied to intelligent services based on the above technologies, such as smart home, smart building, smart city, smart car, connected car, health care, digital education, smart retail, security and safety services. A method and an apparatus for alarm service using user status recognition information in an electronic device are provided. The method of the electronic device includes determining a rule for eliminating a fire danger of at least one device capable of communicating with the electronic device, determining the fire danger of the at least one device based on the rule, and if the fire danger exists, notifying a user of the fire danger. | 1. A method for operation of an electronic device, the method comprising:
detecting an operation state of at least one other device; determining, based on the operation state of the at least one other device, whether a fire danger exists in the at least one other device; in response to determining that the fire danger exists in the at least one other device, determining, based on a time and an operating status of a temperature controller, a rule to eliminate the fire danger; and controlling, based on the determined rule, the at least one other device and the temperature controller. 2. The method of claim 1, wherein the rule is determined based on the operation state of the at least one other device. 3. The method of claim 1, wherein the rule is determined based on at least one of temperature, weather, and an electricity rate. 4. The method of claim 1, wherein the rule comprises information indicating a control operation for the at least one other device and the temperature controller. 5. The method of claim 4, wherein the control operation comprises an operation for controlling a temperature of the temperature controller. 6. The method of claim 1, wherein determining whether the fire danger exists in the at least one other device comprises:
determining whether a use time of the at least one other device exceeds an allowed use time; and determining that the fire danger exists in the at least one other device in response to determining that the use time of the at least one other device exceeds the allowed use time. 7. The method of claim 1, further comprising:
transmitting a notification for fire danger to a user equipment. 8. An electronic device comprising:
a communication module; and a processor configured to:
detect an operation state of at least one other device;
determine, based on the operation state of the at least one other device, whether a fire danger exists in the at least one other device;
in response to determining that the fire danger exists in the at least one other device, determine, based on a time and an operating status of a temperature controller, a rule to eliminate the fire danger; and
control, based on the determined rule, the at least one other device and the temperature controller via the communication module. 9. The electronic device of claim 8, wherein the rule is determined based on the operation state of the at least one other device. 10. The electronic device of claim 8, wherein the rule is determined based on at least one of temperature, weather, and an electricity rate. 11. The electronic device of claim 8, wherein the rule comprises information indicating a control operation for the at least one other device and the temperature controller. 12. The electronic device of claim 11, wherein the control operation comprises an operation for controlling a temperature of the temperature controller. 13. The electronic device of claim 8, wherein the processor is further configured to:
determine whether a use time of the at least one other device exceeds an allowed use time; and determine that the fire danger exists in the at least one other device in response to determining that the use time of the at least one other device exceeds the allowed use time. 14. The electronic device of claim 8, wherein the processor is further configured to:
control the communication module to transmit a notification for fire danger to a user equipment. | 2,600 |
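The fire-danger logic of claims 1 and 6 can be sketched in a few lines. This is a hedged illustration, not the claimed implementation: the allowed-use limits, device names, and the specific rule applied (switching the risky device off and lowering the temperature controller) are invented for the example; the claims only require that danger be flagged when a use time exceeds an allowed use time and that a rule then control the device and the temperature controller.

```python
# Illustrative per-device allowed use times in minutes (assumed values).
ALLOWED_USE_MINUTES = {"iron": 30, "oven": 120}

def fire_danger(device: str, use_minutes: int) -> bool:
    """Claim 6: danger exists when use time exceeds the allowed use time."""
    return use_minutes > ALLOWED_USE_MINUTES.get(device, float("inf"))

def eliminate(device_state: dict, thermostat: dict):
    """Claim 1: apply a rule controlling both the risky device and the
    temperature controller (here: power off and cap the setpoint)."""
    device_state["on"] = False
    thermostat["setpoint_c"] = min(thermostat["setpoint_c"], 18)
    return device_state, thermostat
```

A notification to user equipment (claim 7 / claim 14) would then be triggered whenever `fire_danger` returns True.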
10,765 | 10,765 | 15,723,964 | 2,649 | A method and console are provided that create an explicit talk group list for a user device, the explicit talk group list is created by the user device and includes a plurality of first talk groups. A privileged user device, such as a console, creates an implicit talk group list for the user device. The implicit talk group list includes a plurality of second talk groups. The explicit talk group list is combined with the implicit talk group list to form a scan list. If the number of talk groups in the scan list exceeds a predetermined threshold, enough talk groups are removed from the scan list until the number of talk groups in the scan list equals the predetermined threshold. | 1. A method comprising:
creating an explicit talk group list for a user device, the explicit talk group list created by the user device and comprising a plurality of first talk groups; creating an implicit talk group list for the user device, the implicit talk group list created by a privileged user device coupled with the user device and comprising a plurality of second talk groups; and selecting an active talk group for the user device from one of the plurality of first talk groups or one of the plurality of second talk groups. 2. The method of claim 1, the method further comprising the step of moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list. 3. The method of claim 2, wherein the step of moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list comprises moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list if the one of the plurality of first talk groups is available to the user device. 4. The method of claim 2, wherein the step of moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list comprises moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list if the one of the plurality of first talk groups is on a scan list of the user device. 5. The method of claim 1, the method further comprising the step of moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list. 6. The method of claim 5, wherein the step of moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list comprises moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list if the one of the plurality of second talk groups is available to the user device. 7. 
The method of claim 5, wherein the step of moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list comprises moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list if the one of the plurality of second talk groups is on a scan list of the user device. 8. The method of claim 1, wherein the step of creating an implicit talk group list for the user device comprises creating an implicit talk group list for the user device that lasts for a predetermined period of time. 9. The method of claim 1, the method further comprising the step of removing, by the privileged user device, the active talk group for the user device. 10. The method of claim 9, the method further comprising the step of selecting, by the privileged user device, a second active talk group for the user device from one of the plurality of first talk groups or one of the plurality of second talk groups. 11. The method of claim 10, wherein the second active talk group is selected based at least in part upon the last talk group selected by the user device. 12. The method of claim 10, wherein the second active talk group is selected based at least in part upon the last talk group assigned by the privileged user device. 13. The method of claim 1, the method further comprising the step of selecting a second active talk group for the user device by the user device from one of the plurality of first talk groups or one of the plurality of second talk groups. 14. The method of claim 13, the method further comprising the step of providing, by the user device, a notification that the second active talk group is not one of the plurality of second talk groups. 15. The method of claim 1, wherein the step of selecting an active talk group for the user device comprises selecting an active talk group for the user device by the privileged user device. 16. 
The method of claim 1, wherein the step of selecting an active talk group for the user device comprises selecting an active talk group for the user device by a network device. 17. The method of claim 1, wherein the step of selecting an active talk group for the user device comprises selecting an active talk group for the user device by the user device. 18. A method comprising:
creating an explicit talk group list for a user device, the explicit talk group list created by the user device and comprising a plurality of first talk groups; creating an implicit talk group list for the user device, the implicit talk group list created by a privileged user device coupled with the user device and comprising a plurality of second talk groups; combining the explicit talk group list with the implicit talk group list to form a scan list; and removing, if the number of talk groups in the scan list exceeds a predetermined threshold, talk groups from the scan list until the number of talk groups in the scan list equals the predetermined threshold. 19. The method of claim 18, wherein the step of creating an implicit talk group list for the user device comprises creating an implicit talk group list for the user device that lasts for a predetermined period of time. 20. A privileged user device comprising a processor configured to:
create an explicit talk group list for a user device, the explicit talk group list created by the user device and comprising a plurality of first talk groups; create an implicit talk group list for the user device, the implicit talk group list created by a privileged user device coupled with the user device and comprising a plurality of second talk groups; combine the explicit talk group list with the implicit talk group list to form a scan list; and remove, if the number of talk groups in the scan list exceeds a predetermined threshold, talk groups from the scan list until the number of talk groups in the scan list equals the predetermined threshold. | A method and console are provided that create an explicit talk group list for a user device, where the explicit talk group list is created by the user device and includes a plurality of first talk groups. A privileged user device, such as a console, creates an implicit talk group list for the user device. The implicit talk group list includes a plurality of second talk groups. The explicit talk group list is combined with the implicit talk group list to form a scan list. If the number of talk groups in the scan list exceeds a predetermined threshold, talk groups are removed from the scan list until the number of talk groups in the scan list equals the predetermined threshold. 1. A method comprising:
creating an explicit talk group list for a user device, the explicit talk group list created by the user device and comprising a plurality of first talk groups; creating an implicit talk group list for the user device, the implicit talk group list created by a privileged user device coupled with the user device and comprising a plurality of second talk groups; and selecting an active talk group for the user device from one of the plurality of first talk groups or one of the plurality of second talk groups. 2. The method of claim 1, the method further comprising the step of moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list. 3. The method of claim 2, wherein the step of moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list comprises moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list if the one of the plurality of first talk groups is available to the user device. 4. The method of claim 2, wherein the step of moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list comprises moving one of the plurality of first talk groups from the explicit talk group list to the implicit talk group list if the one of the plurality of first talk groups is on a scan list of the user device. 5. The method of claim 1, the method further comprising the step of moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list. 6. The method of claim 5, wherein the step of moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list comprises moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list if the one of the plurality of second talk groups is available to the user device. 7. 
The method of claim 5, wherein the step of moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list comprises moving one of the plurality of second talk groups from the implicit talk group list to the explicit talk group list if the one of the plurality of second talk groups is on a scan list of the user device. 8. The method of claim 1, wherein the step of creating an implicit talk group list for the user device comprises creating an implicit talk group list for the user device that lasts for a predetermined period of time. 9. The method of claim 1, the method further comprising the step of removing, by the privileged user device, the active talk group for the user device. 10. The method of claim 9, the method further comprising the step of selecting, by the privileged user device, a second active talk group for the user device from one of the plurality of first talk groups or one of the plurality of second talk groups. 11. The method of claim 10, wherein the second active talk group is selected based at least in part upon the last talk group selected by the user device. 12. The method of claim 10, wherein the second active talk group is selected based at least in part upon the last talk group assigned by the privileged user device. 13. The method of claim 1, the method further comprising the step of selecting a second active talk group for the user device by the user device from one of the plurality of first talk groups or one of the plurality of second talk groups. 14. The method of claim 13, the method further comprising the step of providing, by the user device, a notification that the second active talk group is not one of the plurality of second talk groups. 15. The method of claim 1, wherein the step of selecting an active talk group for the user device comprises selecting an active talk group for the user device by the privileged user device. 16. 
The method of claim 1, wherein the step of selecting an active talk group for the user device comprises selecting an active talk group for the user device by a network device. 17. The method of claim 1, wherein the step of selecting an active talk group for the user device comprises selecting an active talk group for the user device by the user device. 18. A method comprising:
creating an explicit talk group list for a user device, the explicit talk group list created by the user device and comprising a plurality of first talk groups; creating an implicit talk group list for the user device, the implicit talk group list created by a privileged user device coupled with the user device and comprising a plurality of second talk groups; combining the explicit talk group list with the implicit talk group list to form a scan list; and removing, if the number of talk groups in the scan list exceeds a predetermined threshold, talk groups from the scan list until the number of talk groups in the scan list equals the predetermined threshold. 19. The method of claim 18, wherein the step of creating an implicit talk group list for the user device comprises creating an implicit talk group list for the user device that lasts for a predetermined period of time. 20. A privileged user device comprising a processor configured to:
create an explicit talk group list for a user device, the explicit talk group list created by the user device and comprising a plurality of first talk groups; create an implicit talk group list for the user device, the implicit talk group list created by a privileged user device coupled with the user device and comprising a plurality of second talk groups; combine the explicit talk group list with the implicit talk group list to form a scan list; and remove, if the number of talk groups in the scan list exceeds a predetermined threshold, talk groups from the scan list until the number of talk groups in the scan list equals the predetermined threshold. | 2,600
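The combine-and-trim procedure recited in claims 18 and 20 above can be sketched as follows. The function name and the tail-first trim policy are illustrative assumptions; the claims only require that, when the combined scan list exceeds the predetermined threshold, talk groups be removed until the list size equals that threshold.

```python
def build_scan_list(explicit, implicit, threshold):
    """Combine an explicit (user-created) and an implicit
    (console-assigned) talk group list into a scan list, then trim
    it down to the predetermined threshold.

    Dropping later (implicit) entries first is an assumption made
    for illustration; the claims do not specify which talk groups
    are removed.
    """
    # Combine the two lists, preserving order and dropping duplicates.
    scan_list = []
    for group in explicit + implicit:
        if group not in scan_list:
            scan_list.append(group)
    # Remove talk groups only if the size exceeds the threshold,
    # until the size equals the threshold.
    while len(scan_list) > threshold:
        scan_list.pop()
    return scan_list
```

For example, combining `["A", "B"]` with `["C", "D"]` under a threshold of 3 yields `["A", "B", "C"]`; when the combined list is already within the threshold, nothing is removed.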
10,766 | 10,766 | 16,527,601 | 2,628 | A display device includes an electronic control unit configured to: obtain a traveling plan from an automatic driving system of a vehicle, the traveling plan including a course of the vehicle under automatic drive control of the automatic driving system; obtain, from the automatic driving system, a system confidence level of the automatic drive control calculated based on at least an external environment around the vehicle; and display, on a display, a pointer as an image indicating the course during the automatic drive control in a display mode set based on the system confidence level. | 1. A display device comprising an electronic control unit configured to:
obtain a traveling plan from an automatic driving system of a vehicle, the traveling plan including a course of the vehicle under automatic drive control of the automatic driving system; obtain, from the automatic driving system, a system confidence level of the automatic drive control calculated based on at least an external environment around the vehicle; and display, on a display, a pointer as an image indicating the course during the automatic drive control in a display mode set based on the system confidence level. 2. The display device according to claim 1, wherein:
the display is a head-up display configured to project and display the pointer on a front windshield of the vehicle while superimposing the pointer on a landscape in front of the vehicle; and the electronic control unit is configured to:
set a first degree of transparency of the pointer to be higher than a second degree of transparency of the pointer, the first degree of transparency of the pointer being a degree of transparency of the pointer in a case where the system confidence level is lower than a confidence level threshold value, and the second degree of transparency of the pointer being a degree of transparency of the pointer in a case where the system confidence level is equal to or higher than the confidence level threshold value, or
set a degree of transparency of the pointer to be higher as the system confidence level is lower. 3. The display device according to claim 1, wherein:
the electronic control unit is configured to display a plurality of the pointers on the display; and the pointers respectively correspond to positions of the vehicle on the traveling plan at a plurality of future times at predetermined time intervals, the future times being set in advance. 4. The display device according to claim 1, wherein:
the electronic control unit is configured to display a plurality of the pointers on the display; and positions where the pointers are displayed are a plurality of positions of the vehicle on the course, the positions being spaced at predetermined intervals from the vehicle as a start point. 5. The display device according to claim 1, wherein the electronic control unit is configured to:
obtain information on a cause of reduction of the system confidence level from the automatic driving system; recognize the cause of reduction of the system confidence level based on the information on the cause of reduction of the system confidence level when the system confidence level is less than a cause specification threshold value; and set the display mode of the pointer based on the system confidence level and the cause of reduction of the system confidence level when the system confidence level is less than the cause specification threshold value. 6. The display device according to claim 1, wherein the system confidence level is an index that indicates a reliability of the automatic drive control. 7. The display device according to claim 1, wherein the display mode of the pointer includes at least one of degree of transparency, luminance, color, shape, size, presence or absence of blinking, blinking speed, presence or absence of flickering, and flickering amount. 8. The display device according to claim 1, wherein the electronic control unit is configured to display an arrival time in the vicinity of the pointer on the display, the arrival time being a time when the vehicle arrives at a position corresponding to the pointer. | A display device includes an electronic control unit configured to: obtain a traveling plan from an automatic driving system of a vehicle, the traveling plan including a course of the vehicle under automatic drive control of the automatic driving system; obtain, from the automatic driving system, a system confidence level of the automatic drive control calculated based on at least an external environment around the vehicle; and display, on a display, a pointer as an image indicating the course during the automatic drive control in a display mode set based on the system confidence level. 1. A display device comprising an electronic control unit configured to:
obtain a traveling plan from an automatic driving system of a vehicle, the traveling plan including a course of the vehicle under automatic drive control of the automatic driving system; obtain, from the automatic driving system, a system confidence level of the automatic drive control calculated based on at least an external environment around the vehicle; and display, on a display, a pointer as an image indicating the course during the automatic drive control in a display mode set based on the system confidence level. 2. The display device according to claim 1, wherein:
the display is a head-up display configured to project and display the pointer on a front windshield of the vehicle while superimposing the pointer on a landscape in front of the vehicle; and the electronic control unit is configured to:
set a first degree of transparency of the pointer to be higher than a second degree of transparency of the pointer, the first degree of transparency of the pointer being a degree of transparency of the pointer in a case where the system confidence level is lower than a confidence level threshold value, and the second degree of transparency of the pointer being a degree of transparency of the pointer in a case where the system confidence level is equal to or higher than the confidence level threshold value, or
set a degree of transparency of the pointer to be higher as the system confidence level is lower. 3. The display device according to claim 1, wherein:
the electronic control unit is configured to display a plurality of the pointers on the display; and the pointers respectively correspond to positions of the vehicle on the traveling plan at a plurality of future times at predetermined time intervals, the future times being set in advance. 4. The display device according to claim 1, wherein:
the electronic control unit is configured to display a plurality of the pointers on the display; and positions where the pointers are displayed are a plurality of positions of the vehicle on the course, the positions being spaced at predetermined intervals from the vehicle as a start point. 5. The display device according to claim 1, wherein the electronic control unit is configured to:
obtain information on a cause of reduction of the system confidence level from the automatic driving system; recognize the cause of reduction of the system confidence level based on the information on the cause of reduction of the system confidence level when the system confidence level is less than a cause specification threshold value; and set the display mode of the pointer based on the system confidence level and the cause of reduction of the system confidence level when the system confidence level is less than the cause specification threshold value. 6. The display device according to claim 1, wherein the system confidence level is an index that indicates a reliability of the automatic drive control. 7. The display device according to claim 1, wherein the display mode of the pointer includes at least one of degree of transparency, luminance, color, shape, size, presence or absence of blinking, blinking speed, presence or absence of flickering, and flickering amount. 8. The display device according to claim 1, wherein the electronic control unit is configured to display an arrival time in the vicinity of the pointer on the display, the arrival time being a time when the vehicle arrives at a position corresponding to the pointer. | 2,600
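Claim 2 of the display-device record above recites two alternatives for setting the pointer's degree of transparency from the system confidence level: a stepped mapping around a threshold, or a monotone mapping in which transparency rises as confidence falls. A minimal sketch of both alternatives follows; the numeric values and the linear mapping are illustrative assumptions, since the claims fix only the ordering relations.

```python
def pointer_transparency(confidence, threshold=0.5,
                         low_alpha=0.8, high_alpha=0.2):
    """First alternative of claim 2: a higher (first) degree of
    transparency when the system confidence level is below the
    confidence level threshold value, and a lower (second) degree
    otherwise. The concrete values 0.8/0.2 are assumptions."""
    return low_alpha if confidence < threshold else high_alpha


def pointer_transparency_continuous(confidence):
    """Second alternative of claim 2: transparency set higher as
    the system confidence level is lower. A linear mapping is an
    assumption; the claim only requires monotonicity."""
    return 1.0 - confidence
```

With a confidence level of 0.3 the stepped mapping yields the more transparent value (0.8), and the continuous mapping always returns a larger transparency for a lower confidence level.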
10,767 | 10,767 | 16,118,279 | 2,648 | In accordance with a first aspect of the present disclosure, a near field communication (NFC) device is provided, comprising a processor and a wake-up detector, wherein the processor and the wake-up detector are configured to operate in different power domains, and wherein the wake-up detector is configured to wake up the processor in response to receiving power derived from a radio frequency (RF) field. In accordance with a second aspect of the present disclosure, a method of managing power in an NFC device is conceived, said NFC device comprising a processor and a wake-up detector that operate in different power domains, wherein the wake-up detector wakes up the processor in response to receiving power derived from an RF field. | 1. A near field communication, NFC, device, comprising a processor and a wake-up detector, wherein the processor and the wake-up detector are configured to operate in different power domains, and wherein the wake-up detector is configured to wake up the processor in response to receiving power derived from a radio frequency, RF, field. 2. The NFC device of claim 1, wherein the wake-up detector is coupled to a signal line between a matching network and an input of a receiver of the NFC device, and wherein the wake-up detector is configured to receive the power derived from the RF field via said signal line. 3. The NFC device of claim 1, wherein the wake-up detector is communicatively coupled to a power management unit of the NFC device, and wherein the wake-up detector is configured to transmit a wake-up signal to said power management unit. 4. The NFC device of claim 1, wherein the wake-up detector is communicatively coupled to an external host processor, and wherein the wake-up detector is configured to transmit a wake-up signal to said host processor. 5. 
The NFC device of claim 1, wherein the wake-up detector is communicatively coupled to an external system power management unit, and wherein the wake-up detector is configured to transmit a wake-up signal to said system power management unit. 6. The NFC device of claim 1, said NFC device being by default in at least one of a power-off state, a deep-sleep state, a low-power state or a standby state. 7. The NFC device of claim 1, wherein waking up the processor comprises causing an NFC system state to change from standby to active by initiating a start-up sequence, wherein said start-up sequence is initiated in response to receiving, by the processor, a wake-up signal from the wake-up detector. 8. The NFC device of claim 3, wherein receiving, by the power management unit, the wake-up signal initiates a start-up sequence which causes an NFC system state to change from power-off to active. 9. The NFC device of claim 1, wherein the wake-up detector is implemented as a passive device, a semi-active device, or an active device. 10. A wearable device comprising the NFC device of claim 1. 11. A method of managing power in an NFC device, said NFC device comprising a processor and a wake-up detector that operate in different power domains, wherein the wake-up detector wakes up the processor in response to receiving power derived from an RF field. 12. The method of claim 11, wherein the wake-up detector is coupled to a signal line between a matching network and an input of a receiver of the NFC device, and wherein the wake-up detector receives the power derived from the RF field via said signal line. 13. The method of claim 11, wherein the wake-up detector is communicatively coupled to a power management unit of the NFC device, and wherein the wake-up detector transmits a wake-up signal to said power management unit. 14. 
The method of claim 11, wherein the wake-up detector is communicatively coupled to an external host processor, and wherein the wake-up detector transmits a wake-up signal to said host processor. 15. The method of claim 11, wherein the wake-up detector is communicatively coupled to an external system power management unit, and wherein the wake-up detector transmits a wake-up signal to said system power management unit. | In accordance with a first aspect of the present disclosure, a near field communication (NFC) device is provided, comprising a processor and a wake-up detector, wherein the processor and the wake-up detector are configured to operate in different power domains, and wherein the wake-up detector is configured to wake up the processor in response to receiving power derived from a radio frequency (RF) field. In accordance with a second aspect of the present disclosure, a method of managing power in an NFC device is conceived, said NFC device comprising a processor and a wake-up detector that operate in different power domains, wherein the wake-up detector wakes up the processor in response to receiving power derived from an RF field.1. A near field communication, NFC, device, comprising a processor and a wake-up detector, wherein the processor and the wake-up detector are configured to operate in different power domains, and wherein the wake-up detector is configured to wake up the processor in response to receiving power derived from a radio frequency, RF, field. 2. The NFC device of claim 1, wherein the wake-up detector is coupled to a signal line between a matching network and an input of a receiver of the NFC device, and wherein the wake-up detector is configured to receive the power derived from the RF field via said signal line. 3. 
The NFC device of claim 1, wherein the wake-up detector is communicatively coupled to a power management unit of the NFC device, and wherein the wake-up detector is configured to transmit a wake-up signal to said power management unit. 4. The NFC device of claim 1, wherein the wake-up detector is communicatively coupled to an external host processor, and wherein the wake-up detector is configured to transmit a wake-up signal to said host processor. 5. The NFC device of claim 1, wherein the wake-up detector is communicatively coupled to an external system power management unit, and wherein the wake-up detector is configured to transmit a wake-up signal to said system power management unit. 6. The NFC device of claim 1, said NFC device being by default in at least one of a power-off state, a deep-sleep state, a low-power state or a standby state. 7. The NFC device of claim 1, wherein waking up the processor comprises causing an NFC system state to change from standby to active by initiating a start-up sequence, wherein said start-up sequence is initiated in response to receiving, by the processor, a wake-up signal from the wake-up detector. 8. The NFC device of claim 3, wherein receiving, by the power management unit, the wake-up signal initiates a start-up sequence which causes an NFC system state to change from power-off to active. 9. The NFC device of claim 1, wherein the wake-up detector is implemented as a passive device, a semi-active device, or an active device. 10. A wearable device comprising the NFC device of claim 1. 11. A method of managing power in an NFC device, said NFC device comprising a processor and a wake-up detector that operate in different power domains, wherein the wake-up detector wakes up the processor in response to receiving power derived from an RF field. 12. 
The method of claim 11, wherein the wake-up detector is coupled to a signal line between a matching network and an input of a receiver of the NFC device, and wherein the wake-up detector receives the power derived from the RF field via said signal line. 13. The method of claim 11, wherein the wake-up detector is communicatively coupled to a power management unit of the NFC device, and wherein the wake-up detector transmits a wake-up signal to said power management unit. 14. The method of claim 11, wherein the wake-up detector is communicatively coupled to an external host processor, and wherein the wake-up detector transmits a wake-up signal to said host processor. 15. The method of claim 11, wherein the wake-up detector is communicatively coupled to an external system power management unit, and wherein the wake-up detector transmits a wake-up signal to said system power management unit. | 2,600 |
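The NFC record above (claims 1, 6 and 7) describes a wake-up detector that, on receiving power harvested from the RF field, signals the processor, which runs a start-up sequence that moves the system state from standby to active. A toy state machine capturing that flow is sketched below; the class and method names are illustrative assumptions, not terms from the claims.

```python
class NfcDevice:
    """Sketch of the claimed wake-up behaviour: the wake-up detector
    operates in a separate power domain (here modelled simply as a
    method that needs no battery state) and wakes the processor when
    RF-field power arrives."""

    def __init__(self):
        # Claim 6: the device is by default in a low-power state
        # (standby chosen here for illustration).
        self.state = "standby"

    def on_rf_field_power(self):
        # Wake-up detector: driven by power derived from the RF
        # field, it transmits a wake-up signal to the processor.
        self._wake_up_processor()

    def _wake_up_processor(self):
        # Claim 7: the wake-up signal initiates a start-up sequence
        # that changes the NFC system state from standby to active.
        if self.state == "standby":
            self.state = "active"
```

A device constructed in standby transitions to active on the first RF-field power event and stays active on subsequent events.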
10,768 | 10,768 | 13,431,263 | 2,626 | In accordance with an example embodiment of the present invention, an apparatus is disclosed. The apparatus includes a housing and a sensor system. The housing includes a display assembly and a cavity. The cavity is proximate the display assembly. The sensor system is at the cavity. The sensor system is configured to determine an amount of force exerted on the display assembly in response to a pressure change inside the cavity. | 1. An apparatus comprising:
a housing comprising a display assembly and a cavity, wherein the cavity is proximate the display assembly; and a sensor system at the cavity, wherein the sensor system is configured to determine an amount of force exerted on the display assembly in response to a pressure change inside the cavity. 2. An apparatus as in claim 1 wherein the sensor system is an air pressure sensor system. 3. An apparatus as in claim 1 wherein the sensor system comprises at least one microphone. 4. An apparatus as in claim 1 wherein the display assembly is an audio display assembly comprising audio, display, and tactile feedback functionality. 5. An apparatus as in claim 1 wherein the apparatus further comprises one or more piezoelectric actuators. 6. An apparatus as in claim 1 wherein the display assembly comprises a touch screen user interface, and wherein the sensor system is configured to determine an amount of force exerted on the display assembly in response to a positional change of the display when force is exerted on the touch screen user interface. 7. An apparatus as in claim 1 wherein the cavity is between the display assembly and a back cover section of the apparatus. 8. An apparatus as in claim 1 wherein the cavity is opposite a center portion of the display assembly. 9. An apparatus as in claim 1 wherein the force corresponds to a touch at a touch screen of the display assembly. 10. An apparatus as in claim 1 wherein the apparatus comprises a display suspension system connected between the housing and the display assembly. 11. An apparatus as in claim 1 wherein the cavity is substantially sealed. 12. An apparatus as in claim 1 further comprising:
at least one processor; and
at least one memory including computer program code,
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
sense a change in the air pressure at the cavity; and
determine an amount of force exerted on a touch screen of the display assembly. 13. An apparatus as in claim 1 wherein the apparatus is a mobile phone. 14. A method comprising:
measuring an air pressure at a cavity of a device, wherein the cavity is proximate a display assembly of the device; and determining an amount of force exerted on the display assembly based on a change in air pressure at the cavity. 15. A method as in claim 14 wherein the display assembly is an audio display assembly comprising audio, display and tactile feedback functionality. 16. A method as in claim 14 wherein the display assembly comprises a touch screen user interface. 17. A method as in claim 14 wherein the cavity is between the display assembly and a back cover section of the device. 18. A method as in claim 14 further comprising performing an operation based, at least partially, upon the determined force on the display assembly. 19. A method as in claim 18 wherein the performed operation is based on a value of the determined force. 20. A computer program product comprising a computer-readable medium bearing computer program code embodied therein for use with a computer, the computer program code comprising:
code for measuring an air pressure at a cavity of a device, wherein the cavity is proximate a display assembly of the device; code for determining an amount of force exerted on the display assembly based on a change in air pressure at the cavity; and code for performing an operation based, at least partially, upon the determined force on the display assembly. 21. A computer program product as in claim 20 wherein the code for measuring further comprises code for measuring the air pressure at a sealed cavity of the device, wherein the cavity is between an audio display assembly of the device and a housing section of the device. | In accordance with an example embodiment of the present invention, an apparatus is disclosed. The apparatus includes a housing and a sensor system. The housing includes a display assembly and a cavity. The cavity is proximate the display assembly. The sensor system is at the cavity. The sensor system is configured to determine an amount of force exerted on the display assembly in response to a pressure change inside the cavity. 1. An apparatus comprising:
a housing comprising a display assembly and a cavity, wherein the cavity is proximate the display assembly; and a sensor system at the cavity, wherein the sensor system is configured to determine an amount of force exerted on the display assembly in response to a pressure change inside the cavity. 2. An apparatus as in claim 1 wherein the sensor system is an air pressure sensor system. 3. An apparatus as in claim 1 wherein the sensor system comprises at least one microphone. 4. An apparatus as in claim 1 wherein the display assembly is an audio display assembly comprising audio, display, and tactile feedback functionality. 5. An apparatus as in claim 1 wherein the apparatus further comprises one or more piezoelectric actuators. 6. An apparatus as in claim 1 wherein the display assembly comprises a touch screen user interface, and wherein the sensor system is configured to determine an amount of force exerted on the display assembly in response to a positional change of the display when force is exerted on the touch screen user interface. 7. An apparatus as in claim 1 wherein the cavity is between the display assembly and a back cover section of the apparatus. 8. An apparatus as in claim 1 wherein the cavity is opposite a center portion of the display assembly. 9. An apparatus as in claim 1 wherein the force corresponds to a touch at a touch screen of the display assembly. 10. An apparatus as in claim 1 wherein the apparatus comprises a display suspension system connected between the housing and the display assembly. 11. An apparatus as in claim 1 wherein the cavity is substantially sealed. 12. An apparatus as in claim 1 further comprising:
at least one processor; and
at least one memory including computer program code,
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
sense a change in the air pressure at the cavity; and
determine an amount of force exerted on a touch screen of the display assembly. 13. An apparatus as in claim 1 wherein the apparatus is a mobile phone. 14. A method comprising:
measuring an air pressure at a cavity of a device, wherein the cavity is proximate a display assembly of the device; and determining an amount of force exerted on the display assembly based on a change in air pressure at the cavity. 15. A method as in claim 14 wherein the display assembly is an audio display assembly comprising audio, display and tactile feedback functionality. 16. A method as in claim 14 wherein the display assembly comprises a touch screen user interface. 17. A method as in claim 14 wherein the cavity is between the display assembly and a back cover section of the device. 18. A method as in claim 14 further comprising performing an operation based, at least partially, upon the determined force on the display assembly. 19. A method as in claim 18 wherein the performed operation is based on a value of the determined force. 20. A computer program product comprising a computer-readable medium bearing computer program code embodied therein for use with a computer, the computer program code comprising:
code for measuring an air pressure at a cavity of a device, wherein the cavity is proximate a display assembly of the device; and code for determining an amount of force exerted on the display assembly based on a change in air pressure at the cavity; and code for performing an operation based, at least partially, upon the determined force on the display assembly. 21. A computer program product as in claim 20 wherein the code for measuring further comprises code for measuring the air pressure at a sealed cavity of the device, wherein the cavity is between an audio display assembly of the device and a housing section of the device. | 2,600 |
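The claims above describe inferring touch force on a display from a pressure change in a sealed cavity behind it. As a minimal illustrative sketch of that principle (not the patented implementation), the force transmitted through the cavity air can be estimated as pressure change times effective area; the 50 Pa and 40 cm² figures below are assumed values, not from the claims:

```python
# Sketch of the claimed sensing principle: pressing the display compresses
# the sealed cavity, and the resulting pressure rise maps to a force
# estimate via F = deltaP * A. All numeric values are assumptions.

def force_from_pressure_change(delta_p_pa: float, effective_area_m2: float) -> float:
    """Estimated force on the display, F = deltaP * A (newtons)."""
    return delta_p_pa * effective_area_m2

# Assumed 50 Pa pressure rise over an effective display area of 40 cm^2.
force_n = force_from_pressure_change(50.0, 40e-4)
print(f"estimated force: {force_n:.2f} N")  # 0.20 N
```

A real device would calibrate this mapping empirically, since the display suspension also absorbs part of the applied force.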
10,769 | 10,769 | 15,118,632 | 2,647 | The present invention relates to methods, systems, LI systems and nodes in a telecommunication network for providing bandwidth optimization by means of a tokenizer functionality and a restore functionality. It is further provided a token-content-synch process for synchronizing the tokenizer functionality and the restore functionality. | 1. A method in a telecommunications network, the method comprising the steps of:
generating by a first node a corresponding token for an original content (OC); sending from said first node the OC and the corresponding token over a synchronization plane to one or more nodes or User Equipments (UEs) comprising restore functionality; storing the token and the OC by means of the restore functionality; sending from said first node the token over a user plane to one or more nodes comprising restore functionality or to the UE having generated a request for a corresponding OC; receiving the token in the UE having generated the request or in a node of the communication network, the node or the UE comprising restore functionality; restoring the OC by means of the token. 2. The method according to claim 1, wherein the telecommunications network is associated with a Lawful Intercept (LI) system, which comprises restore functionality, the step of sending from said first node the OC and the corresponding token over the synchronization plane involves:
sending from said first node the OC and the corresponding token over the synchronization plane to the LI system associated with the telecommunications network. 3. The method according to claim 2, the step of receiving the token further comprises the steps of:
receiving the token in the LI system comprising restore functionality; restoring the OC; sending the OC to a Law Enforcement Monitoring Facility. 4. The method according to claim , wherein the telecommunications network comprises a main content node comprising at least one OC, the method comprises the step of:
sending from the main content node an OC to a LI system associated with the telecommunications network via a link connected to said LI system. 5. The method according to claim 1, wherein the first node of the telecommunications network comprises at least one OC, the step of sending from said first node the OC and the corresponding token over the synchronization plane to one or more nodes or User Equipments (UEs) comprising restore functionality further involves:
sending from the first node an OC to a LI system associated with the telecommunications network via a link connected to said LI system. 6. The method according to claim 5, wherein the method further comprises:
sending from the first node a token corresponding to the OC to the LI system associated with the telecommunications network via a link connected to said LI system. 7. A method in a Lawful Intercept (LI) system, the method comprising:
the LI system receiving from a first node in a telecommunications network an original content (OC) and a corresponding token corresponding to the OC; the LI system receiving from a second node in the telecommunications network the corresponding token corresponding to the OC; and the LI system restoring the OC using the received corresponding token. 8. The method according to claim 7, wherein the method further comprises:
after restoring the OC using the corresponding token, sending the restored OC to a Law Enforcement Monitoring Facility. 9. A system in a telecommunications network, the system comprising:
a first node adapted to generate a corresponding token for an original content (OC), and to send the OC and the corresponding token over a synchronization plane to one or more of a second node comprising restore functionality adapted to store the token and the OC and a first user equipment (UE) comprising restore functionality adapted to store the token and the OC, wherein said first node is further adapted such that, in response to said first node receiving a request for the OC transmitted by said first UE or a second UE, said first node transmits over a user plane to one or more of said second node and said first UE the corresponding token but not the requested OC. 10. The system according to claim 9, wherein the first node is configured to send the OC and the corresponding token over a synchronization plane to a Lawful Intercept (LI) system associated with the telecommunications network, wherein the LI system comprises a restore functionality. 11. The system according to claim 10, wherein the LI system is adapted to receive the token, to restore the OC by means of the token, and to send the OC to a Law Enforcement Monitoring Facility. 12. The system according to claim 9, wherein the telecommunications network comprises a main content node comprising at least one OC, the system is adapted to send from the main content node an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 13. The system according to claim 9, wherein the first node of the telecommunications network comprises at least one OC, said node is further adapted to send the OC and the corresponding token over a synchronization plane to one or more nodes or User Equipments (UEs) comprising restore functionality, said first node is further adapted to send an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 14. 
The system according to claim 9, wherein the first node is further adapted to send a token corresponding to the OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 15. A Lawful Intercept (LI) system being adapted to:
receive from a first node in a telecommunications network an original content (OC) and a corresponding token corresponding to the OC; receive from a second node in the telecommunications network the corresponding token corresponding to the OC; and after receiving the corresponding token from the second node, restore the OC using the received corresponding token. 16. The system according to claim 15, the restore functionality is adapted to send the OC to a Law Enforcement Monitoring Facility. 17. A method in a node of a telecommunications network, the method comprising:
generating a corresponding token for an original content (OC); sending the OC and the corresponding token over a synchronization plane to one or more of a second node and a first user equipment (UE); as a result of receiving a request for the OC transmitted by a second UE, sending over a user plane to one or more of the second node and the first UE the corresponding token but not the OC. 18. The method according to claim 17, wherein the telecommunications network is associated with a Lawful Intercept (LI) system, which comprises restore functionality, the step of sending from said first node the OC and the corresponding token over the synchronization plane comprises:
sending the OC and the corresponding token over the synchronization plane to the LI system. 19. The method according to claim 17, wherein the method further comprises:
sending an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 20. The method according to claim 19, the method further comprises:
sending a token corresponding to the OC to the LI system associated with the telecommunications network via a link connected to said LI system. 21. A first node of a telecommunications network, the first node being adapted to:
generate a corresponding token for an original content (OC); send the OC and the corresponding token over a synchronization plane to one or more of a second node comprising restore functionality and a first user equipment comprising restore functionality; and as a result of receiving a request from the first UE or a second UE, send over a user plane to one or more of the second node and the first UE the corresponding token but not the OC. 22. The first node according to claim 21, the first node being adapted to send the OC and the corresponding token over a synchronization plane to a Lawful Intercept (LI) system associated with the telecommunications network. 23. The first node according to claim 21, the first node being adapted to send an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 24. The first node according to claim 23, the first node being adapted to send a token corresponding to the OC to the LI system associated with the telecommunications network via a link connected to said LI system. 25. A computer program product comprising a non-transitory computer readable medium comprising computer program code which, when run in a processor of a node, causes the node to perform the method of claim 17. 26. The computer program product of claim 25, wherein the computer program code comprises instructions for sending the OC and the corresponding token over the synchronization plane to a Lawful Intercept (LI) system. 27. The computer program product of claim 25, wherein the computer program code comprises instructions for sending an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. | The present invention relates to methods, systems, LI systems and nodes in a telecommunication network for providing bandwidth optimization by means of a tokenizer functionality and a restore functionality. 
It is further provided a token-content-synch process for synchronizing the tokenizer functionality and the restore functionality.1. A method in a telecommunications network, the method comprising the steps of:
generating by a first node a corresponding token for an original content (OC); sending from said first node the OC and the corresponding token over a synchronization plane to one or more nodes or User Equipments (UEs) comprising restore functionality; storing the token and the OC by means of the restore functionality; sending from said first node the token over a user plane to one or more nodes comprising restore functionality or to the UE having generated a request for a corresponding OC; receiving the token in the UE having generated the request or in a node of the communication network, the node or the UE comprising restore functionality; restoring the OC by means of the token. 2. The method according to claim 1, wherein the telecommunications network is associated with a Lawful Intercept (LI) system, which comprises restore functionality, the step of sending from said first node the OC and the corresponding token over the synchronization plane involves:
sending from said first node the OC and the corresponding token over the synchronization plane to the LI system associated with the telecommunications network. 3. The method according to claim 2, the step of receiving the token further comprises the steps of:
receiving the token in the LI system comprising restore functionality; restoring the OC; sending the OC to a Law Enforcement Monitoring Facility. 4. The method according to claim , wherein the telecommunications network comprises a main content node comprising at least one OC, the method comprises the step of:
sending from the main content node an OC to a LI system associated with the telecommunications network via a link connected to said LI system. 5. The method according to claim 1, wherein the first node of the telecommunications network comprises at least one OC, the step of sending from said first node the OC and the corresponding token over the synchronization plane to one or more nodes or User Equipments (UEs) comprising restore functionality further involves:
sending from the first node an OC to a LI system associated with the telecommunications network via a link connected to said LI system. 6. The method according to claim 5, wherein the method further comprises:
sending from the first node a token corresponding to the OC to the LI system associated with the telecommunications network via a link connected to said LI system. 7. A method in a Lawful Intercept (LI) system, the method comprising:
the LI system receiving from a first node in a telecommunications network an original content (OC) and a corresponding token corresponding to the OC; the LI system receiving from a second node in the telecommunications network the corresponding token corresponding to the OC; and the LI system restoring the OC using the received corresponding token. 8. The method according to claim 7, wherein the method further comprises:
after restoring the OC using the corresponding token, sending the restored OC to a Law Enforcement Monitoring Facility. 9. A system in a telecommunications network, the system comprising:
a first node adapted to generate a corresponding token for an original content (OC), and to send the OC and the corresponding token over a synchronization plane to one or more of a second node comprising restore functionality adapted to store the token and the OC and a first user equipment (UE) comprising restore functionality adapted to store the token and the OC, wherein said first node is further adapted such that, in response to said first node receiving a request for the OC transmitted by said first UE or a second UE, said first node transmits over a user plane to one or more of said second node and said first UE the corresponding token but not the requested OC. 10. The system according to claim 9, wherein the first node is configured to send the OC and the corresponding token over a synchronization plane to a Lawful Intercept (LI) system associated with the telecommunications network, wherein the LI system comprises a restore functionality. 11. The system according to claim 10, wherein the LI system is adapted to receive the token, to restore the OC by means of the token, and to send the OC to a Law Enforcement Monitoring Facility. 12. The system according to claim 9, wherein the telecommunications network comprises a main content node comprising at least one OC, the system is adapted to send from the main content node an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 13. The system according to claim 9, wherein the first node of the telecommunications network comprises at least one OC, said node is further adapted to send the OC and the corresponding token over a synchronization plane to one or more nodes or User Equipments (UEs) comprising restore functionality, said first node is further adapted to send an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 14. 
The system according to claim 9, wherein the first node is further adapted to send a token corresponding to the OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 15. A Lawful Intercept (LI) system being adapted to:
receive from a first node in a telecommunications network an original content (OC) and a corresponding token corresponding to the OC; receive from a second node in the telecommunications network the corresponding token corresponding to the OC; and after receiving the corresponding token from the second node, restore the OC using the received corresponding token. 16. The system according to claim 15, the restore functionality is adapted to send the OC to a Law Enforcement Monitoring Facility. 17. A method in a node of a telecommunications network, the method comprising:
generating a corresponding token for an original content (OC); sending the OC and the corresponding token over a synchronization plane to one or more of a second node and a first user equipment (UE); as a result of receiving a request for the OC transmitted by a second UE, sending over a user plane to one or more of the second node and the first UE the corresponding token but not the OC. 18. The method according to claim 17, wherein the telecommunications network is associated with a Lawful Intercept (LI) system, which comprises restore functionality, the step of sending from said first node the OC and the corresponding token over the synchronization plane comprises:
sending the OC and the corresponding token over the synchronization plane to the LI system. 19. The method according to claim 17, wherein the method further comprises:
sending an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 20. The method according to claim 19, the method further comprises:
sending a token corresponding to the OC to the LI system associated with the telecommunications network via a link connected to said LI system. 21. A first node of a telecommunications network, the first node being adapted to:
generate a corresponding token for an original content (OC); send the OC and the corresponding token over a synchronization plane to one or more of a second node comprising restore functionality and a first user equipment comprising restore functionality; and as a result of receiving a request from the first UE or a second UE, send over a user plane to one or more of the second node and the first UE the corresponding token but not the OC. 22. The first node according to claim 21, the first node being adapted to send the OC and the corresponding token over a synchronization plane to a Lawful Intercept (LI) system associated with the telecommunications network. 23. The first node according to claim 21, the first node being adapted to send an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. 24. The first node according to claim 23, the first node being adapted to send a token corresponding to the OC to the LI system associated with the telecommunications network via a link connected to said LI system. 25. A computer program product comprising a non-transitory computer readable medium comprising computer program code which, when run in a processor of a node, causes the node to perform the method of claim 17. 26. The computer program product of claim 25, wherein the computer program code comprises instructions for sending the OC and the corresponding token over the synchronization plane to a Lawful Intercept (LI) system. 27. The computer program product of claim 25, wherein the computer program code comprises instructions for sending an OC to a Lawful Intercept (LI) system associated with the telecommunications network via a link connected to said LI system. | 2,600 |
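The claims in this record describe saving user-plane bandwidth by sending a compact token in place of the original content (OC), with a restore functionality that caches token/OC pairs received over the synchronization plane. A minimal sketch of that idea follows; using a SHA-256 content hash as the token is an illustrative assumption, not the claimed tokenizer:

```python
import hashlib

class RestoreFunction:
    """Caches (token, original content) pairs from the synchronization plane."""
    def __init__(self):
        self._store = {}

    def sync(self, token: str, original_content: bytes) -> None:
        # Synchronization plane: token and OC arrive together and are stored.
        self._store[token] = original_content

    def restore(self, token: str) -> bytes:
        # User plane: only the token arrives; the OC is restored locally.
        return self._store[token]

def tokenize(original_content: bytes) -> str:
    # A content hash is one hypothetical way to derive a compact token.
    return hashlib.sha256(original_content).hexdigest()

oc = b"large media object " * 1000
token = tokenize(oc)

node = RestoreFunction()
node.sync(token, oc)                 # sent once over the synchronization plane
assert node.restore(token) == oc     # later requests carry only the 64-char token
assert len(token) < len(oc)          # user-plane bandwidth saving
```

The LI-system variants in the claims follow the same pattern: the LI system holds its own restore functionality, so intercepted user-plane tokens can be expanded back to the OC before delivery to a monitoring facility.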
10,770 | 10,770 | 15,799,436 | 2,697 | A window for resistive heating and a camera apparatus including a window for resistive heating. The window includes a transparent member having an outer edge, wherein the transparent member is made of a first material, wherein the first material is a low conductivity material; and at least one set of two conductive pads disposed on the outer edge of the transparent member and electrically coupled to at least one source of electricity, wherein each conductive pad is made of a second material, wherein matter disposed on the transparent member is removed via resistive heating when electricity is conducted from the at least one source through the at least one set of two conductive pads and the transparent member. | 1. A window for resistive heating, comprising:
a transparent member having an outer edge, wherein the transparent member is made of a first material, wherein the first material is a low conductivity material; and at least one set of two conductive pads disposed on the outer edge of the transparent member and electrically coupled to at least one source of electricity, wherein each conductive pad is made of a second material, wherein matter disposed on the transparent member is removed via resistive heating when electricity is conducted from the at least one source through the at least one set of two conductive pads and the transparent member. 2. The window of claim 1, wherein the first material is a semiconductor. 3. The window of claim 2, wherein the first material is Germanium. 4. The window of claim 3, wherein the first material is an N-type Germanium semiconductor. 5. The window of claim 1, wherein the second material is copper. 6. The window of claim 1, wherein the at least one source of electricity is a flex printed circuit board (PCB). 7. The window of claim 6, wherein the window is disposed in a camera including the flex PCB and at least one lens, wherein the transparent member is disposed between the at least one lens and an environment around the camera. 8. The window of claim 1, wherein the transparent member has a first side and a second side, wherein the first side has a first surface coating and the second side has a second surface coating. 9. The window of claim 8, wherein the first surface coating is a high durability coating. 10. The window of claim 9, wherein the high durability coating is a diamond-like carbon coating. 11. The window of claim 8, wherein the second surface coating is an anti-reflective coating. 12. The window of claim 1, wherein the matter removed from the transparent member includes at least one of: ice, and fog. 13. The window of claim 1, wherein the matter is removed from the transparent member via evaporation. 14. 
The window of claim 1, wherein a resistivity of the transparent member is between 3 ohm-centimeters (Ω·cm) and 15 Ω·cm, inclusive. 15. A camera apparatus, comprising:
a thermal core including at least one source of electricity, at least one sensor, and a lens; and a window, the window further comprising:
a transparent member having an outer edge, wherein the transparent member is made of a first material, wherein the first material is a low conductivity material, wherein the lens is disposed between the transparent member and the at least one sensor; and
at least one set of two conductive pads disposed on the outer edge of the transparent member and electrically coupled to the at least one source of electricity, wherein each conductive pad is made of a second material, wherein the second material is a high conductivity material, wherein matter disposed on the transparent member is removed via resistive heating when electricity is conducted from the at least one source through the at least one set of two conductive pads and the transparent member. 16. The camera apparatus of claim 15, wherein the first material is a semiconductor. 17. The camera apparatus of claim 16, wherein the first material is Germanium. 18. The camera apparatus of claim 17, wherein the first material is an N-type Germanium semiconductor. 19. The camera apparatus of claim 15, wherein the at least one source of electricity is a printed circuit board. 20. The camera apparatus of claim 15, wherein the camera apparatus is an infrared camera. | A window for resistive heating and a camera apparatus including a window for resistive heating. The window includes a transparent member having an outer edge, wherein the transparent member is made of a first material, wherein the first material is a low conductivity material; and at least one set of two conductive pads disposed on the outer edge of the transparent member and electrically coupled to at least one source of electricity, wherein each conductive pad is made of a second material, wherein matter disposed on the transparent member is removed via resistive heating when electricity is conducted from the at least one source through the at least one set of two conductive pads and the transparent member.1. A window for resistive heating, comprising:
a transparent member having an outer edge, wherein the transparent member is made of a first material, wherein the first material is a low conductivity material; and at least one set of two conductive pads disposed on the outer edge of the transparent member and electrically coupled to at least one source of electricity, wherein each conductive pad is made of a second material, wherein matter disposed on the transparent member is removed via resistive heating when electricity is conducted from the at least one source through the at least one set of two conductive pads and the transparent member. 2. The window of claim 1, wherein the first material is a semiconductor. 3. The window of claim 2, wherein the first material is Germanium. 4. The window of claim 3, wherein the first material is an N-type Germanium semiconductor. 5. The window of claim 1, wherein the second material is copper. 6. The window of claim 1, wherein the at least one source of electricity is a flex printed circuit board (PCB). 7. The window of claim 6, wherein the window is disposed in a camera including the flex PCB and at least one lens, wherein the transparent member is disposed between the at least one lens and an environment around the camera. 8. The window of claim 1, wherein the transparent member has a first side and a second side, wherein the first side has a first surface coating and the second side has a second surface coating. 9. The window of claim 8, wherein the first surface coating is a high durability coating. 10. The window of claim 9, wherein the high durability coating is a diamond-like carbon coating. 11. The window of claim 8, wherein the second surface coating is an anti-reflective coating. 12. The window of claim 1, wherein the matter removed from the transparent member includes at least one of: ice, and fog. 13. The window of claim 1, wherein the matter is removed from the transparent member via evaporation. 14. 
The window of claim 1, wherein a resistivity of the transparent member is between 3 ohm-centimeters (Ω·cm) and 15 Ω·cm, inclusive. 15. A camera apparatus, comprising:
a thermal core including at least one source of electricity, at least one sensor, and a lens; and a window, the window further comprising:
a transparent member having an outer edge, wherein the transparent member is made of a first material, wherein the first material is a low conductivity material, wherein the lens is disposed between the transparent member and the at least one sensor; and
at least one set of two conductive pads disposed on the outer edge of the transparent member and electrically coupled to the at least one source of electricity, wherein each conductive pad is made of a second material, wherein the second material is a high conductivity material, wherein matter disposed on the transparent member is removed via resistive heating when electricity is conducted from the at least one source through the at least one set of two conductive pads and the transparent member. 16. The camera apparatus of claim 15, wherein the first material is a semiconductor. 17. The camera apparatus of claim 16, wherein the first material is Germanium. 18. The camera apparatus of claim 17, wherein the first material is an N-type Germanium semiconductor. 19. The camera apparatus of claim 15, wherein the at least one source of electricity is a printed circuit board. 20. The camera apparatus of claim 15, wherein the camera apparatus is an infrared camera. | 2,600 |
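This record heats the transparent member itself by passing current through it between edge pads, and claim 14 recites a resistivity of 3 to 15 Ω·cm. A back-of-the-envelope sketch of the resulting Joule heating follows; the slab geometry and drive voltage are illustrative assumptions, only the resistivity bounds come from the claim:

```python
# Joule heating estimate for a semiconductor window driven between two
# edge pads. Geometry (2 cm x 2 cm x 0.1 cm) and 12 V drive are assumed;
# the resistivity range 3-15 ohm*cm is the one recited in claim 14.

def slab_resistance(rho_ohm_cm: float, length_cm: float,
                    width_cm: float, thickness_cm: float) -> float:
    """Resistance between pads on opposite edges: R = rho * L / (W * t)."""
    return rho_ohm_cm * length_cm / (width_cm * thickness_cm)

def joule_power(voltage_v: float, resistance_ohm: float) -> float:
    """Dissipated power driving resistive heating: P = V^2 / R."""
    return voltage_v ** 2 / resistance_ohm

for rho in (3.0, 15.0):  # claim 14 bounds, in ohm*cm
    r = slab_resistance(rho, length_cm=2.0, width_cm=2.0, thickness_cm=0.1)
    p = joule_power(12.0, r)
    print(f"rho={rho:5.1f} ohm*cm -> R={r:6.1f} ohm, P={p:5.2f} W")
```

With these assumed dimensions the claimed resistivity range spans roughly 30 to 150 Ω, i.e. about 4.8 W down to about 1 W at 12 V, which suggests why the claims bound resistivity on both sides: too low shorts the supply, too high heats too slowly to clear ice or fog.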
10,771 | 10,771 | 15,151,247 | 2,625 | An electronic device described herein includes a touch screen controller for a touch sensitive display carried by a portable housing. The electronic device is configured to operate in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display, and operate in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display. The electronic device is further configured to determine whether the object is in contact with a peripheral edge of the portable housing by determining whether the object is adjacent opposite sides of the touch sensitive display, based on detection of the object being adjacent to the touch sensitive display. | 1. An electronic device, comprising:
a touch screen controller for a touch sensitive display carried by a portable housing configured to: operate in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display; operate in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display; and determine whether the object is in contact with a peripheral edge of the portable housing based on determination of the object being adjacent to the touch sensitive display. 2. The electronic device of claim 1, wherein, when operating in the high detection threshold mode, the touch screen controller scans each sense line of the touch sensitive display; and wherein, when operating in the low detection threshold mode, the touch screen controller scans a subset of the sense lines of the touch sensitive display. 3. The electronic device of claim 2, wherein the subset of the sense lines of the touch sensitive display includes sense lines located toward at least one side of the touch sensitive display and not sense lines located toward a center of the touch sensitive display. 4. The electronic device of claim 2, wherein the subset of the sense lines of the touch sensitive display includes a pair of sense lines adjacent each side of the touch sensitive display and excludes other sense lines of the touch sensitive display. 5. The electronic device of claim 1, wherein, when operating in the high detection threshold mode, the touch screen controller drives each force line of the touch sensitive display; and wherein, when operating in the low detection threshold mode, the touch screen controller drives a subset of the force lines of the touch sensitive display and scans each sense line of the touch sensitive display. 6. 
The electronic device of claim 5, wherein the subset of the force lines of the touch sensitive display includes force lines located toward at least one side of the touch sensitive display and not force lines located toward a center of the touch sensitive display. 7. The electronic device of claim 5, wherein the subset of the force lines of the touch sensitive display includes a pair of force lines adjacent each side of the touch sensitive display and excludes other force lines of the touch sensitive display. 8. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by determining whether the object is adjacent to a first side of the touch sensitive display and then determining whether the object is adjacent to a second side of the touch sensitive display opposite from the first side. 9. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent the touch sensitive display by simultaneously determining whether the object is adjacent to first and second opposing sides of the touch sensitive display. 10. The electronic device of claim 1, wherein the touch screen controller determines that the object is hovering above the touch sensitive display based on detection of the object being adjacent to the touch sensitive display but not being adjacent to first and second opposing sides of the touch sensitive display. 11. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
scanning a first plurality of adjacent sense lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of a difference between strength values for each sense line of the first plurality of adjacent sense lines being greater than a threshold; and scanning a second plurality of adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a difference between strength values for each sense line of the second plurality of adjacent sense lines being greater than the threshold. 12. The electronic device of claim 11, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the difference between strength values for each sense line of the first plurality of adjacent sense lines is less than the threshold or by determining that the difference between strength values for each sense line of the second plurality of adjacent sense lines is less than the threshold. 13. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
driving a first plurality of adjacent force lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of a difference between strength values for sense lines of the touch sensitive display intersecting different ones of the first plurality of adjacent force lines being greater than a threshold; and driving a second plurality of adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a difference between strength values for sense lines of the touch sensitive display intersecting different ones of the second plurality of adjacent force lines being greater than the threshold. 14. The electronic device of claim 13, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the difference between strength values for each sense line intersecting different ones of the first plurality of force lines is less than the threshold or by determining that the difference between strength values for each sense line intersecting different ones of the second plurality of force lines is less than the threshold. 15. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
scanning first and second adjacent sense lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of a strength value for the first sense line for the first side being greater than a first threshold and a strength value for the second sense line for the first side being less than a second threshold, the second threshold being less than the first threshold; and scanning first and second adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a strength value for the first sense line for the second side being greater than the first threshold and a strength value for the second sense line for the second side being less than the second threshold. 16. The electronic device of claim 15, wherein the first threshold is predetermined;
and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value for the first sense line for the first side. 17. The electronic device of claim 15, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the strength value for at least the second sense line for the first side is greater than the second threshold or by determining that the strength value for at least the second sense line for the second side is greater than the second threshold. 18. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
driving first and second adjacent force lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of strength values for sense lines of the touch sensitive display intersecting the first force line for the first side being greater than a first threshold and strength values for sense lines of the touch sensitive display intersecting the second force line for the first side being less than a second threshold, the second threshold being less than the first threshold; and driving first and second adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a strength value for each sense line intersecting the first force line for the second side being greater than the first threshold and a strength value for each sense line intersecting the second force line for the second side being less than the second threshold. 19. The electronic device of claim 18, wherein the first threshold is predetermined;
and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value for the first sense line for the first side. 20. The electronic device of claim 18, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the strength value for each sense line intersecting the second force line for the first side is greater than the second threshold or by determining that the strength value for each sense line intersecting the second force line for the second side is greater than the second threshold. 21. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by first determining whether the object is adjacent to a first side of the touch sensitive display, and then determining whether the object is adjacent to a second side of the touch sensitive display opposite to the first side. 22. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by simultaneously determining whether the object is adjacent to a first side of the touch sensitive display and whether the object is adjacent to a second side of the touch sensitive display opposite to the first side. 23. The electronic device of claim 1, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object has tapped the portable housing as a function of the object leaving a detection boundary area, returning to the detection boundary area, remaining within the detection boundary area for a given period of time, and then leaving the detection boundary area. 24. The electronic device of claim 1, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object has tapped the portable housing as a function of an additional portion of the object coming into contact with the portable housing outside of the detection boundary area and then leaving contact with the portable housing, within a given period of time. 25. The electronic device of claim 1, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object is a user's hand gripping the portable housing as a function of the object remaining within the detection boundary area for at least a threshold period of time. 26. The electronic device of claim 25, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object has moved in a gesture as a function of a portion of the object leaving one detection boundary area and moving in a predetermined pattern while maintaining contact with the portable housing. 27. The electronic device of claim 25, wherein the touch screen controller is further configured to:
set at least one additional detection boundary area about each additional location where an additional portion of the object comes into contact with the portable housing after the determination that the object is the user's hand gripping the portable housing; determine that the additional portion of the object is the user's hand gripping the portable housing as a function of the additional portion of the object remaining within the at least one additional detection boundary area for at least an additional threshold period of time. 28. The electronic device of claim 25, further comprising processing circuitry; and
wherein the touch screen controller is configured to output to the processing circuitry each location where the object remained within an associated boundary area for at least the threshold period of time as a grip location. 29. An electronic device, comprising:
a portable housing; a touch sensitive display carried by the portable housing, the touch sensitive display including a plurality of sense lines; a touch screen controller coupled to the plurality of sense lines and configured to:
operate in a screen touch detection mode to detect a user's hand being in contact with the touch sensitive display as a function of reading strength values from at least some of the plurality of sense lines;
wherein, in the screen touch detection mode, the user's hand is detected as being in contact with the touch sensitive display as a function of read strength values being greater than a first threshold; operate in a portable housing touch detection mode to detect the user's hand being adjacent to the touch sensitive display, based on lack of detection of the user's hand being in contact with the touch sensitive display and as a function of reading strength values from at least some of the plurality of sense lines; wherein, in the portable housing touch detection mode, the user's hand is detected as being adjacent to the touch sensitive display as a function of read strength values being greater than a second threshold; wherein the second threshold is less than the first threshold; determine whether the user's hand is in contact with the portable housing by detecting whether the user's hand is adjacent opposite sides of the touch sensitive display, based on detection of the user's hand being adjacent to the touch sensitive display. 30. The electronic device of claim 29, wherein, when operating in the screen touch detection mode, the touch screen controller reads strength values from each sense line of the plurality thereof; and wherein, when operating in the portable housing touch detection mode, the touch screen controller reads strength values from a subset of the plurality of sense lines and not each of the plurality of sense lines. 31. The electronic device of claim 30, wherein the subset of the sense lines includes sense lines located toward sides of the touch sensitive display and not sense lines located toward a center of the touch sensitive display. 32. The electronic device of claim 30, wherein the subset of the sense lines includes a pair of sense lines adjacent each side of the touch sensitive display and excludes other sense lines. 33. 
The electronic device of claim 29, wherein the touch sensitive display includes a plurality of force lines; wherein, when operating in the screen touch detection mode, the touch screen controller drives each force line of the plurality thereof; and wherein, when operating in the portable housing touch detection mode, the touch screen controller drives a subset of the plurality of force lines and not each of the plurality of force lines but reads strength values from each of the plurality of sense lines. 34. The electronic device of claim 33, wherein the subset of the force lines includes force lines located toward sides of the touch sensitive display and not force lines located toward a center of the touch sensitive display. 35. The electronic device of claim 33, wherein the subset of the force lines includes a pair of force lines adjacent each side of the touch sensitive display and excludes other force lines. 36. The electronic device of claim 33, wherein the touch screen controller detects whether the user's hand is adjacent to the touch sensitive display by detecting whether the user's hand is adjacent to a first side of the touch sensitive display and then detecting whether the user's hand is adjacent to a second side of the touch sensitive display opposite from the first side. 37. The electronic device of claim 33, wherein the touch screen controller detects whether the user's hand is adjacent the touch sensitive display by simultaneously detecting whether the user's hand is adjacent to first and second opposing sides of the touch sensitive display. 38. The electronic device of claim 30, wherein the touch screen controller detects whether the user's hand is adjacent to the touch sensitive display by detecting whether the user's hand is adjacent to a first side of the touch sensitive display and then detecting whether the user's hand is adjacent to a second side of the touch sensitive display opposite from the first side. 39. 
The electronic device of claim 30, wherein the touch screen controller detects whether the user's hand is adjacent the touch sensitive display by simultaneously detecting whether the user's hand is adjacent to first and second opposing sides of the touch sensitive display. 40. The electronic device of claim 30, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values from a first pair of adjacent sense lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of a difference between the strength values for each sense line of the first pair of adjacent sense lines being greater than a threshold; and reading strength values from a second pair of adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of a difference between the strength values for each sense line of the second pair of adjacent sense lines being greater than the threshold. 41. The electronic device of claim 30, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values for first and second adjacent sense lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of the strength value for the first sense line for the first side being greater than a first threshold and the strength value for the second sense line for the first side being less than a second threshold, the second threshold being less than the first threshold; reading strength values for first and second adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of the strength value for the first sense line for the second side being greater than the first threshold and the strength value for the second sense line for the second side being less than the second threshold. 42. The electronic device of claim 41, wherein the first threshold is predetermined;
and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value for the first sense line for the first side. 43. The electronic device of claim 33, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values from sense lines intersecting a first pair of adjacent force lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of a difference between the strength values for each sense line intersecting the first pair of adjacent force lines being greater than a threshold; and reading strength values from sense lines intersecting a second pair of adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of a difference between the strength values for each sense line intersecting the second pair of adjacent force lines being greater than the threshold. 44. The electronic device of claim 33, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values for sense lines intersecting first and second adjacent force lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of the strength values for each sense line intersecting the first force line for the first side being greater than a first threshold and the strength values for each sense line intersecting the second force line for the first side being less than a second threshold, the second threshold being less than the first threshold; reading strength values for sense lines intersecting first and second adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of the strength values for sense lines intersecting the first force line for the second side being greater than the first threshold and the strength values for the sense lines intersecting the second force line for the second side being less than the second threshold. 45. The electronic device of claim 44, wherein the first threshold is predetermined; and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value of the sense lines intersecting the first force line for the first side. 46. A touch screen controller chip for a touch sensitive display carried by a portable housing, the touch screen controller chip comprising:
circuitry configured to: operate in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display; operate in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display; and determine whether the object is in contact with a peripheral edge of the portable housing based on determination of the object being adjacent to the touch sensitive display. 47. The touch screen controller chip of claim 46, wherein, when operating in the high detection threshold mode, the circuitry scans each sense line of the touch sensitive display; and
wherein, when operating in the low detection threshold mode, the circuitry scans a subset of the sense lines of the touch sensitive display. 48. The touch screen controller chip of claim 47, wherein the subset of the sense lines of the touch sensitive display includes sense lines located toward at least one side of the touch sensitive display and not sense lines located toward a center of the touch sensitive display. 49. The touch screen controller chip of claim 47, wherein the subset of the sense lines of the touch sensitive display includes a plurality of sense lines adjacent at least one side of the touch sensitive display and excludes other sense lines of the touch sensitive display. 50. The touch screen controller chip of claim 46, wherein the circuitry is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object is a user's hand gripping the portable housing as a function of the object remaining within the detection boundary area for at least a threshold period of time. 51. A method of operating a touch screen controller for a touch sensitive display carried by a portable housing, the method comprising:
operating in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display; operating in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display; and determining whether the object is in contact with a peripheral edge of the portable housing based on determination of the object being adjacent to the touch sensitive display. 52. The method of claim 51, wherein, when operating in the high detection threshold mode, each sense line of the touch sensitive display is scanned; and wherein, when operating in the low detection threshold mode, a subset of the sense lines of the touch sensitive display is scanned. 53. The method of claim 51, further comprising:
setting detection boundary areas about each location where the object is in contact with the portable housing; and determining that the object has tapped the portable housing as a function of the object leaving a detection boundary area, returning to the detection boundary area, remaining within the detection boundary area for a given period of time, and then leaving the detection boundary area. 54. The method of claim 51, further comprising:
setting detection boundary areas about each location where the object is in contact with the portable housing; and determining that the object has tapped the portable housing as a function of an additional portion of the object coming into contact with the portable housing outside of the detection boundary area and then leaving contact with the portable housing, within a given period of time. 55. The method of claim 51, further comprising:
setting detection boundary areas about each location where the object is in contact with the portable housing; determining that the object is a user's hand gripping the portable housing as a function of the object remaining within the detection boundary area for at least a threshold period of time. | An electronic device described herein includes a touch screen for a touch sensitive display carried by a portable housing. The electronic device is configured to operate in a high. detection threshold mode to determine whether an object is in contact with the touch sensitive display, and operate in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display. The electronic device is further configured to determine whether the object is in contact with a peripheral edge of the portable housing by determining whether the object is adjacent opposite sides of the touch sensitive display, based on detection of the object being adjacent to the touch sensitive display.1. An electronic device, comprising:
a touch screen controller for a touch sensitive display carried by a portable housing configured to: operate in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display; operate in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display; and determine whether the object is in contact with a peripheral edge of the portable housing based on determination of the object being adjacent to the touch sensitive display. 2. The electronic device of claim 1, wherein, when operating in the high detection threshold mode, the touch screen controller scans each sense line of the touch sensitive display; and wherein, when operating in the low detection threshold mode, the touch screen controller scans a subset of the sense lines of the touch sensitive display. 3. The electronic device of claim 2, wherein the subset of the sense lines of the touch sensitive display includes sense lines located toward at least one side of the touch sensitive display and not sense lines located toward a center of the touch sensitive display. 4. The electronic device of claim 2, wherein the subset of the sense lines of the touch sensitive display includes a pair of sense lines adjacent each side of the touch sensitive display and excludes other sense lines of the touch sensitive display. 5. The electronic device of claim 1, wherein, when operating in the high detection threshold mode, the touch screen controller drives each force line of the touch sensitive display; and wherein, when operating in the low detection threshold mode, the touch screen controller drives a subset of the force lines of the touch sensitive display and scans each sense line of the touch sensitive display. 6. 
The electronic device of claim 5, wherein the subset of the force lines of the touch sensitive display includes force lines located toward at least one side of the touch sensitive display and not force lines located toward a center of the touch sensitive display. 7. The electronic device of claim 5, wherein the subset of the force lines of the touch sensitive display includes a pair of force lines adjacent each side of the touch sensitive display and excludes other force lines of the touch sensitive display. 8. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by determining whether the object is adjacent to a first side of the touch sensitive display and then determining whether the object is adjacent to a second side of the touch sensitive display opposite from the first side. 9. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent the touch sensitive display by simultaneously determining whether the object is adjacent to first and second opposing sides of the touch sensitive display. 10. The electronic device of claim 1, wherein the touch screen controller determines that the object is hovering above the touch sensitive display based on detection of the object being adjacent to the touch sensitive display but not being adjacent to first and second opposing sides of the touch sensitive display. 11. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
scanning a first plurality of adjacent sense lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of a difference between strength values for each sense line of the first plurality of adjacent sense lines being greater than a threshold; and scanning a second plurality of adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a difference between strength values for each sense line of the second plurality of adjacent sense lines being greater than the threshold. 12. The electronic device of claim 11, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the difference between strength values for each sense line of the first plurality of adjacent sense lines is less than the threshold or by determining that the difference between strength values for each sense line of the second plurality of adjacent sense lines is less than the threshold. 13. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
driving a first plurality of adjacent force lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of a difference between strength values for sense lines of the touch sensitive display intersecting different ones of the first plurality of adjacent force lines being greater than a threshold; and driving a second plurality of adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a difference between strength values for sense lines of the touch sensitive display intersecting different ones of the second plurality of adjacent force lines being greater than the threshold. 14. The electronic device of claim 13, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the difference between strength values for each sense line intersecting different ones of the first plurality of force lines is less than the threshold or by determining that the difference between strength values for each sense line intersecting different ones of the second plurality of force lines is less than the threshold 15. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
scanning first and second adjacent sense lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function a strength value for the first sense line for the first side being greater than a first threshold and a strength value for the second sense line for the first side being less than a second threshold, the second threshold being less than the first threshold; and scanning first and second adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a strength value for the first sense line for the second side being greater than the first threshold and a strength value for the second sense line for the second side being less than the second threshold. 16. The electronic device of claim 15, wherein the first threshold is predetermined;
and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value for the first sense line for the first side. 17. The electronic device of claim 15, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the strength value for at least the second sense line for the first side is greater than the second threshold or by determining that the strength value for the at least second sense line for the second side is greater than the second threshold. 18. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by at least one of:
driving first and second adjacent force lines for a first side of the touch sensitive display and determining that the object is adjacent the first side as a function of strength values for sense lines of the touch sensitive display intersecting the first force line for the first side being greater than a first threshold and strength values for sense lines of the touch sensitive display intersecting the second force line for the first side being less than a second threshold, the second threshold being less than the first threshold; and driving first and second adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the object is adjacent the second side as a function of a strength value for each sense line intersecting the first force line for the second side being greater than the first threshold and a strength value for each sense line intersecting the second force line for the second side being less than the second threshold. 19. The electronic device of claim 18, wherein the first threshold is predetermined;
and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value for the first sense line for the first side. 20. The electronic device of claim 18, wherein the touch screen controller determines that the object is hovering above the touch sensitive display by determining that the strength value for each sense line intersecting the second force line for the first side is greater than the second threshold or by determining that the strength value for each sense line intersecting the second force line for the second side is greater than the second threshold. 21. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by first determining whether the object is adjacent to a first side of the touch sensitive display, and then determining whether the object is adjacent to a second side of the touch sensitive display opposite to the first side. 22. The electronic device of claim 1, wherein the touch screen controller determines whether the object is adjacent to the touch sensitive display by simultaneously determining whether the object is adjacent to a first side of the touch sensitive display and whether the object is adjacent to a second side of the touch sensitive display opposite to the first side. 23. The electronic device of claim 1, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object has tapped the portable housing as a function of the object leaving a detection boundary area, returning to the detection boundary area, remaining within the detection boundary area for a given period of time, and then leaving the detection boundary area. 24. The electronic device of claim 1, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object has tapped the portable housing as a function of an additional portion of the object coming into contact with the portable housing outside of the detection boundary area and then leaving contact with the portable housing, within a given period of time. 25. The electronic device of claim 1, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object is a user's hand gripping the portable housing as a function of the object remaining within the detection boundary area for at least a threshold period of time. 26. The electronic device of claim 25, wherein the touch screen controller is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object has moved in a gesture as a function of a portion of the object leaving one detection boundary area and moving in a predetermined pattern while maintaining contact with the portable housing. 27. The electronic device of claim 25, wherein the touch screen controller is further configured to:
set at least one additional detection boundary area about each additional location where an additional portion of the object comes into contact with the portable housing after the determination that the object is the user's hand gripping the portable housing; determine that the additional portion of the object is the user's hand gripping the portable housing as a function of the additional portion of the object remaining within the at least one additional detection boundary area for at least an additional threshold period of time. 28. The electronic device of claim 25, further comprising processing circuitry; and
wherein the touch screen controller is configured to output to the processing circuitry each location where the object remained within an associated boundary area for at least the threshold period of time as a grip location. 29. An electronic device, comprising:
a portable housing; a touch sensitive display carried by the portable housing, the touch sensitive display including a plurality of sense lines; a touch screen controller coupled to the plurality of sense lines and configured to:
operate in a screen touch detection mode to detect a user's hand being in contact with the touch sensitive display as a function of reading strength values from at least some of the plurality of sense lines;
wherein, in the screen touch detection mode, the user's hand is detected as being in contact with the touch sensitive display as a function of read strength values being greater than a first threshold; operate in a portable housing touch detection mode to detect the user's hand being adjacent to the touch sensitive display, based on lack of detection of the user's hand being in contact with the touch sensitive display and as a function of reading strength values from at least some of the plurality of sense lines; wherein, in the portable housing touch detection mode, the user's hand is detected as being adjacent to the touch sensitive display as a function of read strength values being greater than a second threshold; wherein the second threshold is less than the first threshold; determine whether the user's hand is in contact with the portable housing by detecting whether the user's hand is adjacent opposite sides of the touch sensitive display, based on detection of the user's hand being adjacent to the touch sensitive display. 30. The electronic device of claim 29, wherein, when operating in the screen touch detection mode, the touch screen controller reads strength values from each sense line of the plurality thereof; and wherein, when operating in the portable housing touch detection mode, the touch screen controller reads strength values from a subset of the plurality of sense lines and not each of the plurality of sense lines. 31. The electronic device of claim 30, wherein the subset of the sense lines includes sense lines located toward sides of the touch sensitive display and not sense lines located toward a center of the touch sensitive display. 32. The electronic device of claim 30, wherein the subset of the sense lines includes a pair of sense lines adjacent each side of the touch sensitive display and excludes other sense lines. 33. 
The electronic device of claim 29, wherein the touch sensitive display includes a plurality of force lines; wherein, when operating in the screen touch detection mode, the touch screen controller drives each force line of the plurality thereof; and wherein, when operating in the portable housing touch detection mode, the touch screen controller drives a subset of the plurality of force lines and not each of the plurality of force lines but reads strength values from each of the plurality of sense lines. 34. The electronic device of claim 33, wherein the subset of the force lines includes force lines located toward sides of the touch sensitive display and not force lines located toward a center of the touch sensitive display. 35. The electronic device of claim 33, wherein the subset of the force lines includes a pair of force lines adjacent each side of the touch sensitive display and excludes other force lines. 36. The electronic device of claim 33, wherein the touch screen controller detects whether the user's hand is adjacent to the touch sensitive display by detecting whether the user's hand is adjacent to a first side of the touch sensitive display and then detecting whether the user's hand is adjacent to a second side of the touch sensitive display opposite from the first side. 37. The electronic device of claim 33, wherein the touch screen controller detects whether the user's hand is adjacent the touch sensitive display by simultaneously detecting whether the user's hand is adjacent to first and second opposing sides of the touch sensitive display. 38. The electronic device of claim 30, wherein the touch screen controller detects whether the user's hand is adjacent to the touch sensitive display by detecting whether the user's hand is adjacent to a first side of the touch sensitive display and then detecting whether the user's hand is adjacent to a second side of the touch sensitive display opposite from the first side. 39. 
The electronic device of claim 30, wherein the touch screen controller detects whether the user's hand is adjacent the touch sensitive display by simultaneously detecting whether the user's hand is adjacent to first and second opposing sides of the touch sensitive display. 40. The electronic device of claim 30, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values from a first pair of adjacent sense lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of a difference between the strength values for each sense line of the first pair of adjacent sense lines being greater than a threshold; and reading strength values from a second pair of adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of a difference between the strength values for each sense line of the second pair of adjacent sense lines being greater than the threshold. 41. The electronic device of claim 30, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values for first and second adjacent sense lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of the strength value for the first sense line for the first side being greater than a first threshold and the strength value for the second sense line for the first side being less than a second threshold, the second threshold being less than the first threshold; reading strength values for first and second adjacent sense lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of the strength value for the first sense line for the second side being greater than the first threshold and the strength value for the second sense line for the second side being less than the second threshold. 42. The electronic device of claim 41, wherein the first threshold is predetermined;
and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value for the first sense line for the first side. 43. The electronic device of claim 33, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values from sense lines intersecting a first pair of adjacent force lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of a difference between the strength values for each sense line intersecting the first pair of adjacent force lines being greater than a threshold; and reading strength values from sense lines intersecting a second pair of adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of a difference between the strength values for each sense line intersecting the second pair of adjacent force lines being greater than the threshold. 44. The electronic device of claim 33, wherein the touch screen controller determines whether the user's hand is adjacent to opposing sides of the touch sensitive display by:
reading strength values for sense lines intersecting first and second adjacent force lines for a first side of the touch sensitive display and determining that the user's hand is adjacent the first side as a function of the strength values for each sense line intersecting the first force line for the first side being greater than a first threshold and the strength values for each sense line intersecting the second force line for the first side being less than a second threshold, the second threshold being less than the first threshold; reading strength values for sense lines intersecting first and second adjacent force lines for a second side of the touch sensitive display opposite to the first side and determining that the user's hand is adjacent the second side as a function of the strength values for sense lines intersecting the first force line for the second side being greater than the first threshold and the strength values for the sense lines intersecting the second force line for the second side being less than the second threshold. 45. The electronic device of claim 44, wherein the first threshold is predetermined; and wherein the second threshold is defined as a predetermined percentage of a maximum possible strength value of the sense lines intersecting the first force line for the first side. 46. A touch screen controller chip for a touch sensitive display carried by a portable housing, the touch screen controller chip comprising:
circuitry configured to: operate in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display; operate in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display; and determine whether the object is in contact with a peripheral edge of the portable housing based on determination of the object being adjacent to the touch sensitive display. 47. The touch screen controller chip of claim 46, wherein, when operating in the high detection threshold mode, the circuitry scans each sense line of the touch sensitive display; and
wherein, when operating in the low detection threshold mode, the circuitry scans a subset of the sense lines of the touch sensitive display. 48. The touch screen controller chip of claim 47, wherein the subset of the sense lines of the touch sensitive display includes sense lines located toward at least one side of the touch sensitive display and not sense lines located toward a center of the touch sensitive display. 49. The touch screen controller chip of claim 47, wherein the subset of the sense lines of the touch sensitive display includes a plurality of sense lines adjacent at least one side of the touch sensitive display and excludes other sense lines of the touch sensitive display. 50. The touch screen controller chip of claim 46, wherein the circuitry is further configured to:
set detection boundary areas about each location where the object is in contact with the portable housing; determine that the object is a user's hand gripping the portable housing as a function of the object remaining within the detection boundary area for at least a threshold period of time. 51. A method of operating a touch screen controller for a touch sensitive display carried by a portable housing, the method comprising:
operating in a high detection threshold mode to determine whether an object is in contact with the touch sensitive display; operating in a low detection threshold mode to determine whether the object is adjacent to the touch sensitive display, based on lack of detection of the object being in contact with the touch sensitive display; and determining whether the object is in contact with a peripheral edge of the portable housing based on determination of the object being adjacent to the touch sensitive display. 52. The method of claim 51, wherein, when operating in the high detection threshold mode, each sense line of the touch sensitive display is scanned; and wherein, when operating in the low detection threshold mode, a subset of the sense lines of the touch sensitive display is scanned. 53. The method of claim 51, further comprising:
setting detection boundary areas about each location where the object is in contact with the portable housing; and determining that the object has tapped the portable housing as a function of the object leaving a detection boundary area, returning to the detection boundary area, remaining within the detection boundary area for a given period of time, and then leaving the detection boundary area. 54. The method of claim 51, further comprising:
setting detection boundary areas about each location where the object is in contact with the portable housing; and determining that the object has tapped the portable housing as a function of an additional portion of the object coming into contact with the portable housing outside of the detection boundary area and then leaving contact with the portable housing, within a given period of time. 55. The method of claim 51, further comprising:
setting detection boundary areas about each location where the object is in contact with the portable housing; determining that the object is a user's hand gripping the portable housing as a function of the object remaining within the detection boundary area for at least a threshold period of time. | 2,600 |
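The detection-boundary logic recited in claims 23-25 and 50-55 above can be sketched as a small classifier: a boundary area is set around the first contact location; the contact counts as a grip if it stays inside that area for a hold time, and as a tap if it leaves the boundary, returns, dwells briefly, and leaves again. The boundary radius, timings, and `(time, x, y)` sample format below are illustrative assumptions, not anything specified in the claims.

```python
from dataclasses import dataclass

# Hedged sketch of the boundary-area classification in the claims above.
# Radius, hold/dwell times, and the sample format are assumptions.

@dataclass
class BoundaryArea:
    x: float
    y: float
    radius: float = 5.0  # assumed boundary radius around the contact point

    def contains(self, x: float, y: float) -> bool:
        return (x - self.x) ** 2 + (y - self.y) ** 2 <= self.radius ** 2

def classify_contact(samples, grip_hold_s=1.0, tap_dwell_s=0.2):
    """Classify one contact from (time, x, y) samples; first sample sets the area."""
    t0, x0, y0 = samples[0]
    area = BoundaryArea(x0, y0)
    times = [t for t, _, _ in samples]
    inside = [area.contains(x, y) for _, x, y in samples]
    if all(inside):
        # object never left its boundary area: grip after the hold time
        return "grip" if times[-1] - t0 >= grip_hold_s else "none"
    # compress the inside/outside flags into runs of [flag, start, end] times
    runs = []
    for t, flag in zip(times, inside):
        if not runs or runs[-1][0] != flag:
            runs.append([flag, t, t])
        else:
            runs[-1][2] = t
    # tap: inside -> left -> returned (dwelling long enough) -> left again
    if [r[0] for r in runs] == [True, False, True, False]:
        if runs[2][2] - runs[2][1] >= tap_dwell_s:
            return "tap"
    return "none"
```

A trace that stays near its starting point past the hold time classifies as a grip, while a leave/return/dwell/leave trace classifies as a tap.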
10,772 | 10,772 | 14,941,924 | 2,612 | Among other things, one or more client devices, techniques, and/or systems are provided for orientation selection. A first user and a second user are detected within a range of a detection component, such as a camera, of a client device. A position of a face of the first user is identified relative to a screen of the client device (e.g., near a right side). A second position of a second face of the second user is identified relative to the screen (e.g., near a left side). An element of an application, displayed by the client device, is presented in a first orientation (e.g., a landscape orientation facing to the right side) based upon the position of the face and a second element of the application is presented in a second orientation (e.g., the landscape orientation facing to the left side) based upon the second position of the second face. | 1. A method of orientation selection, comprising:
detecting a first user within a range of a detection component of a client device; detecting a second user within the range of the detection component; identifying a position of a face of the first user relative to a screen of the client device; identifying a second position of a second face of the second user relative to the screen; presenting an element of an application, displayed by the client device, in a first orientation based upon the position of the face; and presenting a second element of the application in a second orientation based upon the second position of the second face. 2. The method of claim 1, comprising:
identifying an identity of the first user utilizing facial recognition; identifying a second identity of the second user utilizing facial recognition; presenting information in the first orientation based upon the identity of the first user; and presenting second information in the second orientation based upon the second identity of the second user. 3. The method of claim 2, comprising:
receiving an input corresponding to the identity of the first user; and receiving a second input corresponding to the second identity of the second user. 4. The method of claim 2, comprising:
registering the first user and an interest of the first user based upon the identity; and registering the second user and a second interest of the second user based upon the second identity. 5. The method of claim 1, comprising:
identifying a role of the first user utilizing a symbol associated with the first user; identifying a second role of the second user utilizing a second symbol associated with the second user; presenting information in the first orientation based upon the role of the first user; and presenting second information in the second orientation based upon the second role of the second user. 6. The method of claim 5, at least one of the role or the second role comprising at least one of:
a job, a field, a title, or a specialty. 7. The method of claim 1, comprising:
identifying a distance of the first user from the client device; and presenting the element in a first size based upon the distance. 8. The method of claim 1, comprising:
identifying a second distance of the second user from the client device; and presenting the second element in a second size based upon the second distance. 9. The method of claim 1, comprising at least one of:
responsive to a light sensor, of the client device, detecting a light within a consistency range for a light duration threshold, deactivating the detection component; or responsive to at least one of a motion sensor or a magnetic sensor, of the client device, detecting stability of the client device for a stability duration threshold, deactivating the detection component. 10. The method of claim 1, comprising:
facilitating collaborative interaction for two or more users, with the application on the client device, comprising presenting elements in orientations corresponding to positions of the two or more users. 11. The method of claim 1, comprising:
responsive to detecting a change in position of the face of the first user, at least one of reorienting, repositioning, or resizing the element. 12. A client device for orientation selection, the client device comprising:
a processor; a display; and a memory storing instructions that, when executed on the processor, provide a system comprising:
an orientation selection component configured to:
detect a first user within a range of a detection component of the client device;
detect a second user within the range of the detection component;
identify a position of a face of the first user relative to a screen of the client device;
identify an identity of the first user utilizing facial recognition;
identify a second position of a second face of the second user relative to the screen;
identify a second identity of the second user utilizing facial recognition;
present information, based upon the identity of the first user, on the client device, in a first orientation based upon the position of the face; and
present second information, based upon the second identity of the second user, on the client device, in a second orientation based upon the second position of the second face. 13. The client device of claim 12, the orientation selection component configured to:
receive an input of the identity of the first user; and receive an input of the second identity of the second user. 14. The client device of claim 13, the orientation selection component configured to at least one of:
register the first user and an interest of the first user; or register the second user and a second interest of the second user. 15. The client device of claim 12, the orientation selection component configured to:
identify a distance of the first user from the client device; and present the element in a first size based upon the distance. 16. The client device of claim 12, the orientation selection component configured to:
identify a second distance of the second user from the client device; and present the second element in a second size based upon the second distance. 17. The client device of claim 12, the orientation selection component configured to:
responsive to a light sensor, of the client device, detecting a light within a consistency range for a light duration threshold, deactivate the detection component; and responsive to the light sensor detecting the light outside the consistency range, reactivate the detection component. 18. The client device of claim 12, the orientation selection component configured to:
responsive to at least one of a motion sensor or a magnetic sensor, of the client device, detecting stability of the client device for a stability duration threshold, deactivate the detection component; and responsive to at least one of the motion sensor or the magnetic sensor detecting an instability of the client device, reactivate the detection component. 19. A computer readable medium comprising instructions which when executed perform a method for orientation selection, comprising:
detecting a first user within a range of a detection component of a client device; detecting a second user within the range of the detection component; identifying a position of a face of the first user relative to a screen of the client device; identifying a role of the first user utilizing a symbol associated with the first user; identifying a second position of a second face of the second user relative to the screen; identifying a second role of the second user utilizing a second symbol associated with the second user; presenting information, based upon the role of the first user, on the client device, in a first orientation based upon the position of the face; and presenting second information, based upon the second role of the second user, on the client device, in a second orientation based upon the second position of the second face. 20. The method of claim 19, at least one of the role or the second role comprising at least one of:
a job, a field, a title, or a specialty. | Among other things, one or more client devices, techniques, and/or systems are provided for orientation selection. A first user and a second user are detected within a range of a detection component, such as a camera, of a client device. A position of a face of the first user is identified relative to a screen of the client device (e.g., near a right side). A second position of a second face of the second user is identified relative to the screen (e.g., near a left side). An element of an application, displayed by the client device, is presented in a first orientation (e.g., a landscape orientation facing to the right side) based upon the position of the face and a second element of the application is presented in a second orientation (e.g., the landscape orientation facing to the left side) based upon the second position of the second face. 1. A method of orientation selection, comprising:
detecting a first user within a range of a detection component of a client device; detecting a second user within the range of the detection component; identifying a position of a face of the first user relative to a screen of the client device; identifying a second position of a second face of the second user relative to the screen; presenting an element of an application, displayed by the client device, in a first orientation based upon the position of the face; and presenting a second element of the application in a second orientation based upon the second position of the second face. 2. The method of claim 1, comprising:
identifying an identity of the first user utilizing facial recognition; identifying a second identity of the second user utilizing facial recognition; presenting information in the first orientation based upon the identity of the first user; and presenting second information in the second orientation based upon the second identity of the second user. 3. The method of claim 2, comprising:
receiving an input corresponding to the identity of the first user; and receiving a second input corresponding to the second identity of the second user. 4. The method of claim 2, comprising:
registering the first user and an interest of the first user based upon the identity; and registering the second user and a second interest of the second user based upon the second identity. 5. The method of claim 1, comprising:
identifying a role of the first user utilizing a symbol associated with the first user; identifying a second role of the second user utilizing a second symbol associated with the second user; presenting information in the first orientation based upon the role of the first user; and presenting second information in the second orientation based upon the second role of the second user. 6. The method of claim 5, at least one of the role or the second role comprising at least one of:
a job, a field, a title, or a specialty. 7. The method of claim 1, comprising:
identifying a distance of the first user from the client device; and presenting the element in a first size based upon the distance. 8. The method of claim 1, comprising:
identifying a second distance of the second user from the client device; and presenting the second element in a second size based upon the second distance. 9. The method of claim 1, comprising at least one of:
responsive to a light sensor, of the client device, detecting a light within a consistency range for a light duration threshold, deactivating the detection component; or responsive to at least one of a motion sensor or a magnetic sensor, of the client device, detecting stability of the client device for a stability duration threshold, deactivating the detection component. 10. The method of claim 1, comprising:
facilitating collaborative interaction for two or more users, with the application on the client device, comprising presenting elements in orientations corresponding to positions of the two or more users. 11. The method of claim 1, comprising:
responsive to detecting a change in position of the face of the first user, at least one of reorienting, repositioning, or resizing the element. 12. A client device for orientation selection, the client device comprising:
a processor; a display; and a memory storing instructions that, when executed on the processor, provide a system comprising:
an orientation selection component configured to:
detect a first user within a range of a detection component of the client device;
detect a second user within the range of the detection component;
identify a position of a face of the first user relative to a screen of the client device;
identify an identity of the first user utilizing facial recognition;
identify a second position of a second face of the second user relative to the screen;
identify a second identity of the second user utilizing facial recognition;
present information, based upon the identity of the first user, on the client device, in a first orientation based upon the position of the face; and
present second information, based upon the second identity of the second user, on the client device, in a second orientation based upon the second position of the second face. 13. The client device of claim 12, the orientation selection component configured to:
receive an input of the identity of the first user; and receive an input of the second identity of the second user. 14. The client device of claim 13, the orientation selection component configured to at least one of:
register the first user and an interest of the first user; or register the second user and a second interest of the second user. 15. The client device of claim 12, the orientation selection component configured to:
identify a distance of the first user from the client device; and present the element in a first size based upon the distance. 16. The client device of claim 12, the orientation selection component configured to:
identify a second distance of the second user from the client device; and present the second element in a second size based upon the second distance. 17. The client device of claim 12, the orientation selection component configured to:
responsive to a light sensor, of the client device, detecting a light within a consistency range for a light duration threshold, deactivate the detection component; and responsive to the light sensor detecting the light outside the consistency range, reactivate the detection component. 18. The client device of claim 12, the orientation selection component configured to:
responsive to at least one of a motion sensor or a magnetic sensor, of the client device, detecting stability of the client device for a stability duration threshold, deactivate the detection component; and responsive to at least one of the motion sensor or the magnetic sensor detecting an instability of the client device, reactivate the detection component. 19. A computer readable medium comprising instructions which when executed perform a method for orientation selection, comprising:
detecting a first user within a range of a detection component of a client device; detecting a second user within the range of the detection component; identifying a position of a face of the first user relative to a screen of the client device; identifying a role of the first user utilizing a symbol associated with the first user; identifying a second position of a second face of the second user relative to the screen; identifying a second role of the second user utilizing a second symbol associated with the second user; presenting information, based upon the role of the first user, on the client device, in a first orientation based upon the position of the face; and presenting second information, based upon the second role of the second user, on the client device, in a second orientation based upon the second position of the second face. 20. The method of claim 19, at least one of the role or the second role comprising at least one of:
a job, a field, a title, or a specialty. | 2,600 |
10,773 | 10,773 | 15,903,223 | 2,626 | A data transmitting and receiving device includes: a data transmitting circuit transmitting a clock signal and a data signal synchronized to the clock signal; and a data receiving circuit receiving the clock signal and the data signal; wherein: the data receiving circuit includes a phase error detection circuit detecting a phase error between the data signal and the clock signal; and the data transmitting circuit includes a phase adjusting circuit adjusting a phase of at least one of the clock signal and the data signal based on the phase error. | 1. A data transmitting and receiving device, comprising:
a data transmitting circuit transmitting a clock signal and a data signal synchronized to the clock signal; and a data receiving circuit receiving the clock signal and the data signal; wherein: the data receiving circuit includes a phase error detection circuit detecting a phase error between the data signal and the clock signal; and the data transmitting circuit includes a phase adjusting circuit adjusting a phase of at least one of the clock signal and the data signal based on the phase error. 2. The data transmitting and receiving device according to claim 1, wherein the phase error detection circuit feeds phase error information regarding the detected phase error back to the data transmitting circuit. 3. The data transmitting and receiving device according to claim 1, wherein:
the data transmitting circuit transmits the clock signal and the data signal adjusted by the phase adjusting circuit to the data receiving circuit; and the data receiving circuit includes a data synchronizing circuit synchronizing the clock signal and the data signal adjusted by the phase adjusting circuit. 4. A display apparatus, comprising:
the data transmitting and receiving device according to claim 1; and a liquid crystal panel including a plurality of display pixels each including a pixel electrode; wherein: the data transmitting circuit generates the data signal from image data; and the data receiving circuit writes the data signal to the pixel electrode. | A data transmitting and receiving device includes: a data transmitting circuit transmitting a clock signal and a data signal synchronized to the clock signal; and a data receiving circuit receiving the clock signal and the data signal; wherein: the data receiving circuit includes a phase error detection circuit detecting a phase error between the data signal and the clock signal; and the data transmitting circuit includes a phase adjusting circuit adjusting a phase of at least one of the clock signal and the data signal based on the phase error. 1. A data transmitting and receiving device, comprising:
a data transmitting circuit transmitting a clock signal and a data signal synchronized to the clock signal; and a data receiving circuit receiving the clock signal and the data signal; wherein: the data receiving circuit includes a phase error detection circuit detecting a phase error between the data signal and the clock signal; and the data transmitting circuit includes a phase adjusting circuit adjusting a phase of at least one of the clock signal and the data signal based on the phase error. 2. The data transmitting and receiving device according to claim 1, wherein the phase error detection circuit feeds phase error information regarding the detected phase error back to the data transmitting circuit. 3. The data transmitting and receiving device according to claim 1, wherein:
the data transmitting circuit transmits the clock signal and the data signal adjusted by the phase adjusting circuit to the data receiving circuit; and the data receiving circuit includes a data synchronizing circuit synchronizing the clock signal and the data signal adjusted by the phase adjusting circuit. 4. A display apparatus, comprising:
the data transmitting and receiving device according to claim 1; and a liquid crystal panel including a plurality of display pixels each including a pixel electrode; wherein: the data transmitting circuit generates the data signal from image data; and the data receiving circuit writes the data signal to the pixel electrode. | 2,600 |
10,774 | 10,774 | 15,674,733 | 2,649 | A configurable passive mixer is described herein. According to one exemplary embodiment, a passive mixer for a wireless receiver comprises a plurality of passive mixer cores coupled in parallel with each mixer core configured to receive a same set of radio frequency input signals and a separately driven set of local oscillator input signals. Further, each mixer core is configured to be separately enabled or disabled so that the passive mixer can be selectively configured during operation to convert the same set of radio frequency input signals to a set of downconverted output signals that satisfy a certain performance requirement or performance parameter of the passive mixer. | 1. A passive mixer for a wireless receiver, comprising:
a plurality of passive mixer cores coupled in parallel with each mixer core configured to receive a same set of radio frequency input signals and a separately driven set of local oscillator input signals, each mixer core configured to be separately enabled or disabled so that the passive mixer can be selectively configured during operation to convert the same set of radio frequency input signals to a set of downconverted output signals that satisfy a certain performance requirement or performance parameter of the passive mixer. 2. The passive mixer of claim 1, wherein each mixer core is configured to be separately enabled or disabled so that it does not conduct current when disabled. 3. The passive mixer of claim 2, wherein each mixer core includes a bias voltage that is controllable to enable or disable that mixer core. 4. The passive mixer of claim 1, further comprising:
a plurality of local oscillator drive circuits with each drive circuit configured to receive a same set of local oscillator input signals, each drive circuit being separately enabled or disabled and configured to drive the same set of local oscillator input signals to obtain one of the separately driven set of local oscillator input signals for input to a corresponding mixer core. 5. The passive mixer of claim 1, further comprising:
a controller configured to selectively configure the passive mixer by selectively enabling one or more mixer cores and disabling any remaining mixer cores. 6. The passive mixer of claim 5, wherein said selectively enabling and disabling includes the controller being further configured to:
control, for each local oscillator drive circuit, a separate drive circuit enable or disable signal to enable or disable that local oscillator drive circuit; and control, for each mixer core, a separate mixer core enable or disable signal to enable or disable that mixer core. 7. The passive mixer of claim 6, wherein the separate mixer core enable or disable signal controls a bias voltage of that mixer core to enable or disable that mixer core. 8. The passive mixer of claim 1 wherein each mixer core includes a plurality of transistors, wherein a channel width of each transistor for one mixer core is different from a channel width of each transistor for another mixer core. 9. The passive mixer of claim 1, wherein the performance parameter of the passive mixer includes at least one of a linearity, a power consumption and a conversion gain. 10. The passive mixer of claim 1, wherein the performance requirement of the passive mixer includes at least one of a second-order intermodulation product (IM2) and a third-order intermodulation product (IM3). 11. The passive mixer of claim 1, wherein the plurality of mixer cores connected in parallel include:
a first set of mixer cores connected in parallel for an in-phase passive mixer; and a second set of mixer cores connected in parallel for a quadrature-phase passive mixer. 12. The passive mixer of claim 1, wherein each passive mixer core includes a complementary passive mixer core comprising an N-mixer of parallel connected cascaded NMOS transistors connected in parallel to a P-mixer of parallel connected cascaded PMOS transistors. 13. The passive mixer of claim 12, wherein the passive mixer is selectively configured by selectively enabling the P-mixer in one or more of the passive mixer cores and selectively disabling the P-mixer in the remaining mixer cores. 14. The passive mixer of claim 12, wherein the effective transistor size of the enabled P-mixer differs from the effective transistor size of the enabled N-mixer to change a balance ratio between the N-mixer and P-mixer in one or more of the passive mixer cores. 15. The passive mixer of claim 1 wherein each passive mixer core includes an N-mixer of cascaded NMOS transistors connected in parallel. 16. A method by a controller for controlling a passive mixer for a wireless receiver, the passive mixer having a plurality of passive mixer cores coupled in parallel, with each mixer core configured to receive a same set of radio frequency input signals and a separately driven set of local oscillator input signals, the method comprising:
selectively configuring the passive mixer to convert the same set of radio frequency input signals to a set of downconverted output signals that satisfy a certain performance requirement or performance parameter of the passive mixer by selectively enabling one or more mixer cores and disabling any remaining mixer cores. 17. The method of claim 16, wherein said selectively enabling and disabling includes:
controlling, for each mixer core, a separate mixer core enable or disable signal to enable or disable that mixer core. 18. The method of claim 16, wherein said selectively enabling and disabling includes:
controlling, for each local oscillator drive circuit, a separate drive circuit enable or disable signal to enable or disable that local oscillator drive circuit. 19. The method of claim 16, wherein said selectively enabling and disabling includes:
controlling, for each mixer core, a separate bias voltage of that mixer core to enable or disable that mixer core. 20. The method of claim 16, wherein each mixer core is separately enabled or disabled so that it does not conduct current when disabled. | A configurable passive mixer is described herein. According to one exemplary embodiment, a passive mixer for a wireless receiver comprises a plurality of passive mixer cores coupled in parallel with each mixer core configured to receive a same set of radio frequency input signals and a separately driven set of local oscillator input signals. Further, each mixer core is configured to be separately enabled or disabled so that the passive mixer can be selectively configured during operation to convert the same set of radio frequency input signals to a set of downconverted output signals that satisfy a certain performance requirement or performance parameter of the passive mixer. 1. A passive mixer for a wireless receiver, comprising:
a plurality of passive mixer cores coupled in parallel with each mixer core configured to receive a same set of radio frequency input signals and a separately driven set of local oscillator input signals, each mixer core configured to be separately enabled or disabled so that the passive mixer can be selectively configured during operation to convert the same set of radio frequency input signals to a set of downconverted output signals that satisfy a certain performance requirement or performance parameter of the passive mixer. 2. The passive mixer of claim 1, wherein each mixer core is configured to be separately enabled or disabled so that it does not conduct current when disabled. 3. The passive mixer of claim 2, wherein each mixer core includes a bias voltage that is controllable to enable or disable that mixer core. 4. The passive mixer of claim 1, further comprising:
a plurality of local oscillator drive circuits with each drive circuit configured to receive a same set of local oscillator input signals, each drive circuit being separately enabled or disabled and configured to drive the same set of local oscillator input signals to obtain one of the separately driven set of local oscillator input signals for input to a corresponding mixer core. 5. The passive mixer of claim 1, further comprising:
a controller configured to selectively configure the passive mixer by selectively enabling one or more mixer cores and disabling any remaining mixer cores. 6. The passive mixer of claim 5, wherein said selectively enabling and disabling includes the controller being further configured to:
control, for each local oscillator drive circuit, a separate drive circuit enable or disable signal to enable or disable that local oscillator drive circuit; and control, for each mixer core, a separate mixer core enable or disable signal to enable or disable that mixer core. 7. The passive mixer of claim 6, wherein the separate mixer core enable or disable signal controls a bias voltage of that mixer core to enable or disable that mixer core. 8. The passive mixer of claim 1 wherein each mixer core includes a plurality of transistors, wherein a channel width of each transistor for one mixer core is different from a channel width of each transistor for another mixer core. 9. The passive mixer of claim 1, wherein the performance parameter of the passive mixer includes at least one of a linearity, a power consumption and a conversion gain. 10. The passive mixer of claim 1, wherein the performance requirement of the passive mixer includes at least one of a second-order intermodulation product (IM2) and a third-order intermodulation product (IM3). 11. The passive mixer of claim 1, wherein the plurality of mixer cores connected in parallel include:
a first set of mixer cores connected in parallel for an in-phase passive mixer; and a second set of mixer cores connected in parallel for a quadrature-phase passive mixer. 12. The passive mixer of claim 1, wherein each passive mixer core includes a complementary passive mixer core comprising an N-mixer of parallel connected cascaded NMOS transistors connected in parallel to a P-mixer of parallel connected cascaded PMOS transistors. 13. The passive mixer of claim 12, wherein the passive mixer is selectively configured by selectively enabling the P-mixer in one or more of the passive mixer cores and selectively disabling the P-mixer in the remaining mixer cores. 14. The passive mixer of claim 12, wherein the effective transistor size of the enabled P-mixer differs from the effective transistor size of the enabled N-mixer to change a balance ratio between the N-mixer and P-mixer in one or more of the passive mixer cores. 15. The passive mixer of claim 1 wherein each passive mixer core includes an N-mixer of cascaded NMOS transistors connected in parallel. 16. A method by a controller for controlling a passive mixer for a wireless receiver, the passive mixer having a plurality of passive mixer cores coupled in parallel, with each mixer core configured to receive a same set of radio frequency input signals and a separately driven set of local oscillator input signals, the method comprising:
selectively configuring the passive mixer to convert the same set of radio frequency input signals to a set of downconverted output signals that satisfy a certain performance requirement or performance parameter of the passive mixer by selectively enabling one or more mixer cores and disabling any remaining mixer cores. 17. The method of claim 16, wherein said selectively enabling and disabling includes:
controlling, for each mixer core, a separate mixer core enable or disable signal to enable or disable that mixer core. 18. The method of claim 16, wherein said selectively enabling and disabling includes:
controlling, for each local oscillator drive circuit, a separate drive circuit enable or disable signal to enable or disable that local oscillator drive circuit. 19. The method of claim 16, wherein said selectively enabling and disabling includes:
controlling, for each mixer core, a separate bias voltage of that mixer core to enable or disable that mixer core. 20. The method of claim 16, wherein each mixer core is separately enabled or disabled so that it does not conduct current when disabled. | 2,600 |
10,775 | 10,775 | 15,085,437 | 2,647 | One embodiment provides a method, including: detecting, at an electronic device, an event has occurred; detecting, using a device sensor, that the electronic device is proximate to at least one other person; accessing, in a storage location, a rule set including a rule regarding the detecting that the electronic device is proximate to at least one other person; identifying, using a processor of the electronic device, a type of notification for the event based on the rule set; and providing, using an output device of the electronic device, a notification of the type identified. Other aspects are described and claimed. | 1. A method, comprising:
detecting, at an electronic device, an event has occurred; detecting, using a device sensor, that the electronic device is proximate to at least one other person; accessing, in a storage location, a rule set comprising a rule regarding the detecting that the electronic device is proximate to at least one other person; identifying, using a processor of the electronic device, a type of notification for the event based on the rule set; and providing, using an output device of the electronic device, a notification of the type identified. 2. The method of claim 1, further comprising:
determining a user response to the notification; and modifying the rule set based on the user response. 3. The method of claim 1, wherein the device sensor is used to detect another device;
wherein the type of notification is modified based on the detection of a particular device identified in the rule set. 4. The method of claim 3, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular device. 5. The method of claim 1, wherein the device sensor is used to detect a person;
wherein the type of notification is modified based on the detection of a particular person identified in the rule set. 6. The method of claim 5, wherein the device sensor is used to collect biometric data selected from the group consisting of electromyography data, sub-audible data, microphone data, and camera data. 7. The method of claim 5, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular person. 8. The method of claim 1, wherein the device sensor is used to detect a geographic location;
wherein the type of notification is modified based on the detection of a particular geographic location identified in the rule set. 9. The method of claim 2, wherein the event comprises a message being received, and wherein the determining a user response to the notification comprises detecting that the message has been opened. 10. The method of claim 1, further comprising overriding the notification in response to user input. 11. A system, comprising:
a device sensor; a processor operatively coupled to the device sensor; and a memory device that stores instructions executable by the processor to: detect an event has occurred; detect, using the device sensor, that a user of the electronic device is proximate to at least one other person; access a rule set comprising a rule regarding the detection that the electronic device is proximate to at least one other person; identify a type of notification for the event based on the rule set; and provide a notification of the type identified. 12. The system of claim 11, wherein the instructions are executable by the processor to:
determine a user response to the notification; and modify the rule set based on the user response. 13. The system of claim 11, wherein the device sensor is used to detect another device;
wherein the type of notification is modified based on the detection of a particular device identified in the rule set. 14. The system of claim 13, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular device. 15. The system of claim 11, wherein the device sensor is used to detect a person;
wherein the type of notification is modified based on the detection of a particular person identified in the rule set. 16. The system of claim 15, wherein the device sensor is used to collect biometric data selected from the group consisting of electromyography data, sub-audible data, microphone data and camera data. 17. The system of claim 15, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular person. 18. The system of claim 11, wherein the device sensor is used to detect a geographic location;
wherein the type of notification is modified based on the detection of a particular geographic location identified in the rule set. 19. The system of claim 12, wherein the event comprises a message being received, and wherein the instructions that determine a user response to the notification comprise instructions that detect that the message has been opened. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor and comprising: code that detects, at an electronic device, an event has occurred; code that detects, using a device sensor, that a user of the electronic device is proximate to at least one other person; code that accesses, in a storage location, a rule set comprising a rule regarding the detection that the electronic device is proximate to at least one other person; code that identifies, using a processor of the electronic device, a type of notification for the event based on the rule set; and code that provides, using an output device of the electronic device, a notification of the type identified. | One embodiment provides a method, including: detecting, at an electronic device, an event has occurred; detecting, using a device sensor, that the electronic device is proximate to at least one other person; accessing, in a storage location, a rule set including a rule regarding the detecting that the electronic device is proximate to at least one other person; identifying, using a processor of the electronic device, a type of notification for the event based on the rule set; and providing, using an output device of the electronic device, a notification of the type identified. Other aspects are described and claimed. 1. A method, comprising:
detecting, at an electronic device, an event has occurred; detecting, using a device sensor, that the electronic device is proximate to at least one other person; accessing, in a storage location, a rule set comprising a rule regarding the detecting that the electronic device is proximate to at least one other person; identifying, using a processor of the electronic device, a type of notification for the event based on the rule set; and providing, using an output device of the electronic device, a notification of the type identified. 2. The method of claim 1, further comprising:
determining a user response to the notification; and modifying the rule set based on the user response. 3. The method of claim 1, wherein the device sensor is used to detect another device;
wherein the type of notification is modified based on the detection of a particular device identified in the rule set. 4. The method of claim 3, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular device. 5. The method of claim 1, wherein the device sensor is used to detect a person;
wherein the type of notification is modified based on the detection of a particular person identified in the rule set. 6. The method of claim 5, wherein the device sensor is used to collect biometric data selected from the group consisting of electromyography data, sub-audible data, microphone data, and camera data. 7. The method of claim 5, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular person. 8. The method of claim 1, wherein the device sensor is used to detect a geographic location;
wherein the type of notification is modified based on the detection of a particular geographic location identified in the rule set. 9. The method of claim 2, wherein the event comprises a message being received, and wherein the determining a user response to the notification comprises detecting that the message has been opened. 10. The method of claim 1, further comprising overriding the notification in response to user input. 11. A system, comprising:
a device sensor; a processor operatively coupled to the device sensor; and a memory device that stores instructions executable by the processor to: detect an event has occurred; detect, using the device sensor, that a user of the electronic device is proximate to at least one other person; access a rule set comprising a rule regarding the detection that the electronic device is proximate to at least one other person; identify a type of notification for the event based on the rule set; and provide a notification of the type identified. 12. The system of claim 11, wherein the instructions are executable by the processor to:
determine a user response to the notification; and modify the rule set based on the user response. 13. The system of claim 11, wherein the device sensor is used to detect another device;
wherein the type of notification is modified based on the detection of a particular device identified in the rule set. 14. The system of claim 13, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular device. 15. The system of claim 11, wherein the device sensor is used to detect a person;
wherein the type of notification is modified based on the detection of a particular person identified in the rule set. 16. The system of claim 15, wherein the device sensor is used to collect biometric data selected from the group consisting of electromyography data, sub-audible data, microphone data and camera data. 17. The system of claim 15, wherein the type of notification for the event based on the rule set is changed from a default type selected in the rule set based on the detection of the particular person. 18. The system of claim 11, wherein the device sensor is used to detect a geographic location;
wherein the type of notification is modified based on the detection of a particular geographic location identified in the rule set. 19. The system of claim 12, wherein the event comprises a message being received, and wherein the instructions that determine a user response to the notification comprise instructions that detect that the message has been opened. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor and comprising: code that detects, at an electronic device, an event has occurred; code that detects, using a device sensor, that a user of the electronic device is proximate to at least one other person; code that accesses, in a storage location, a rule set comprising a rule regarding the detection that the electronic device is proximate to at least one other person; code that identifies, using a processor of the electronic device, a type of notification for the event based on the rule set; and code that provides, using an output device of the electronic device, a notification of the type identified. | 2,600 |
10,776 | 10,776 | 15,559,106 | 2,622 | A patient monitoring system includes an input unit and a patient monitor. The input unit has a first display section that displays a setting screen for the patient monitor, an input section that receives an input of setting information for the patient monitor, and a first communication section that transmits the input setting information to the patient monitor. The patient monitor has a second communication section that receives a biological signal of a patient and the setting information from the input unit, a second display section that displays vital sign information of the patient, and a controller that converts the biological signal to the vital sign information to control a display on the second display section, and changes, upon receipt of the setting information, a setting of the patient monitor in a state in which the vital sign information is displayed on the second display section. | 1. An input unit comprising:
a display section configured to display a setting screen for changing a setting of a patient monitor connected to the input unit; an input section configured to receive an input of setting information for changing the setting of the patient monitor; and a communication section configured to transmit a biological signal of a patient acquired from a sensor to the patient monitor to allow the patient monitor to display vital sign information of the patient and also to transmit the setting information that is input from the input section to the patient monitor. 2. The input unit according to claim 1, wherein the display section and the input section are integrated as a touch panel. 3. The input unit according to claim 1, wherein the communication section is configured to receive a current setting status from the patient monitor, and
wherein the display section is configured to display the setting screen for the patient monitor, the setting screen being based on the received setting status. 4. The input unit according to claim 3, further comprising a storage section configured to store the setting status received by the communication section. 5. The input unit according to claim 2, wherein the touch panel is configured to display a cursor, and
wherein the communication section is configured to transmit information on an operation of the cursor to the patient monitor. 6. The input unit according to claim 5, wherein the communication section is configured to transmit coordinates, to which the cursor is operated on the touch panel, to the patient monitor, and to receive setting screen information corresponding to the coordinates from the patient monitor, and
the display section is configured to display the setting screen corresponding to the setting screen information. 7. A patient monitor comprising:
a communication section configured to receive a biological signal of a patient and setting information from an input unit; a display section configured to display vital sign information of the patient; and a controller configured to convert the biological signal to the vital sign information to control a display on the display section, and to change, upon receipt of the setting information, a setting of the patient monitor in a state in which the vital sign information is displayed on the display section. 8. The patient monitor according to claim 7, wherein the controller displays, upon receipt of an input of cursor operation information from the input unit, a cursor at a corresponding position on the display section based on the cursor operation information. 9. The patient monitor according to claim 7, wherein the communication section is configured to transmit a current setting status to the input unit. 10. The patient monitor according to claim 7, further comprising an input section configured to receive an input from an operator,
wherein the controller is configured to display, upon receipt of the setting information from the input section, the vital sign information and a setting screen in a superimposed manner on the display section. 11. A patient monitoring system comprising an input unit and a patient monitor,
wherein the input unit comprises: a first display section configured to display a setting screen for changing a setting of the patient monitor connected to the input unit; an input section configured to receive an input of setting information for changing the setting of the patient monitor; and a first communication section configured to transmit a biological signal of a patient acquired from a sensor and the setting information that is input from the input section to the patient monitor, and wherein the patient monitor comprises: a second communication section configured to receive the biological signal and the setting information from the input unit; a second display section configured to display vital sign information of the patient; and a controller configured to convert the biological signal to the vital sign information to control a display on the second display section, and to change, upon receipt of the setting information, a setting of the patient monitor in a state in which the vital sign information is displayed on the second display section. 12. The input unit according to claim 1, wherein the communication section is implemented via at least one processor and at least one memory. 13. The input unit according to claim 1, wherein the communication section is configured to transmit the setting information that is input from the input section to the patient monitor to change the setting of the patient monitor in a state in which the vital sign information is displayed on the patient monitor. 14. The input unit according to claim 1, further comprising a storage section storing information defining the setting screen for the patient monitor. 15. The input unit according to claim 3, wherein the display section is configured to display the setting screen such that currently set values are distinguishable from other values. 16. 
The patient monitor according to claim 7, wherein the communication section and the controller are each implemented via at least one processor and at least one memory. 17. The patient monitor according to claim 7, wherein the display section is configured to display the vital sign information including at least one of a vital sign waveform and a measurement value. 18. The patient monitoring system according to claim 11, wherein the first communication section of the input unit is implemented via at least one processor and at least one memory, and
wherein the second communication section and the controller of the patient monitor are each implemented via at least another processor and at least another memory. | 2,600
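The split claimed in this record — an input unit that forwards both the biological signal and setting changes, and a patient monitor that applies setting changes without interrupting the vital-sign display — can be sketched minimally as below. This is an illustrative sketch only: all class, method, and field names (`PatientMonitor`, `InputUnit`, `rr_interval_s`, `alarm_high_bpm`) are hypothetical and not from the patent, and the R-R-interval-to-heart-rate conversion stands in for the claimed signal-to-vital-sign conversion.

```python
# Hypothetical sketch of claims 7 and 11: the monitor converts received
# biological samples to vital-sign information, and applies setting changes
# while the currently displayed vital-sign information stays in place.

class PatientMonitor:
    def __init__(self):
        self.settings = {"alarm_high_bpm": 120}  # illustrative default setting
        self.displayed_vitals = None             # what the display section shows

    def receive(self, message):
        if message["type"] == "signal":
            # Convert the raw biological signal (an R-R interval, as a stand-in)
            # to vital-sign information and update the display.
            self.displayed_vitals = {"heart_rate": round(60.0 / message["rr_interval_s"])}
        elif message["type"] == "setting":
            # Change the setting; the vital-sign display is left untouched.
            self.settings.update(message["values"])

class InputUnit:
    """Forwards both biological samples and setting changes to the monitor."""

    def __init__(self, monitor):
        self.monitor = monitor

    def send_signal(self, rr_interval_s):
        self.monitor.receive({"type": "signal", "rr_interval_s": rr_interval_s})

    def send_setting(self, **values):
        self.monitor.receive({"type": "setting", "values": values})
```

As a usage sketch, sending an R-R interval of 0.5 s displays a heart rate, and a subsequent setting change updates the monitor's configuration without clearing that display.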
10,777 | 10,777 | 16,147,238 | 2,663 | A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses the voting experts algorithm to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an n-gram trie for those label sequences. The sequence layer computes the entropies for the nodes in the n-gram trie and determines sliding-window length and vote-count parameters. Once these parameters are determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies. | 1. A computer-implemented method for evaluating objects detected in a video stream, the method comprising:
detecting a plurality of foreground objects present in the video stream; for each foreground object of the plurality of foreground objects, building a trajectory characterizing each foreground object in a series of successive frames of the video stream; storing each trajectory in a memory; identifying one or more patterns of behavior of objects in the video stream using the stored trajectories; detecting a successive foreground object in the video stream; building a trajectory of the successive foreground object; determining a probability distribution that the trajectory of the successive object is anomalous based on the stored trajectories; and if the trajectory of the successive object is determined to be anomalous, generating an alert. 2. The method of claim 1, wherein each trajectory in memory comprises a sequence of vectors storing kinematic data derived from observations of the foreground object in the video stream. 3. The method of claim 2, wherein the sequences of vectors are mapped to nodes of a self-organizing map (SOM) and wherein nodes of the SOM are clustered using an adaptive resonance theory (ART) network to generate a sequence of SOM nodes. 4. The method of claim 3, wherein determining the probability distribution comprises determining a probability of observing the sequence of SOM nodes of the successive object in conjunction with a probability of observing a sequence of SOM nodes associated with the plurality of foreground objects. 5. The method of claim 1, wherein each of the plurality of foreground objects is classified as being an instance of an object type sharing similar microfeature vectors characterizing the foreground object in the series of successive frames of the video stream. 6. A non-transitory computer storage medium storing instructions which, when executed on a processor, perform an operation for evaluating objects detected in a video stream, the operation comprising:
detecting a plurality of foreground objects present in the video stream; for each foreground object of the plurality of foreground objects, building a trajectory characterizing each foreground object in a series of successive frames of the video stream; storing each trajectory in a memory; identifying one or more patterns of behavior of objects in the video stream using the stored trajectories; detecting a successive foreground object in the video stream; building a trajectory of the successive foreground object; determining a probability distribution that the trajectory of the successive object is anomalous based on the stored trajectories; and if the trajectory of the successive object is determined to be anomalous, generating an alert. 7. The non-transitory computer storage medium of claim 6, wherein each trajectory in memory comprises a sequence of vectors storing kinematic data derived from observations of the foreground object in the video stream. 8. The non-transitory computer storage medium of claim 7, wherein the sequences of vectors are mapped to nodes of a self-organizing map (SOM) and wherein nodes of the SOM are clustered using an adaptive resonance theory (ART) network to generate a sequence of SOM nodes. 9. The non-transitory computer storage medium of claim 8, wherein determining the probability distribution comprises determining a probability of observing the sequence of SOM nodes of the successive object in conjunction with a probability of observing a sequence of SOM nodes associated with the plurality of foreground objects. 10. The non-transitory computer storage medium of claim 9, wherein each of the plurality of foreground objects is classified as being an instance of an object type sharing similar microfeature vectors characterizing the foreground object in the series of successive frames of the video stream. 11. A video surveillance system, comprising:
a video source configured to provide a single input video stream captured by a video camera; a processor; and a memory containing a program, which, when executed on the processor, is configured to perform an operation for evaluating objects detected in the single input video stream, the operation comprising: detecting a plurality of foreground objects present in the video stream; for each foreground object of the plurality of foreground objects, building a trajectory characterizing each foreground object in a series of successive frames of the single input video stream; storing each trajectory in the memory; identifying one or more patterns of behavior of objects in the video stream using the stored trajectories; detecting a successive foreground object in the video stream; building a trajectory of the successive foreground object; determining a probability distribution that the trajectory of the successive object is anomalous based on the stored trajectories; and if the trajectory of the successive object is determined to be anomalous, generating an alert. 12. The system of claim 11, wherein each trajectory in memory comprises a sequence of vectors storing kinematic data derived from observations of the foreground object in the single input video stream. 13. The system of claim 12, wherein the sequences of vectors are mapped to nodes of a self-organizing map (SOM) and wherein nodes of the SOM are clustered using an adaptive resonance theory (ART) network to generate a sequence of SOM nodes. 14. The system of claim 13, wherein determining the probability distribution comprises determining a probability of observing the sequence of SOM nodes of the successive object in conjunction with a probability of observing a sequence of SOM nodes associated with the plurality of foreground objects. 15. 
The system of claim 14, wherein each of the plurality of foreground objects is classified as being an instance of an object type sharing similar microfeature vectors characterizing the foreground object in the series of successive frames of the video stream. | 2,600
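The method in claims 1-4 of this record — storing per-object trajectories, mapping them to discrete labels, and flagging a new trajectory whose sequence probability is low relative to the stored ones — can be sketched as follows. This is an illustrative sketch under stated assumptions: a coarse grid quantization stands in for the SOM/ART clustering of claim 3, a smoothed bigram model stands in for the learned sequence statistics, and all names, cell sizes, and thresholds are hypothetical.

```python
import math
from collections import defaultdict

def quantize(trajectory, cell=10.0):
    """Map (x, y) observations to coarse grid labels; a stand-in for SOM/ART nodes."""
    return [(int(x // cell), int(y // cell)) for x, y in trajectory]

class TrajectoryModel:
    """Learns label-transition statistics from stored trajectories."""

    def __init__(self):
        self.bigrams = defaultdict(int)   # counts of (label_a, label_b) transitions
        self.unigrams = defaultdict(int)  # counts of label_a as a transition source

    def observe(self, trajectory):
        labels = quantize(trajectory)
        for a, b in zip(labels, labels[1:]):
            self.bigrams[(a, b)] += 1
            self.unigrams[a] += 1

    def score(self, trajectory):
        """Geometric mean of smoothed transition probabilities along the trajectory."""
        labels = quantize(trajectory)
        steps = list(zip(labels, labels[1:]))
        vocab = len(self.unigrams)        # capture size before lookups
        logp = 0.0
        for a, b in steps:
            num = self.bigrams.get((a, b), 0) + 1          # add-one smoothing
            den = self.unigrams.get(a, 0) + vocab + 1
            logp += math.log(num / den)
        return math.exp(logp / max(len(steps), 1))

    def is_anomalous(self, trajectory, threshold=0.2):
        # Generate an alert condition when the sequence probability is low
        # relative to the stored trajectories (claim 1's anomaly test).
        return self.score(trajectory) < threshold
```

Trained on repeated straight-line trajectories, the model scores a matching trajectory well above the threshold and an erratic, jumping trajectory well below it.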
10,778 | 10,778 | 15,211,999 | 2,647 | In a geographic area with limited satellite coverage, multiple signal sources are statically disposed along a path through the geographic area. To automatically determine geographic positions of the signal sources, signal data collected by a receiver moving along the path is received, where the signal data indicates changes, over a period of time, in strength of respective signals emitted by the signal sources. Indications of a first position of the receiver at a first time prior to entering the geographic area and a second position of the receiver at a second time subsequent to leaving the geographic area are received, and positions for the signal sources are determined using the received signal data and the received indications of the positions and the times. The determined positions for the signal sources are used to geoposition a device moving along the path. | 1. A method for automatically determining geographic positions of signal sources in areas with limited satellite coverage, the method comprising:
receiving, by one or more processors, signal data collected by a receiver moving along a path through a geographic area with limited satellite coverage, the signal data being indicative of changes, over a period of time, in strength of respective signals emitted by multiple signal sources statically disposed along the path; receiving, by the one or more processors, indications of a first position of the receiver at a first time prior to entering the geographic area and a second position of the receiver at a second time subsequent to leaving the geographic area; determining, by the one or more processors, positions for the signal sources using the received signal data and the received indications of the positions and the times; and using the determined positions for the signal sources to geoposition a device moving along the path. 2. The method of claim 1, wherein determining the positions includes:
determining, for each of the signal sources, a time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver, and determining an order in which the signal sources are arranged along the path using the determined times at which the signals reach their corresponding peak values. 3. The method of claim 2, wherein determining the time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver includes:
identifying, within the signal data related to the signal source, a pair of peak values in the strength of the signal, the pair of peak values separated in time approximately by how long it takes for a length of a car to pass by the signal source at the determined speed, and extrapolating the time at which the strength of the signal reaches its peak value at the receiver from the pair of peak values. 4. The method of claim 2, wherein determining the positions for the signal sources further includes:
determining a length of the path, determining a number of the signal sources, determining an average distance between the signal sources using the length of the path and the determined number; and determining the positions for the signal sources using the received signal data and the determined order in which the signal sources are arranged along the path. 5. The method of claim 2, further comprising determining an average speed at which the receiver moves past the signal source based at least in part on a difference between the first position and the second position and a difference between the first time and the second time;
wherein determining the positions for the signal sources includes using the determined average speed, the determined peak values corresponding to the signal sources, and the determined order in which the signal sources are arranged along the path. 6. The method of claim 2, wherein determining the positions for the signal sources further includes:
for each of the signal sources, determining a respective amount of time during which the strength of the signal from the signal source is above a threshold value at the receiver, and using the determined amounts of time, determining a speed at which the receiver moves past the signal source, wherein the receiver moves past at least two of the signal sources at different speeds. 7. The method of claim 1, wherein the signal data further includes data collected by multiple receivers moving along the path; and wherein determining the positions for the signal sources includes combining the data collected by the multiple receivers. 8. The method of claim 7, further comprising using the data collected by the multiple receivers to determine a profile for each of the multiple signal sources, the profile specifying how signals from at least one of the multiple signal sources should be adjusted when geopositioning the device. 9. The method of claim 1, wherein determining the positions for the signal sources includes using an indication of known geometry of the path along which the signal sources are arranged. 10. A system for geopositioning receivers in areas with limited satellite coverage, the system comprising:
one or more processors; a non-transitory computer-readable memory coupled to the one or more processors and storing thereon instructions that, when executed by the one or more processors, cause the system to: receive signal data collected by a receiver moving along a path through a geographic area with limited satellite coverage, the signal data being indicative of changes, over a period of time, in strength of respective signals emitted by multiple signal sources statically disposed along the path, receive indications of a first position of the receiver at a first time prior to entering the geographic area and a second position of the receiver at a second time subsequent to leaving the geographic area, determine positions for the signal sources using the received signal data and the received indications of the positions and the times, and use the determined positions for the signal sources to geoposition a device moving along the path. 11. The system of claim 10, wherein to determine the positions, the instructions further cause the system to:
determine, for each of the signal sources, a time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver, and determine an order in which the signal sources are arranged along the path using the determined times at which the signals reach their corresponding peak values. 12. The system of claim 11, wherein to determine the time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver, the instructions cause the system to:
identify, within the signal data related to the signal source, a pair of peak values in the strength of the signal, the pair of peak values separated in time approximately by how long it takes for a length of a car to pass by the signal source at the determined speed, and extrapolate the time at which the strength of the signal reaches its peak value at the receiver from the pair of peak values. 13. The system of claim 11, wherein to determine the positions for the signal sources, the instructions cause the system to:
determine a length of the path, determine a number of the signal sources, determine an average distance between the signal sources using the length of the path and the determined number; and determine the positions for the signal sources using the received signal data and the determined order in which the signal sources are arranged along the path. 14. The system of claim 11, wherein the instructions further cause the system to:
determine an average speed at which the receiver moves past the signal source based at least in part on a difference between the first position and the second position and a difference between the first time and the second time; wherein to determine the positions for the signal sources, the instructions cause the system to use the determined average speed, the determined peak values corresponding to the signal sources, and the determined order in which the signal sources are arranged along the path. 15. A method for automatically determining positions of signal sources in areas with limited satellite coverage, the method comprising:
receiving, by one or more processors, a description of geometry of a path along which multiple signal sources are arranged, the path traversing a geographic area with limited satellite coverage; receiving, by the one or more processors from the multiple of signal sources, signal data indicative of distances between at least several of the multiple signal sources, the signal data generated by the plurality of signal sources transmitting management frames; determining, using the indication of the path and the received signal data, positions of the multiple signal sources along the path; and using the determined positions for the signal sources to geoposition a device moving along the path. 16. The method of claim 15, wherein the multiple signal sources include a first signal source positioned where the path enters the geographic area, a second signal source positioned where the path exits the geographic area, and several signal sources between the first signal source and the second signal source; the method further comprising:
receiving indications of positions of the first signal source and the second signal source, and determining the positions for the several signal sources disposed between the first signal source and the second signal source further using the received indications of positions of the first signal source and the second signal source. 17. The method of claim 15, wherein the signal data includes, for each of the multiple signal sources, an indication of a distance to at least one other signal source. 18. The method of claim 15, further comprising providing the determined positions of the signal sources to the respective signal sources for subsequent transmission in management frames. 19. The method of claim 15, wherein the signal data includes round-trip time (RTT) measurements. 20. The method of claim 15, wherein the signal data includes received signal strength indication (RSSI).
receiving, by one or more processors, signal data collected by a receiver moving along a path through a geographic area with limited satellite coverage, the signal data being indicative of changes, over a period of time, in strength of respective signals emitted by multiple signal sources statically disposed along the path; receiving, by the one or more processors, indications of a first position of the receiver at a first time prior to entering the geographic area and a second position of the receiver at a second time subsequent to leaving the geographic area; determining, by the one or more processors, positions for the signal sources using the received signal data and the received indications of the positions and the times; and using the determined positions for the signal sources to geoposition a device moving along the path. 2. The method of claim 1, wherein determining the estimated positions includes:
determining, for each of the signal sources, a time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver, and determining an order in which the signal sources are arranged along the path using the determined times at which the signals reach their corresponding peak values. 3. The method of claim 2, wherein determining the time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver includes:
identifying, within the signal data related to the signal source, a pair of peak values in the strength of the signal, the pair of peak values separated in time approximately by how long it takes for a length of a car to pass by the signal source at the determined speed, and extrapolating the time at which the strength of the signal reaches its peak value at the receiver from the pair of peak values. 4. The method of claim 2, wherein determining the positions for the signal sources further includes:
determining a length of the path, determining a number of the signal sources, determining an average distance between the signal sources using the length of the path and the determined number; and determining the positions for the signal sources using the received signal data and the determined order in which the signal sources are arranged along the path. 5. The method of claim 2, further comprising determining an average speed at which the receiver moves past the signal source based at least in part on a difference between the first position and the second position and a difference between the first time and the second time;
wherein determining the positions for the signal sources includes using the determined average speed, the determined peak values corresponding to the signal sources, and the determined order in which the signal sources are arranged along the path. 6. The method of claim 2, wherein determining the positions for the signal sources further includes:
for each of the signal sources, determining a respective amount of time during which the strength of the signal from the signal source is above a threshold value at the receiver, and using the determined amounts of time, determining a speed at which the receiver moves past the signal source, wherein the receiver moves past at least two of the signal sources at different speeds. 7. The method of claim 1, wherein the signal data further includes data collected by multiple receivers moving along the path; and wherein determining the positions for the signal sources includes combining the data collected by the multiple receivers. 8. The method of claim 7, further comprising using the data collected by the multiple receivers to determine a profile for each of the multiple signal sources, the profile specifying how signals from at least one of the multiple signal sources should be adjusted when geopositioning the device. 9. The method of claim 1, wherein determining the positions for the signal sources includes using an indication of known geometry of the path along which the signal sources are arranged. 10. A system for geopositioning receivers in areas with limited satellite coverage, the system comprising:
one or more processors; a non-transitory computer-readable memory coupled to the one or more processors and storing thereon instructions that, when executed by the one or more processors, cause the system to: receive signal data collected by a receiver moving along a path through a geographic area with limited satellite coverage, the signal data being indicative of changes, over a period of time, in strength of respective signals emitted by multiple signal sources statically disposed along the path, receive indications of a first position of the receiver at a first time prior to entering the geographic area and a second position of the receiver at a second time subsequent to leaving the geographic area, determine positions for the signal sources using the received signal data and the received indications of the positions and the times, and use the determined positions for the signal sources to geoposition a device moving along the path. 11. The system of claim 10, wherein to determine the estimated positions, the instructions further cause the system to:
determine, for each of the signal sources, a time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver, and determine an order in which the signal sources are arranged along the path using the determined times at which the signals reach their corresponding peak values. 12. The system of claim 11, wherein to determine the time at which the strength of the signal emitted by the signal source reaches its peak value at the receiver, the instructions cause the system to:
identify, within the signal data related to the signal source, a pair of peak values in the strength of the signal, the pair of peak values separated in time approximately by how long it takes for a length of a car to pass by the signal source at the determined speed, and extrapolate the time at which the strength of the signal reaches its peak value at the receiver from the pair of peak values. 13. The system of claim 11, wherein to determine the positions for the signal sources, the instructions cause the system to:
determine a length of the path, determine a number of the signal sources, determine an average distance between the signal sources using the length of the path and the determined number; and determine the positions for the signal sources using the received signal data and the determined order in which the signal sources are arranged along the path. 14. The system of claim 11, wherein the instructions further cause the system to:
determine an average speed at which the receiver moves past the signal source based at least in part on a difference between the first position and the second position and a difference between the first time and the second time; wherein to determine the positions for the signal sources, the instructions cause the system to use the determined average speed, the determined peak values corresponding to the signal sources, and the determined order in which the signal sources are arranged along the path. 15. A method for automatically determining positions of signal sources in areas with limited satellite coverage, the method comprising:
receiving, by one or more processors, a description of geometry of a path along which multiple signal sources are arranged, the path traversing a geographic area with limited satellite coverage; receiving, by the one or more processors from the multiple signal sources, signal data indicative of distances between at least several of the multiple signal sources, the signal data generated by the multiple signal sources transmitting management frames; determining, using the description of the path and the received signal data, positions of the multiple signal sources along the path; and using the determined positions for the signal sources to geoposition a device moving along the path. 16. The method of claim 15, wherein the multiple signal sources include a first signal source positioned where the path enters the geographic area, a second signal source positioned where the path exits the geographic area, and several signal sources between the first signal source and the second signal source; the method further comprising:
receiving indications of positions of the first signal source and the second signal source, and determining the positions for the several signal sources disposed between the first signal source and the second signal source further using the received indications of positions of the first signal source and the second signal source. 17. The method of claim 15, wherein the signal data includes, for each of the multiple signal sources, an indication of a distance to at least one other signal source. 18. The method of claim 15, further comprising providing the determined positions of the signal sources to the respective signal sources for subsequent transmission in management frames. 19. The method of claim 15, wherein the signal data includes round-trip time (RTT) measurements. 20. The method of claim 15, wherein the signal data includes received signal strength indication (RSSI).
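Claims 1, 5, and 14 above estimate source positions from the receiver's entry/exit fixes and the per-source signal-peak times. The sketch below is one illustrative way to compute those estimates along a one-dimensional path, assuming a constant average speed; the function and variable names are hypothetical, not from the patent.

```python
# Hedged sketch of the position-estimation idea: the receiver's known
# entry/exit positions and times give an average speed, and each source
# is placed where the receiver was when that source's signal peaked.

def estimate_source_positions(entry_pos, exit_pos, t_entry, t_exit, peak_times):
    """Return estimated distances along the path for each signal source."""
    path_length = exit_pos - entry_pos
    avg_speed = path_length / (t_exit - t_entry)  # difference of positions / times
    # A source is assumed to sit where the receiver was at its peak time.
    return [entry_pos + avg_speed * (t - t_entry) for t in sorted(peak_times)]

# Example: a 1000 m tunnel traversed in 50 s, with three observed peaks.
positions = estimate_source_positions(0.0, 1000.0, 0.0, 50.0, [10.0, 25.0, 40.0])
print(positions)  # [200.0, 500.0, 800.0]
```

Claim 4's average-spacing variant would instead divide the path length by the source count, using only the determined order rather than the peak times.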
10,779 | 10,779 | 15,427,624 | 2,653 | A system and method for converting a passive protector earmuff to a communication and/or active noise reduction (ANR) headset include mounting active components to a frame subassembly configured for insertion into the passive earcup to divide the earcup volume into a front cavity without additional passive leak paths and a back cavity having a volume that improves speaker/driver power efficiency with a resistive vent to atmosphere. An earcup having an external shell includes a frame configured for positioning within the external shell and having a first support adapted to contact an interior of the shell and a second circumferential support cooperating with a seal to contact an ear seal plate of the earcup to form the front and back cavities. The frame may support a speaker between the front and back cavity, and secure circuitry within the back cavity. | 1. A method for converting a passive hearing protector having a circumaural earcup, comprising:
securing a frame subassembly within the earcup to divide an earcup cavity into a front cavity and a back cavity, the frame subassembly having a speaker extending between the front and back cavities and processing circuitry secured to the frame and coupled to the speaker, the processing circuitry configured for connection to a microphone. 2. The method of claim 1 further comprising securing the microphone to the frame. 3. The method of claim 1 further comprising installing a resistive vent through the earcup to couple the back cavity to atmosphere. 4. The method of claim 3 further comprising creating a hole in the earcup adapted to receive the resistive vent. 5. The method of claim 3 wherein the microphone comprises an ambient microphone. 6. The method of claim 3 wherein the resistive vent includes an integrated ambient microphone, the method further comprising connecting the ambient microphone to the processing circuitry prior to securing the frame subassembly within the earcup. 7. The method of claim 1 wherein the microphone comprises a speech microphone, the method further comprising coupling the speech microphone to the processing circuitry through a hole in the earcup within the back cavity. 8. The method of claim 7 further comprising creating the hole in the earcup within the back cavity by at least one of machining or removing a plug. 9. The method of claim 7 wherein the speech microphone comprises a boom microphone, the method further comprising attaching a strain relief connector associated with the boom microphone to at least one ear cup prior to securing the frame subassembly within the earcup. 10. The method of claim 1 wherein the processing circuitry comprises active noise reduction (ANR) circuitry. 11. The method of claim 10 wherein the processing circuitry comprises a microprocessor programmed to generate ANR signals for the speaker based on signals received from the microphone. 12. 
A method for converting a passive hearing protection circumaural earcup, comprising:
creating at least one opening in the earcup adapted to receive an ambient microphone and a resistive vent; creating at least one additional opening in the earcup adapted to receive a speech microphone; connecting the speech microphone and the ambient microphone to processing circuitry mounted on a frame, the frame including a driver extending through the frame and a sense microphone mounted on the frame; and inserting the frame into the earcup to form a back cavity between the frame and the earcup and sealed relative to a front cavity formed between the frame and a cushion centroid of a circumaural cushion surrounding a periphery of an earcup opening. 13. The method of claim 12 wherein the speech microphone comprises a boom microphone, the method further comprising attaching the boom microphone to the earcup. 14. A headset comprising:
a circumaural earcup having a shell; an acoustic damping membrane positioned on an interior surface of the shell; a frame positioned within the shell to separate an interior volume of the shell into a back cavity between the frame and the shell and a front cavity sealed from the back cavity, the frame configured to receive: a speaker extending between the front and back cavity, and processing circuitry in the back cavity; and a communication microphone coupled to the processing circuitry. 15. The headset of claim 14 further comprising a sense microphone mounted to the frame and coupled to the processing circuitry. 16. The headset of claim 14 further comprising a resistive vent coupling the back cavity to atmosphere. 17. The headset of claim 16 further comprising an ambient microphone coupled to the processing circuitry. 18. The headset of claim 17 wherein the ambient microphone is integrated with the resistive vent and coupled to the processing circuitry. 19. The headset of claim 17 wherein the processing circuitry generates an ANR signal based on signals from the sense microphone and the ambient microphone and outputs the ANR signal to the speaker.
The frame may support a speaker between the front and back cavity, and secure circuitry within the back cavity.1. A method for converting a passive hearing protector having a circumaural earcup, comprising:
securing a frame subassembly within the earcup to divide an earcup cavity into a front cavity and a back cavity, the frame subassembly having a speaker extending between the front and back cavities and processing circuitry secured to the frame and coupled to the speaker, the processing circuitry configured for connection to a microphone. 2. The method of claim 1 further comprising securing the microphone to the frame. 3. The method of claim 1 further comprising installing a resistive vent through the earcup to couple the back cavity to atmosphere. 4. The method of claim 3 further comprising creating a hole in the earcup adapted to receive the resistive vent. 5. The method of claim 3 wherein the microphone comprises an ambient microphone. 6. The method of claim 3 wherein the resistive vent includes an integrated ambient microphone, the method further comprising connecting the ambient microphone to the processing circuitry prior to securing the frame subassembly within the earcup. 7. The method of claim 1 wherein the microphone comprises a speech microphone, the method further comprising coupling the speech microphone to the processing circuitry through a hole in the earcup within the back cavity. 8. The method of claim 7 further comprising creating the hole in the earcup within the back cavity by at least one of machining or removing a plug. 9. The method of claim 7 wherein the speech microphone comprises a boom microphone, the method further comprising attaching a strain relief connector associated with the boom microphone to at least one ear cup prior to securing the frame subassembly within the earcup. 10. The method of claim 1 wherein the processing circuitry comprises active noise reduction (ANR) circuitry. 11. The method of claim 10 wherein the processing circuitry comprises a microprocessor programmed to generate ANR signals for the speaker based on signals received from the microphone. 12. 
A method for converting a passive hearing protection circumaural earcup, comprising:
creating at least one opening in the earcup adapted to receive an ambient microphone and a resistive vent; creating at least one additional opening in the earcup adapted to receive a speech microphone; connecting the speech microphone and the ambient microphone to processing circuitry mounted on a frame, the frame including a driver extending through the frame and a sense microphone mounted on the frame; and inserting the frame into the earcup to form a back cavity between the frame and the earcup and sealed relative to a front cavity formed between the frame and a cushion centroid of a circumaural cushion surrounding a periphery of an earcup opening. 13. The method of claim 12 wherein the speech microphone comprises a boom microphone, the method further comprising attaching the boom microphone to the earcup. 14. A headset comprising:
a circumaural earcup having a shell; an acoustic damping membrane positioned on an interior surface of the shell; a frame positioned within the shell to separate an interior volume of the shell into a back cavity between the frame and the shell and a front cavity sealed from the back cavity, the frame configured to receive: a speaker extending between the front and back cavity, and processing circuitry in the back cavity; and a communication microphone coupled to the processing circuitry. 15. The headset of claim 14 further comprising a sense microphone mounted to the frame and coupled to the processing circuitry. 16. The headset of claim 14 further comprising a resistive vent coupling the back cavity to atmosphere. 17. The headset of claim 16 further comprising an ambient microphone coupled to the processing circuitry. 18. The headset of claim 17 wherein the ambient microphone is integrated with the resistive vent and coupled to the processing circuitry. 19. The headset of claim 17 wherein the processing circuitry generates an ANR signal based on signals from the sense microphone and the ambient microphone and outputs the ANR signal to the speaker.
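Claim 19 above says the processing circuitry generates an ANR signal from the sense and ambient microphones. One conventional way such circuitry is realized is a feedforward filter adapted by an LMS update; the sketch below is a generic single-step illustration of that technique, not the patent's actual circuitry, and every name in it is invented here.

```python
# Generic feedforward ANR step with an LMS weight update. The ambient
# microphone supplies the reference samples; the sense (error) microphone
# supplies the residual noise used to adapt the filter weights.

def anr_step(weights, ambient_hist, sense_sample, mu=0.01):
    """Produce one anti-noise sample and return the adapted filter weights."""
    # Anti-noise is the negated filter output over recent ambient samples.
    anti_noise = -sum(w * x for w, x in zip(weights, ambient_hist))
    # LMS update: nudge each weight by the error times its input sample,
    # scaled by the step size mu.
    new_weights = [w + mu * sense_sample * x
                   for w, x in zip(weights, ambient_hist)]
    return anti_noise, new_weights

# One step with a single-tap filter: weight 0.5, ambient sample 2.0,
# residual 0.1 measured at the sense microphone.
out, w = anr_step([0.5], [2.0], 0.1)
print(out, w)  # -1.0 [0.502]
```

In a real headset the filter would run per audio sample with a secondary-path model between driver and sense microphone; this sketch omits that for brevity.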
10,780 | 10,780 | 13,716,315 | 2,694 | A method of providing a multi touch interaction in a portable terminal includes receiving a first touch input, performing a first function corresponding to the received first touch input, receiving a second touch input when the first touch input is maintained, and performing a second function corresponding to the received second touch input while maintaining a movement of at least one specific object selected by the first touch input. | 1. A method of providing a multi touch interaction in a portable terminal having a touch screen, the method comprising:
detecting a first touch input; performing a first function corresponding to the detected first touch input; detecting a second touch input while the first touch input is maintained; and performing a second function corresponding to the detected second touch input. 2. The method of claim 1, wherein the second function is a function related to the first function. 3. The method of claim 2, wherein the first touch input is a touch event that selects at least one specific object in a home screen or a list screen, and the first function is a function of editing the home screen or the list screen. 4. The method of claim 3, wherein the second touch input is a touch movement event that is generated in the home screen or the list screen, and the second function is a function of moving the home screen or the list screen corresponding to the touch movement event while maintaining a movement of the selected at least one specific object. 5. The method of claim 4, further comprising:
when the first touch input is released, changing a location of the at least one selected specific object to a location in which the first touch input is released. 6. The method of claim 2, wherein the first touch input is a touch event that selects at least one specific page in an electronic book content execution screen, and the first function is a function of adding a bookmark to the selected specific page. 7. The method of claim 6, wherein the second touch input is a touch movement event that moves a page of an electronic book content, and the second function is a function of moving the page of the electronic book content corresponding to the touch movement event. 8. The method of claim 7, further comprising:
when the page is moved to a previous or a next page and a touch movement event of the first touch input is inputted, moving to a page to which the bookmark is added. 9. The method of claim 6, further comprising:
when the first touch input is released, removing the bookmark added to the specific page. 10. An apparatus for providing a multi touch interaction in a portable terminal, comprising:
a touch panel; and a controller configured to perform a first function corresponding to a first touch input detected on the touch panel and configured to perform a second function corresponding to a second touch input when the second touch input is detected while the first touch input is maintained. 11. The apparatus of claim 10, wherein the second function is a function related to the first function. 12. The apparatus of claim 11, wherein the first touch input is a touch event that selects at least one specific object in a home screen or a list screen, and the first function is a function of editing the home screen or the list screen. 13. The apparatus of claim 12, wherein the second touch input is a touch movement event that is generated in the home screen or the list screen, and the second function is a function of moving the home screen or the list screen corresponding to the touch movement event while maintaining a movement of the selected at least one specific object. 14. The apparatus of claim 13, wherein, when the first touch input is released, the controller changes a location of the selected at least one specific object to a location in which the first touch input is released. 15. The apparatus of claim 11, wherein the first touch input is a touch event that selects a specific page in an electronic book content execution screen, and the first function is a function of adding a bookmark to the selected specific page. 16. The apparatus of claim 15, wherein the second touch input is a touch movement event that moves a page of an electronic book content, and the second function is a function of moving the page of the electronic book content corresponding to the touch movement event. 17. The apparatus of claim 16, wherein, when the page is moved to a previous or a next page and a touch movement event of the first touch input is inputted, the controller outputs to a page to which the bookmark is added. 18. 
The apparatus of claim 15, wherein, when the first touch input is released, the controller removes the bookmark added to the specific page. 19. A computer-readable storage medium encoded with instructions that, when executed, cause a device to execute the method of claim 1. | A method of providing a multi touch interaction in a portable terminal includes receiving a first touch input, performing a first function corresponding to the received first touch input, receiving a second touch input when the first touch input is maintained, and performing a second function corresponding to the received second touch input while maintaining a movement of at least one specific object selected by the first touch input.1. A method of providing a multi touch interaction in a portable terminal having a touch screen, the method comprising:
detecting a first touch input; performing a first function corresponding to the detected first touch input; detecting a second touch input while the first touch input is maintained; and performing a second function corresponding to the detected second touch input. 2. The method of claim 1, wherein the second function is a function related to the first function. 3. The method of claim 2, wherein the first touch input is a touch event that selects at least one specific object in a home screen or a list screen, and the first function is a function of editing the home screen or the list screen. 4. The method of claim 3, wherein the second touch input is a touch movement event that is generated in the home screen or the list screen, and the second function is a function of moving the home screen or the list screen corresponding to the touch movement event while maintaining a movement of the selected at least one specific object. 5. The method of claim 4, further comprising:
when the first touch input is released, changing a location of the at least one selected specific object to a location in which the first touch input is released. 6. The method of claim 2, wherein the first touch input is a touch event that selects at least one specific page in an electronic book content execution screen, and the first function is a function of adding a bookmark to the selected specific page. 7. The method of claim 6, wherein the second touch input is a touch movement event that moves a page of an electronic book content, and the second function is a function of moving the page of the electronic book content corresponding to the touch movement event. 8. The method of claim 7, further comprising:
when the page is moved to a previous or a next page and a touch movement event of the first touch input is inputted, moving to a page to which the bookmark is added. 9. The method of claim 6, further comprising:
when the first touch input is released, removing the bookmark added to the specific page. 10. An apparatus for providing a multi touch interaction in a portable terminal, comprising:
a touch panel; and a controller configured to perform a first function corresponding to a first touch input detected on the touch panel and configured to perform a second function corresponding to a second touch input when the second touch input is detected while the first touch input is maintained. 11. The apparatus of claim 10, wherein the second function is a function related to the first function. 12. The apparatus of claim 11, wherein the first touch input is a touch event that selects at least one specific object in a home screen or a list screen, and the first function is a function of editing the home screen or the list screen. 13. The apparatus of claim 12, wherein the second touch input is a touch movement event that is generated in the home screen or the list screen, and the second function is a function of moving the home screen or the list screen corresponding to the touch movement event while maintaining a movement of the selected at least one specific object. 14. The apparatus of claim 13, wherein, when the first touch input is released, the controller changes a location of the selected at least one specific object to a location in which the first touch input is released. 15. The apparatus of claim 11, wherein the first touch input is a touch event that selects a specific page in an electronic book content execution screen, and the first function is a function of adding a bookmark to the selected specific page. 16. The apparatus of claim 15, wherein the second touch input is a touch movement event that moves a page of an electronic book content, and the second function is a function of moving the page of the electronic book content corresponding to the touch movement event. 17. The apparatus of claim 16, wherein, when the page is moved to a previous or a next page and a touch movement event of the first touch input is inputted, the controller outputs to a page to which the bookmark is added. 18. 
The apparatus of claim 15, wherein, when the first touch input is released, the controller removes the bookmark added to the specific page. 19. A computer-readable storage medium encoded with instructions that, when executed, cause a device to execute the method of claim 1. | 2,600 |
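The two-touch behavior claimed above (a first touch selects and holds an object, a second touch movement scrolls the screen while the hold persists, and releasing the first touch drops the object at the release location) can be sketched as a small state holder. This is a speculative illustration of the claimed interaction, with all class and method names invented here.

```python
# Minimal state holder for the two-touch edit interaction: first touch
# selects an object, second touch scrolls while the hold is maintained,
# and releasing the first touch drops the object where it was released.

class MultiTouchController:
    def __init__(self):
        self.held_object = None
        self.scroll_offset = 0

    def first_touch_down(self, obj):
        # First function: select the object and hold it (claim 3).
        self.held_object = obj

    def second_touch_move(self, delta):
        # Second function: scroll the home/list screen while the first
        # touch keeps the selected object moving with the finger (claim 4).
        if self.held_object is not None:
            self.scroll_offset += delta

    def first_touch_up(self, drop_position):
        # Releasing the first touch places the object at the release
        # location (claim 5).
        obj, self.held_object = self.held_object, None
        return obj, drop_position
```

A usage pass: select an icon, drag a second finger to scroll the screen, then lift the first finger to drop the icon on the newly visible page.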
10,781 | 10,781 | 15,705,040 | 2,647 | A device may obtain a device identifier from a monitoring device. The device may provide, to a server device, a request for a network-assigned identifier that is associated with the monitoring device. The request may cause the server device to use the device identifier to search a data structure for the network-assigned identifier. The device may receive the network-assigned identifier from the server device. The device may provide a request to establish a communication session with the monitoring device. The request may include the network-assigned identifier of the monitoring device and a network-assigned identifier of the device. The device may receive, after providing the request, an indication that the communication session is established. | 1. A device, comprising:
one or more processors to:
obtain a device identifier from a monitoring device,
where the monitoring device is a portable monitoring device and does not include a display screen and does not include a speaker;
provide, to a server device, a request for a network-assigned identifier that is associated with the monitoring device,
the request including the device identifier, and
the request to cause the server device to provide the network-assigned identifier to the device; and
establish a communication session with the monitoring device using the network-assigned identifier of the monitoring device and a network-assigned identifier of the device. 2. The device of claim 1, where the device identifier is an international mobile equipment identity (IMEI) and the network-assigned identifier is a mobile directory number (MDN). 3. The device of claim 1, where the one or more processors, when obtaining the device identifier, are to:
scan a code associated with the monitoring device to obtain the device identifier. 4. The device of claim 1, where the one or more processors, when obtaining the device identifier, are to:
pair with the monitoring device to establish a connection via a wireless personal area network (WPAN), and obtain the device identifier via the connection. 5. The device of claim 1, where the one or more processors, when obtaining the device identifier, are to:
receive the device identifier via a user interface of the device. 6. The device of claim 1, where the one or more processors, when establishing the communication session, are to:
provide, to the server device, a request to establish the communication session,
the request including the network-assigned identifier of the monitoring device and the network-assigned identifier of the device, and
receiving the request to cause the server device to provide the request to the monitoring device to cause the communication session to be established. 7. The device of claim 1, where the one or more processors are further to:
obtain, after establishing the communication session, location information indicating a geographic location of the monitoring device. 8. A non-transitory computer-readable medium storing instructions, the instructions comprising:
one or more instructions that, when executed by one or more processors, cause the one or more processors to:
obtain a device identifier from a monitoring device,
where the monitoring device does not include a display screen and does not include a speaker;
obtain, using the device identifier, a network-assigned identifier from a server device,
the network-assigned identifier being associated with the monitoring device;
provide a request to establish a communication session with the monitoring device,
the request including the network-assigned identifier of the monitoring device and a network-assigned identifier of the device; and
receive, after providing the request, an indication that the communication session is established. 9. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to obtain the device identifier, cause the one or more processors to:
scan a code associated with the monitoring device to obtain the device identifier. 10. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to obtain the device identifier, cause the one or more processors to:
power on wireless personal area network (WPAN) capabilities, identify, after powering on WPAN capabilities, the monitoring device using a scan, pair with the monitoring device after identifying the monitoring device with the scan, and obtain, from the monitoring device, the device identifier after pairing with the monitoring device. 11. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to obtain the device identifier, cause the one or more processors to:
receive the device identifier via a user interface of the device. 12. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to provide the request to establish the communication session, cause the one or more processors to:
provide the request to establish the communication session to the server device,
the server device to provide the request to the monitoring device, and
the monitoring device to automatically establish the communication session upon receiving the request; and
where the one or more instructions, that cause the one or more processors to receive the indication that the communication session is established, cause the one or more processors to:
receive the indication that the communication session is established based on the monitoring device automatically establishing the communication session. 13. The non-transitory computer-readable medium of claim 8, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to:
obtain, after receiving the indication that the communication session is established, location information indicating a geographic location of the monitoring device. 14. A method, comprising:
obtaining, by a device, a device identifier from a monitoring device; providing, by the device and to a server device, a request for a network-assigned identifier that is associated with the monitoring device,
the request to cause the server device to use the device identifier to search a data structure for the network-assigned identifier;
receiving, by the device and from the server device, the network-assigned identifier; providing, by the device, a request to establish a communication session with the monitoring device,
the request including the network-assigned identifier of the monitoring device and a network-assigned identifier of the device; and
receiving, by the device and after providing the request, an indication that the communication session is established. 15. The method of claim 14, where the monitoring device does not include a display screen or a speaker. 16. The method of claim 14, where obtaining the device identifier comprises:
obtaining the device identifier from the monitoring device,
where the monitoring device obtains the device identifier by providing a registration request to the server device to cause the server device to register the monitoring device and providing an acknowledgement to the monitoring device after registration is complete. 17. The method of claim 14, where obtaining the device identifier comprises:
scanning a code associated with the monitoring device to obtain the device identifier. 18. The method of claim 14, where obtaining the device identifier comprises:
powering on wireless personal area network (WPAN) capabilities, pairing with the monitoring device to establish a communication session via the WPAN, and obtaining the device identifier via the communication session. 19. The method of claim 14, where providing the request to establish the communication session comprises:
providing the request to establish the communication session to the server device,
the server device to provide the request to the monitoring device, and
the monitoring device to establish the communication session; and
where receiving the indication that the communication session is established comprises:
receiving the indication that the communication session is established based on the monitoring device establishing the communication session. 20. The method of claim 14, further comprising:
obtaining, after receiving the indication that the communication session has been established, location information indicating a geographic location of the monitoring device. | A device may obtain a device identifier from a monitoring device. The device may provide, to a server device, a request for a network-assigned identifier that is associated with the monitoring device. The request may cause the server device to use the device identifier to search a data structure for the network-assigned identifier. The device may receive the network-assigned identifier from the server device. The device may provide a request to establish a communication session with the monitoring device. The request may include the network-assigned identifier of the monitoring device and a network-assigned identifier of the device. The device may receive, after providing the request, an indication that the communication session is established.1. A device, comprising:
one or more processors to:
obtain a device identifier from a monitoring device,
where the monitoring device is a portable monitoring device and does not include a display screen and does not include a speaker;
provide, to a server device, a request for a network-assigned identifier that is associated with the monitoring device,
the request including the device identifier, and
the request to cause the server device to provide the network-assigned identifier to the device; and
establish a communication session with the monitoring device using the network-assigned identifier of the monitoring device and a network-assigned identifier of the device. 2. The device of claim 1, where the device identifier is an international mobile equipment identity (IMEI) and the network-assigned identifier is a mobile directory number (MDN). 3. The device of claim 1, where the one or more processors, when obtaining the device identifier, are to:
scan a code associated with the monitoring device to obtain the device identifier. 4. The device of claim 1, where the one or more processors, when obtaining the device identifier, are to:
pair with the monitoring device to establish a connection via a wireless personal area network (WPAN), and obtain the device identifier via the connection. 5. The device of claim 1, where the one or more processors, when obtaining the device identifier, are to:
receive the device identifier via a user interface of the device. 6. The device of claim 1, where the one or more processors, when establishing the communication session, are to:
provide, to the server device, a request to establish the communication session,
the request including the network-assigned identifier of the monitoring device and the network-assigned identifier of the device, and
receiving the request to cause the server device to provide the request to the monitoring device to cause the communication session to be established. 7. The device of claim 1, where the one or more processors are further to:
obtain, after establishing the communication session, location information indicating a geographic location of the monitoring device. 8. A non-transitory computer-readable medium storing instructions, the instructions comprising:
one or more instructions that, when executed by one or more processors, cause the one or more processors to:
obtain a device identifier from a monitoring device,
where the monitoring device does not include a display screen and does not include a speaker;
obtain, using the device identifier, a network-assigned identifier from a server device,
the network-assigned identifier being associated with the monitoring device;
provide a request to establish a communication session with the monitoring device,
the request including the network-assigned identifier of the monitoring device and a network-assigned identifier of the device; and
receive, after providing the request, an indication that the communication session is established. 9. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to obtain the device identifier, cause the one or more processors to:
scan a code associated with the monitoring device to obtain the device identifier. 10. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to obtain the device identifier, cause the one or more processors to:
power on wireless personal area network (WPAN) capabilities, identify, after powering on WPAN capabilities, the monitoring device using a scan, pair with the monitoring device after identifying the monitoring device with the scan, and obtain, from the monitoring device, the device identifier after pairing with the monitoring device. 11. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to obtain the device identifier, cause the one or more processors to:
receive the device identifier via a user interface of the device. 12. The non-transitory computer-readable medium of claim 8, where the one or more instructions, that cause the one or more processors to provide the request to establish the communication session, cause the one or more processors to:
provide the request to establish the communication session to the server device,
the server device to provide the request to the monitoring device, and
the monitoring device to automatically establish the communication session upon receiving the request; and
where the one or more instructions, that cause the one or more processors to receive the indication that the communication session is established, cause the one or more processors to:
receive the indication that the communication session is established based on the monitoring device automatically establishing the communication session. 13. The non-transitory computer-readable medium of claim 8, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to:
obtain, after receiving the indication that the communication session is established, location information indicating a geographic location of the monitoring device. 14. A method, comprising:
obtaining, by a device, a device identifier from a monitoring device; providing, by the device and to a server device, a request for a network-assigned identifier that is associated with the monitoring device,
the request to cause the server device to use the device identifier to search a data structure for the network-assigned identifier;
receiving, by the device and from the server device, the network-assigned identifier; providing, by the device, a request to establish a communication session with the monitoring device,
the request including the network-assigned identifier of the monitoring device and a network-assigned identifier of the device; and
receiving, by the device and after providing the request, an indication that the communication session is established. 15. The method of claim 14, where the monitoring device does not include a display screen or a speaker. 16. The method of claim 14, where obtaining the device identifier comprises:
obtaining the device identifier from the monitoring device,
where the monitoring device obtains the device identifier by providing a registration request to the server device to cause the server device to register the monitoring device and providing an acknowledgement to the monitoring device after registration is complete. 17. The method of claim 14, where obtaining the device identifier comprises:
scanning a code associated with the monitoring device to obtain the device identifier. 18. The method of claim 14, where obtaining the device identifier comprises:
powering on wireless personal area network (WPAN) capabilities, pairing with the monitoring device to establish a communication session via the WPAN, and obtaining the device identifier via the communication session. 19. The method of claim 14, where providing the request to establish the communication session comprises:
providing the request to establish the communication session to the server device,
the server device to provide the request to the monitoring device, and
the monitoring device to establish the communication session; and
where receiving the indication that the communication session is established comprises:
receiving the indication that the communication session is established based on the monitoring device establishing the communication session. 20. The method of claim 14, further comprising:
obtaining, after receiving the indication that the communication session has been established, location information indicating a geographic location of the monitoring device. | 2,600 |
10,782 | 10,782 | 14,705,985 | 2,613 | An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 1. A method of displaying augmented reality, comprising:
storing a set of fingerprint data corresponding to a plurality of locations of the real world, wherein the fingerprint data uniquely identifies a location; capturing a set of data corresponding to a user's surroundings through one or more sensors of an augmented reality display system; and identifying a location of the user based at least in part on the captured set of data and the stored set of fingerprint data. 2. The method of claim 1, further comprising processing the captured set of data to modify a format of the captured data to conform with that of the fingerprint data. 3. The method of claim 1, wherein the fingerprint data comprises a color histogram of a location. 4. The method of claim 1, wherein the fingerprint data comprises received signal strength (RSS) data. 5. The method of claim 1, wherein the fingerprint data comprises GPS data. 6. The method of claim 1, wherein the fingerprint data of a location is generated by combining a set of data pertaining to the location. 7. The method of claim 1, wherein the particular location is a room within a building. 8. The method of claim 1, further comprising retrieving additional data based at least in part on the identified location of the user. 9. The method of claim 8, wherein the additional data comprises geometric map data corresponding to the identified location. 10. The method of claim 9, further comprising displaying one or more virtual objects to the user of the augmented reality system based at least in part on the geometric map of the identified location. 11. The method of claim 1, further comprising constructing a map based at least in part on the set of fingerprint data corresponding to the plurality of locations. 12. The method of claim 11, wherein each fingerprint data that identifies a location comprises a node of the constructed map. 13. 
The method of claim 12, wherein a first node is connected to a second node if the first and second node have at least one shared augmented reality device in common. 14. The method of claim 11, wherein the map is layered over a geometric map of the real world. 15. The method of claim 1, wherein the captured data comprises an image of the user's surroundings, and wherein the image is processed to generate data that is of the same format as the fingerprint data. 16. The method of claim 15, further comprising generating a color histogram by processing the image of the user's surroundings. 17. The method of claim 15, wherein the constructed map is used to find errors in the geometric map of the real world. | An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems.1. A method of displaying augmented reality, comprising:
storing a set of fingerprint data corresponding to a plurality of locations of the real world, wherein the fingerprint data uniquely identifies a location; capturing a set of data corresponding to a user's surroundings through one or more sensors of an augmented reality display system; and identifying a location of the user based at least in part on the captured set of data and the stored set of fingerprint data. 2. The method of claim 1, further comprising processing the captured set of data to modify a format of the captured data to conform with that of the fingerprint data. 3. The method of claim 1, wherein the fingerprint data comprises a color histogram of a location. 4. The method of claim 1, wherein the fingerprint data comprises received signal strength (RSS) data. 5. The method of claim 1, wherein the fingerprint data comprises GPS data. 6. The method of claim 1, wherein the fingerprint data of a location is generated by combining a set of data pertaining to the location. 7. The method of claim 1, wherein the particular location is a room within a building. 8. The method of claim 1, further comprising retrieving additional data based at least in part on the identified location of the user. 9. The method of claim 8, wherein the additional data comprises geometric map data corresponding to the identified location. 10. The method of claim 9, further comprising displaying one or more virtual objects to the user of the augmented reality system based at least in part on the geometric map of the identified location. 11. The method of claim 1, further comprising constructing a map based at least in part on the set of fingerprint data corresponding to the plurality of locations. 12. The method of claim 11, wherein each fingerprint data that identifies a location comprises a node of the constructed map. 13. 
The method of claim 12, wherein a first node is connected to a second node if the first and second node have at least one shared augmented reality device in common. 14. The method of claim 11, wherein the map is layered over a geometric map of the real world. 15. The method of claim 1, wherein the captured data comprises an image of the user's surroundings, and wherein the image is processed to generate data that is of the same format as the fingerprint data. 16. The method of claim 15, further comprising generating a color histogram by processing the image of the user's surroundings. 17. The method of claim 15, wherein the constructed map is used to find errors in the geometric map of the real world. | 2,600 |
10,783 | 10,783 | 16,139,499 | 2,698 | Provided is a monitoring system including an image obtaining apparatus and an image processing apparatus. The image obtaining apparatus includes: a camera; a beacon sensor; a processor configured to match beacon information obtained by detecting, by the beacon sensor, a beacon attached to an object existing in a monitoring region, to an image of the monitoring region captured by the camera; and a memory storing the image matched with the beacon information. | 1. An image obtaining apparatus comprising:
a camera; a beacon sensor; a processor configured to match beacon information with an image of a monitoring region captured by the camera, wherein the beacon information is obtained by detecting, by the beacon sensor, a beacon attached to an object existing in the monitoring region; and a memory configured to store the image matched with the beacon information. 2. The image obtaining apparatus of claim 1, wherein the processor is further configured to generate an event when a registered beacon is detected based on the beacon information, and transmit, to an external apparatus, an image matched with the beacon information corresponding to the registered beacon. 3. The image obtaining apparatus of claim 1, wherein the beacon information comprises information about a distance between the image obtaining apparatus and the beacon. 4. The image obtaining apparatus of claim 1, wherein the processor is further configured to transmit the beacon information to another image obtaining apparatus. 5. The image obtaining apparatus of claim 1, wherein the processor is further configured to back up at least a portion of the image stored in the memory to an external apparatus, at fixed time periods. 6. The image obtaining apparatus of claim 1, wherein the processor is further configured to back up at least a portion of the image stored in the memory to an external apparatus, when the image stored in the memory exceeds a preset storage capacity. 7. The image obtaining apparatus of claim 1, wherein the processor is further configured to control a direction of the camera such that the camera captures an image of a region where a registered beacon is located, when the detected beacon is a registered beacon. 8. The image obtaining apparatus of claim 1, wherein the processor is further configured to set a priority to the beacon and control a direction of the camera such that an image of the monitoring region is captured according to the priority. 9. 
The image obtaining apparatus of claim 1, wherein the processor is further configured to match information about the beacon, existing in a second region, with a first image which is an image of a first region and a second image which is an image of a second region in which the beacon is located, captured by the camera. 10. The image obtaining apparatus of claim 9, wherein the processor is further configured to receive an image request from an image processing apparatus, and transmit, to the image processing apparatus, the second image matched with the beacon information corresponding to condition information included in the image request. 11. The image obtaining apparatus of claim 1, wherein the processor is further configured to store an image which is not matched with the beacon information, from among captured images of the monitoring region, in an external storage apparatus. 12. An image obtaining apparatus comprising:
a camera arranged at a fixed position and configured to capture a first image of a first region; a beacon sensor configured to detect a beacon attached to an object existing in a second region by receiving a beacon signal from the beacon; a processor configured to control the camera to capture a second image of the second region in response to the beacon sensor detecting the beacon, generate beacon information based on the beacon signal, and tag the beacon with the second image based on the beacon information; and a memory configured to store the second image tagged with the beacon. 13. The image obtaining apparatus of claim 12, wherein the beacon information comprises at least one of identification information about the beacon, position information about the beacon, time information indicating a time at which the beacon is detected by the beacon sensor, identification information about the image obtaining apparatus, and identification information about a user terminal which transmits an image request corresponding to the second image. 14. The image obtaining apparatus of claim 13, wherein the second region is included in the first region, and
wherein the processor is configured to direct the camera to the second region to capture the second image by at least one of pan, tilt and zoom operations, and control the camera to take an original posture prior to being directed to the second region and resume capturing the first image when the beacon leaves the first region. 15. The image obtaining apparatus of claim 13, wherein the processor is further configured to receive the image request from the user terminal or another apparatus and search for the second image when the image request corresponds to the identification about the beacon. 16. The image obtaining apparatus of claim 12, wherein the second region comprises a plurality of second regions in which a plurality of beacons exist,
wherein the processor is configured to control the camera to capture respective second images of the second regions in response to the beacon sensor detecting the beacons according to an image capturing sequence set based on priorities among the second regions or locations of the beacons. 17. An image processing apparatus comprising:
a storage apparatus configured to store an image matched with beacon information received from an image obtaining apparatus, and store an image which is not matched with beacon information; and a processor configured to receive an image request from a user terminal, request an image corresponding to condition information included in the image request, to the image obtaining apparatus, receive the image matched with the beacon information corresponding to the condition information, from the image obtaining apparatus, and transmit, to the user terminal, the image matched with the beacon information corresponding to the condition information. 18. The image processing apparatus of claim 17, wherein the processor is further configured to provide a map comprising a place corresponding to a background of the image matched with beacon information on a display screen, and display a beacon corresponding to the condition information on the map. 19. The image processing apparatus of claim 18, wherein the processor is further configured to display a beacon corresponding to the condition information and at least one another beacon around the beacon corresponding to the condition information, on the map, and
wherein the beacon corresponding to the condition information and the at least one another beacon are distinguished by at least one of a size, a shape, and a color. 20. The image processing apparatus of claim 17, wherein the processor is further configured to receive a first image from an external storage apparatus outside the image obtaining apparatus, and receive a second image from the image obtaining apparatus,
wherein the first image is an image of a preset first region captured by the image obtaining apparatus and stored in the external storage apparatus, and wherein the second image is an image captured by the image obtaining apparatus in a direction facing a beacon existing in the first region and stored in an internal memory of the image obtaining apparatus. 21. The image processing apparatus of claim 20, wherein the processor is further configured to transmit at least one of the first image and the second image to the user terminal. 22. A user terminal, comprising:
a processor configured to transmit an image request including condition information to an image processing apparatus, and receive, from the image processing apparatus, an image matched with beacon information corresponding to the condition information, from among images matched with beacon information which are transmitted from at least one image obtaining apparatus to the image processing apparatus, wherein the processor is further configured to provide a map corresponding to a background of the image matched with the beacon information to a display, and display a tag based on the beacon information of the image matched with the beacon information on the map. 23. The user terminal of claim 22, wherein the processor is further configured to provide a list of images matched with the beacon information corresponding to the condition information to a display screen, and provide an image selected from the list to the display in a reproducible format. 24. The user terminal of claim 22, wherein the processor is further configured to receive a tag-selecting signal and provide tag information around a selected tag. | Provided is a monitoring system including an image obtaining apparatus and an image processing apparatus. The image obtaining apparatus includes: a camera; a beacon sensor; a processor configured to match beacon information obtained by detecting, by the beacon sensor, a beacon attached to an object existing in a monitoring region, to an image of the monitoring region captured by the camera; and a memory storing the image matched with the beacon information.1. An image obtaining apparatus comprising:
a camera; a beacon sensor; a processor configured to match beacon information with an image of a monitoring region captured by the camera, wherein the beacon information is obtained by detecting, by the beacon sensor, a beacon attached to an object existing in the monitoring region; and a memory configured to store the image matched with the beacon information. 2. The image obtaining apparatus of claim 1, wherein the processor is further configured to generate an event when a registered beacon is detected based on the beacon information, and transmit, to an external apparatus, an image matched with the beacon information corresponding to the registered beacon. 3. The image obtaining apparatus of claim 1, wherein the beacon information comprises information about a distance between the image obtaining apparatus and the beacon. 4. The image obtaining apparatus of claim 1, wherein the processor is further configured to transmit the beacon information to another image obtaining apparatus. 5. The image obtaining apparatus of claim 1, wherein the processor is further configured to back up at least a portion of the image stored in the memory to an external apparatus, at fixed time periods. 6. The image obtaining apparatus of claim 1, wherein the processor is further configured to back up at least a portion of the image stored in the memory to an external apparatus, when the image stored in the memory exceeds a preset storage capacity. 7. The image obtaining apparatus of claim 1, wherein the processor is further configured to control a direction of the camera such that the camera captures an image of a region where a registered beacon is located, when the detected beacon is a registered beacon. 8. The image obtaining apparatus of claim 1, wherein the processor is further configured to set a priority to the beacon and control a direction of the camera such that an image of the monitoring region is captured according to the priority. 9. 
The image obtaining apparatus of claim 1, wherein the processor is further configured to match information about the beacon, existing in a second region, with a first image which is an image of a first region and a second image which is an image of a second region in which the beacon is located, captured by the camera. 10. The image obtaining apparatus of claim 9, wherein the processor is further configured to receive an image request from an image processing apparatus, and transmit, to the image processing apparatus, the second image matched with the beacon information corresponding to condition information included in the image request. 11. The image obtaining apparatus of claim 1, wherein the processor is further configured to store an image which is not matched with the beacon information, from among captured images of the monitoring region, in an external storage apparatus. 12. An image obtaining apparatus comprising:
a camera arranged at a fixed position and configured to capture a first image of a first region; a beacon sensor configured to detect a beacon attached to an object existing in a second region by receiving a beacon signal from the beacon; a processor configured to control the camera to capture a second image of a second region in response to the beacon sensor detecting the beacon, generate beacon information based on the beacon signal, and tag the beacon with the second image based on the beacon information; and a memory configured to store the second image tagged with the beacon. 13. The image obtaining apparatus of claim 12, wherein the beacon information comprises at least one of identification information about the beacon, position information about the beacon, time information indicating a time at which the beacon is detected by the beacon sensor, identification information about the image obtaining apparatus, and identification information about a user terminal which transmits an image request corresponding to the second image. 14. The image obtaining apparatus of claim 13, wherein the second region is included in the first region, and
wherein the processor is configured to direct the camera to the second region to capture the second image by at least one of pan, tilt and zoom operations, and control the camera to take an original posture prior to being directed to the second region and resume capturing the first image when the beacon leaves the first region. 15. The image obtaining apparatus of claim 13, wherein the processor is further configured to receive the image request from the user terminal or another apparatus and search for the second image when the image request corresponds to the identification information about the beacon. 16. The image obtaining apparatus of claim 12, wherein the second region comprises a plurality of second regions in which a plurality of beacons exist,
wherein the processor is configured to control the camera to capture respective second images of the second regions in response to the beacon sensor detecting the beacons according to an image capturing sequence set based on priorities among the second regions or locations of the beacons. 17. An image processing apparatus comprising:
a storage apparatus configured to store an image matched with beacon information received from an image obtaining apparatus, and store an image which is not matched with beacon information; and a processor configured to receive an image request from a user terminal, request an image corresponding to condition information included in the image request, to the image obtaining apparatus, receive the image matched with the beacon information corresponding to the condition information, from the image obtaining apparatus, and transmit, to the user terminal, the image matched with the beacon information corresponding to the condition information. 18. The image processing apparatus of claim 17, wherein the processor is further configured to provide a map comprising a place corresponding to a background of the image matched with beacon information on a display screen, and display a beacon corresponding to the condition information on the map. 19. The image processing apparatus of claim 18, wherein the processor is further configured to display a beacon corresponding to the condition information and at least one another beacon around the beacon corresponding to the condition information, on the map, and
wherein the beacon corresponding to the condition information and the at least one another beacon are distinguished by at least one of a size, a shape, and a color. 20. The image processing apparatus of claim 17, wherein the processor is further configured to receive a first image from an external storage apparatus outside the image obtaining apparatus, and receive a second image from the image obtaining apparatus,
wherein the first image is an image of a preset first region captured by the image obtaining apparatus and stored in the external storage apparatus, and wherein the second image is an image captured by the image obtaining apparatus in a direction facing a beacon existing in the first region and stored in an internal memory of the image obtaining apparatus. 21. The image processing apparatus of claim 20, wherein the processor is further configured to transmit at least one of the first image and the second image to the user terminal. 22. A user terminal, comprising:
a processor configured to transmit an image request including condition information to an image processing apparatus, and receive, from the image processing apparatus, an image matched with beacon information corresponding to the condition information, from among images matched with beacon information which are transmitted from at least one image obtaining apparatus to the image processing apparatus, wherein the processor is further configured to provide a map corresponding to a background of the image matched with the beacon information to a display, and display a tag based on the beacon information of the image matched with the beacon information on the map. 23. The user terminal of claim 22, wherein the processor is further configured to provide a list of images matched with the beacon information corresponding to the condition information to a display screen, and provide an image selected from the list to the display in a reproducible format. 24. The user terminal of claim 22, wherein the processor is further configured to receive a tag-selecting signal and provide tag information around a selected tag. | 2,600 |
10,784 | 10,784 | 16,295,865 | 2,612 | Some embodiments provide a non-transitory machine-readable medium that stores a program. The program sends a second computing system a spatial filter and a first query for a first set of geo-enriched data associated with a spatial visualization. The program further sends the second computing system the spatial filter and a second query for a second set of geo-enriched data associated with a non-spatial visualization. The program also receives, from the second computing system, a subset of the first set of geo-enriched data. The program further receives, from the second computing system, a subset of the second set of geo-enriched data. The program also generates the spatial visualization to include the subset of the first set of geo-enriched data. The program further generates the non-spatial visualization to include the subset of the second set of geo-enriched data. | 1. A non-transitory machine-readable medium storing a program executable by at least one processing unit of a first computing system, the program comprising sets of instructions for:
sending a second computing system a spatial filter and a first query for a set of geo-enriched data associated with a spatial visualization, wherein each geo-enriched data in the set of geo-enriched data comprises spatial data, location data, and non-location data, wherein the spatial filter specifies a set of geographical regions; sending the second computing system the spatial filter and a second query for the set of geo-enriched data associated with a non-spatial visualization; receiving, from the second computing system, a first subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the first subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; receiving, from the second computing system, a second subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the second subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the first subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the second subset of the set of geo-enriched data. 2. The non-transitory machine-readable medium of claim 1, wherein the program further comprises sets of instructions for displaying the spatial visualization and the non-spatial visualization on a display of the first computing system. 3. The non-transitory machine-readable medium of claim 1, wherein the spatial filter specifies a geometry of a geographical element in the spatial visualization. 4. The non-transitory machine-readable medium of claim 1, wherein the spatial visualization includes a tool for specifying a geometry in the spatial visualization, wherein the spatial filter specifies the geometry in the spatial visualization defined via the tool. 5. 
The non-transitory machine-readable medium of claim 1, wherein the spatial visualization includes a set of geographical elements, wherein the spatial filter specifies a distance filter that filters for geo-enriched data that is within a defined distance to the set of geographical elements in the spatial visualization. 6. The non-transitory machine-readable medium of claim 1, wherein the set of geographical regions is a first set of geographical regions, wherein the program further comprises sets of instructions for:
receiving a modification to the spatial filter, wherein the modified spatial filter specifies a second set of geographical regions; in response to the modification: sending the second computing system the modified spatial filter and the first query for the set of geo-enriched data associated with the spatial visualization; sending the second computing system the modified spatial filter and the second query for the set of geo-enriched data associated with the non-spatial visualization; receiving, from the second computing system, a third subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the third subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; receiving, from the second computing system, a fourth subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the fourth subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the third subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the fourth subset of the set of geo-enriched data. 7. The non-transitory machine-readable medium of claim 1, wherein the spatial filter is a first spatial filter, wherein sending the second computing system the spatial filter and the first query comprises sending the second computing system the first spatial filter, a second spatial filter, and the first query, wherein sending the second computing system the spatial filter and the second query comprises sending the second computing system the first spatial filter, the second spatial filter, and the second query. 8. A method performed by a first computing system, the method comprising:
sending a second computing system a spatial filter and a first query for a set of geo-enriched data associated with a spatial visualization, wherein each geo-enriched data in the set of geo-enriched data comprises spatial data, location data, and non-location data, wherein the spatial filter specifies a set of geographical regions; sending the second computing system the spatial filter and a second query for the set of geo-enriched data associated with a non-spatial visualization; receiving, from the second computing system, a first subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the first subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; receiving, from the second computing system, a second subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the second subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the first subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the second subset of the set of geo-enriched data. 9. The method of claim 8 further comprising displaying the spatial visualization and the non-spatial visualization on a display of the first computing system. 10. The method of claim 8, wherein the spatial filter specifies a geometry of a geographical element in the spatial visualization. 11. The method of claim 8, wherein the spatial visualization includes a tool for specifying a geometry in the spatial visualization, wherein the spatial filter specifies the geometry in the spatial visualization defined via the tool. 12. 
The method of claim 8, wherein the spatial visualization includes a set of geographical elements, wherein the spatial filter specifies a distance filter that filters for geo-enriched data that is within a defined distance to the set of geographical elements in the spatial visualization. 13. The method of claim 8, wherein the method further comprises:
receiving a modification to the spatial filter, wherein the modified spatial filter specifies a second set of geographical regions; in response to the modification: sending the second computing system the modified spatial filter and the first query for the set of geo-enriched data associated with the spatial visualization; sending the second computing system the modified spatial filter and the second query for the set of geo-enriched data associated with the non-spatial visualization; receiving, from the second computing system, a third subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the third subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; receiving, from the second computing system, a fourth subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the fourth subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the third subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the fourth subset of the set of geo-enriched data. 14. The method of claim 8, wherein the spatial filter is a first spatial filter, wherein sending the second computing system the spatial filter and the first query comprises sending the second computing system the first spatial filter, a second spatial filter, and the first query, wherein sending the second computing system the spatial filter and the second query comprises sending the second computing system the first spatial filter, the second spatial filter, and the second query. 15. A system comprising:
a set of processing units; and a non-transitory computer-readable medium storing instructions that when executed by at least one processing unit in the set of processing units cause the at least one processing unit to: send a computing system a spatial filter and a first query for a set of geo-enriched data associated with a spatial visualization, wherein each geo-enriched data in the set of geo-enriched data comprises spatial data, location data, and non-location data, wherein the spatial filter specifies a set of geographical regions; send the computing system the spatial filter and a second query for the set of geo-enriched data associated with a non-spatial visualization; receive, from the computing system, a first subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the first subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; receive, from the computing system, a second subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the second subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; generate the spatial visualization to include the spatial data and the location data associated with the first subset of the set of geo-enriched data; and generate the non-spatial visualization to include the non-location data associated with the second subset of the set of geo-enriched data. 16. The system of claim 15, wherein the instructions further cause the at least one processing unit to display the spatial visualization and the non-spatial visualization on a display of the system. 17. The system of claim 15, wherein the spatial filter comprises a geometry of a geographical element in the spatial visualization. 18. 
The system of claim 15, wherein the spatial visualization includes a tool for specifying a geometry in the spatial visualization, wherein the spatial filter comprises the geometry in the spatial visualization defined via the tool. 19. The system of claim 15, wherein the spatial visualization includes a set of geographical elements, wherein the spatial filter specifies a distance filter that filters for geo-enriched data that is within a defined distance to the set of geographical elements in the spatial visualization. 20. The system of claim 15, wherein the instructions further cause the at least one processing unit to:
receive a modification to the spatial filter, wherein the modified spatial filter specifies a second set of geographical regions; in response to the modification: send the computing system the modified spatial filter and the first query for the set of geo-enriched data associated with the spatial visualization; send the computing system the modified spatial filter and the second query for the set of geo-enriched data associated with the non-spatial visualization; receive, from the computing system, a third subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the third subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; receive, from the computing system, a fourth subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the fourth subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; generate the spatial visualization to include the spatial data and the location data associated with the third subset of the set of geo-enriched data; and generate the non-spatial visualization to include the non-location data associated with the fourth subset of the set of geo-enriched data. | Some embodiments provide a non-transitory machine-readable medium that stores a program. The program sends a second computing system a spatial filter and a first query for a first set of geo-enriched data associated with a spatial visualization. The program further sends the second computing system the spatial filter and a second query for a second set of geo-enriched data associated with a non-spatial visualization. The program also receives, from the second computing system, a subset of the first set of geo-enriched data. The program further receives, from the second computing system, a subset of the second set of geo-enriched data. 
The program also generates the spatial visualization to include the subset of the first set of geo-enriched data. The program further generates the non-spatial visualization to include the subset of the second set of geo-enriched data.1. A non-transitory machine-readable medium storing a program executable by at least one processing unit of a first computing system, the program comprising sets of instructions for:
sending a second computing system a spatial filter and a first query for a set of geo-enriched data associated with a spatial visualization, wherein each geo-enriched data in the set of geo-enriched data comprises spatial data, location data, and non-location data, wherein the spatial filter specifies a set of geographical regions; sending the second computing system the spatial filter and a second query for the set of geo-enriched data associated with a non-spatial visualization; receiving, from the second computing system, a first subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the first subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; receiving, from the second computing system, a second subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the second subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the first subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the second subset of the set of geo-enriched data. 2. The non-transitory machine-readable medium of claim 1, wherein the program further comprises sets of instructions for displaying the spatial visualization and the non-spatial visualization on a display of the first computing system. 3. The non-transitory machine-readable medium of claim 1, wherein the spatial filter specifies a geometry of a geographical element in the spatial visualization. 4. The non-transitory machine-readable medium of claim 1, wherein the spatial visualization includes a tool for specifying a geometry in the spatial visualization, wherein the spatial filter specifies the geometry in the spatial visualization defined via the tool. 5. 
The non-transitory machine-readable medium of claim 1, wherein the spatial visualization includes a set of geographical elements, wherein the spatial filter specifies a distance filter that filters for geo-enriched data that is within a defined distance to the set of geographical elements in the spatial visualization. 6. The non-transitory machine-readable medium of claim 1, wherein the set of geographical regions is a first set of geographical regions, wherein the program further comprises sets of instructions for:
receiving a modification to the spatial filter, wherein the modified spatial filter specifies a second set of geographical regions; in response to the modification: sending the second computing system the modified spatial filter and the first query for the set of geo-enriched data associated with the spatial visualization; sending the second computing system the modified spatial filter and the second query for the set of geo-enriched data associated with the non-spatial visualization; receiving, from the second computing system, a third subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the third subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; receiving, from the second computing system, a fourth subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the fourth subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the third subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the fourth subset of the set of geo-enriched data. 7. The non-transitory machine-readable medium of claim 1, wherein the spatial filter is a first spatial filter, wherein sending the second computing system the spatial filter and the first query comprises sending the second computing system the first spatial filter, a second spatial filter, and the first query, wherein sending the second computing system the spatial filter and the second query comprises sending the second computing system the first spatial filter, the second spatial filter, and the second query. 8. A method performed by a first computing system, the method comprising:
sending a second computing system a spatial filter and a first query for a set of geo-enriched data associated with a spatial visualization, wherein each geo-enriched data in the set of geo-enriched data comprises spatial data, location data, and non-location data, wherein the spatial filter specifies a set of geographical regions; sending the second computing system the spatial filter and a second query for the set of geo-enriched data associated with a non-spatial visualization; receiving, from the second computing system, a first subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the first subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; receiving, from the second computing system, a second subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the second subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the first subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the second subset of the set of geo-enriched data. 9. The method of claim 8 further comprising displaying the spatial visualization and the non-spatial visualization on a display of the first computing system. 10. The method of claim 8, wherein the spatial filter specifies a geometry of a geographical element in the spatial visualization. 11. The method of claim 8, wherein the spatial visualization includes a tool for specifying a geometry in the spatial visualization, wherein the spatial filter specifies the geometry in the spatial visualization defined via the tool. 12. 
The method of claim 8, wherein the spatial visualization includes a set of geographical elements, wherein the spatial filter specifies a distance filter that filters for geo-enriched data that is within a defined distance to the set of geographical elements in the spatial visualization. 13. The method of claim 8, wherein the method further comprises:
receiving a modification to the spatial filter, wherein the modified spatial filter specifies a second set of geographical regions; in response to the modification: sending the second computing system the modified spatial filter and the first query for the set of geo-enriched data associated with the spatial visualization; sending the second computing system the modified spatial filter and the second query for the set of geo-enriched data associated with the non-spatial visualization; receiving, from the second computing system, a third subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the third subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; receiving, from the second computing system, a fourth subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the fourth subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; generating the spatial visualization to include the spatial data and the location data associated with the third subset of the set of geo-enriched data; and generating the non-spatial visualization to include the non-location data associated with the fourth subset of the set of geo-enriched data. 14. The method of claim 8, wherein the spatial filter is a first spatial filter, wherein sending the second computing system the spatial filter and the first query comprises sending the second computing system the first spatial filter, a second spatial filter, and the first query, wherein sending the second computing system the spatial filter and the second query comprises sending the second computing system the first spatial filter, the second spatial filter, and the second query. 15. A system comprising:
a set of processing units; and a non-transitory computer-readable medium storing instructions that when executed by at least one processing unit in the set of processing units cause the at least one processing unit to: send a computing system a spatial filter and a first query for a set of geo-enriched data associated with a spatial visualization, wherein each geo-enriched data in the set of geo-enriched data comprises spatial data, location data, and non-location data, wherein the spatial filter specifies a set of geographical regions; send the computing system the spatial filter and a second query for the set of geo-enriched data associated with a non-spatial visualization; receive, from the computing system, a first subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the first subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; receive, from the computing system, a second subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the second subset of the set of geo-enriched data is within the set of geographical regions of the spatial filter; generate the spatial visualization to include the spatial data and the location data associated with the first subset of the set of geo-enriched data; and generate the non-spatial visualization to include the non-location data associated with the second subset of the set of geo-enriched data. 16. The system of claim 15, wherein the instructions further cause the at least one processing unit to display the spatial visualization and the non-spatial visualization on a display of the system. 17. The system of claim 15, wherein the spatial filter comprises a geometry of a geographical element in the spatial visualization. 18. 
The system of claim 15, wherein the spatial visualization includes a tool for specifying a geometry in the spatial visualization, wherein the spatial filter comprises the geometry in the spatial visualization defined via the tool. 19. The system of claim 15, wherein the spatial visualization includes a set of geographical elements, wherein the spatial filter specifies a distance filter that filters for geo-enriched data that is within a defined distance to the set of geographical elements in the spatial visualization. 20. The system of claim 15, wherein the instructions further cause the at least one processing unit to:
receive a modification to the spatial filter, wherein the modified spatial filter specifies a second set of geographical regions; in response to the modification: send the second computing system the modified spatial filter and the first query for the set of geo-enriched data associated with the spatial visualization; send the second computing system the modified spatial filter and the second query for the set of geo-enriched data associated with the non-spatial visualization; receive, from the second computing system, a third subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the third subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; receive, from the second computing system, a fourth subset of the set of geo-enriched data, wherein the spatial data associated with each geo-enriched data in the fourth subset of the set of geo-enriched data is within the second set of geographical regions of the modified spatial filter; generate the spatial visualization to include the spatial data and the location data associated with the third subset of the set of geo-enriched data; and generate the non-spatial visualization to include the non-location data associated with the fourth subset of the set of geo-enriched data. | 2,600 |
10,785 | 10,785 | 16,180,127 | 2,636 | A method of loading a fiber optic transport system includes detecting optical power in a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; measuring optical power in the frequency sub-band; and adjusting optical power of the at least one control channel based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. | 1. A method of loading a fiber optic transport system, the method comprising:
detecting optical power in a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; measuring optical power in the frequency sub-band; and adjusting optical power of the at least one control channel based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 2. The method of claim 1, wherein the frequency sub-band further includes dummy channels which are added to the frequency sub-band via a wavelength selective switch. 3. The method of claim 2, wherein the dummy channels are provided via a separate source from the at least one control channel. 4. The method of claim 2, wherein, during normal operation, all channels including a combination of the data-bearing channels and the dummy channels are present at associated per-channel powers and the at least one control channel is set at a nominal per-channel power level such that the frequency sub-band has a total target power, and
wherein, subsequent to a transient detected by the measuring, the adjusting sets the at least one control channel based on a difference of the measured optical power and the total target power. 5. The method of claim 2, wherein changes with respect to the dummy channels are on the order of 50 ms in time and changes with respect to the at least one control channel are on the order of 100 μs. 6. The method of claim 2, wherein the at least one control channel is added to the frequency sub-band via a coupling mechanism after and separate from the wavelength selective switch used to add the dummy channels. 7. The method of claim 2, wherein, as one or more channels of the data-bearing channels and the dummy channels appear or disappear, the power of the at least one control channel is decreased or increased, respectively. 8. The method of claim 1, wherein each control channel of the at least one control channel includes a pair of signals that are cross-polarized and each of the pair of signals is at a different frequency from one another, and the respective pair of signals at the different frequencies have their optical power controlled together. 9. The method of claim 1, wherein the at least one control channel is added to the frequency sub-band via a coupling mechanism after and separate from a wavelength selective switch. 10. The method of claim 1, wherein lasers forming the at least one control channel are dithered to suppress stimulated Brillouin scattering (SBS). 11. A system for loading a fiber optic transport system, the system comprising:
a plurality of optical sources configured to add at least one control channel to a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; an optical detector configured to measure optical power in the frequency sub-band; and an optical power control unit configured to adjust optical power of the at least one control channel based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 12. The system of claim 11, wherein the frequency sub-band further includes dummy channels which are added to the frequency sub-band via a wavelength selective switch. 13. The system of claim 12, wherein the dummy channels are provided via a separate source from the at least one control channel. 14. The system of claim 12, wherein, during normal operation, all channels including a combination of the data-bearing channels and the dummy channels are present at associated per-channel powers and the at least one control channel is set at a nominal per-channel power level such that the frequency sub-band has a total target power, and
wherein, subsequent to a transient detected by the measuring, the optical power control unit sets the at least one control channel based on a difference of the measured optical power and the total target power. 15. The system of claim 12, wherein, as one or more channels of the data-bearing channels and the dummy channels appear or disappear, the power of the at least one control channel is decreased or increased, respectively. 16. The system of claim 11, wherein each control channel of the at least one control channel includes a pair of signals that are cross-polarized and each of the pair of signals is at a different frequency from one another, and the respective pair of signals at the different frequencies have their optical power controlled together. 17. The system of claim 11, wherein the at least one control channel is added to the frequency sub-band via a coupling mechanism after and separate from a wavelength selective switch. 18. An optical power control unit for control of loading in a fiber optic transport system, the optical power control unit comprising:
a connection to a plurality of optical sources configured to add at least one control channel to a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; a connection to an optical detector; and a processor configured to
obtain measured optical power in the frequency sub-band, and
cause adjustment of optical power of the plurality of optical sources based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 19. The optical power control unit of claim 18, wherein the frequency sub-band further includes dummy channels which are added to the frequency sub-band via a wavelength selective switch, and wherein the dummy channels are provided via a separate source from the at least one control channel. 20. The optical power control unit of claim 18, wherein, during normal operation, all channels including a combination of the data-bearing channels and dummy channels are present at associated per-channel powers and the at least one control channel is set at a nominal per-channel power level such that the frequency sub-band has a total target power, and
wherein, subsequent to a transient detected by the measuring, the adjustment sets the at least one control channel based on a difference of the measured optical power and the total target power. | A method of loading a fiber optic transport system includes detecting optical power in a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; measuring optical power in the frequency sub-band; and adjusting optical power of the at least one control channel based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 1. A method of loading a fiber optic transport system, the method comprising:
detecting optical power in a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; measuring optical power in the frequency sub-band; and adjusting optical power of the at least one control channel based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 2. The method of claim 1, wherein the frequency sub-band further includes dummy channels which are added to the frequency sub-band via a wavelength selective switch. 3. The method of claim 2, wherein the dummy channels are provided via a separate source from the at least one control channel. 4. The method of claim 2, wherein, during normal operation, all channels including a combination of the data-bearing channels and the dummy channels are present at associated per-channel powers and the at least one control channel is set at a nominal per-channel power level such that the frequency sub-band has a total target power, and
wherein, subsequent to a transient detected by the measuring, the adjusting sets the at least one control channel based on a difference of the measured optical power and the total target power. 5. The method of claim 2, wherein changes with respect to the dummy channels are on the order of 50 ms in time and changes with respect to the at least one control channel are on the order of 100 μs. 6. The method of claim 2, wherein the at least one control channel is added to the frequency sub-band via a coupling mechanism after and separate from the wavelength selective switch used to add the dummy channels. 7. The method of claim 2, wherein, as one or more channels of the data-bearing channels and the dummy channels appear or disappear, the power of the at least one control channel is decreased or increased, respectively. 8. The method of claim 1, wherein each control channel of the at least one control channel includes a pair of signals that are cross-polarized and each of the pair of signals is at a different frequency from one another, and the respective pair of signals at the different frequencies have their optical power controlled together. 9. The method of claim 1, wherein the at least one control channel is added to the frequency sub-band via a coupling mechanism after and separate from a wavelength selective switch. 10. The method of claim 1, wherein lasers forming the at least one control channel are dithered to suppress stimulated Brillouin scattering (SBS). 11. A system for loading a fiber optic transport system, the system comprising:
a plurality of optical sources configured to add at least one control channel to a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; an optical detector configured to measure optical power in the frequency sub-band; and an optical power control unit configured to adjust optical power of the at least one control channel based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 12. The system of claim 11, wherein the frequency sub-band further includes dummy channels which are added to the frequency sub-band via a wavelength selective switch. 13. The system of claim 12, wherein the dummy channels are provided via a separate source from the at least one control channel. 14. The system of claim 12, wherein, during normal operation, all channels including a combination of the data-bearing channels and the dummy channels are present at associated per-channel powers and the at least one control channel is set at a nominal per-channel power level such that the frequency sub-band has a total target power, and
wherein, subsequent to a transient detected by the measuring, the optical power control unit sets the at least one control channel based on a difference of the measured optical power and the total target power. 15. The system of claim 12, wherein, as one or more channels of the data-bearing channels and the dummy channels appear or disappear, the power of the at least one control channel is decreased or increased, respectively. 16. The system of claim 11, wherein each control channel of the at least one control channel includes a pair of signals that are cross-polarized and each of the pair of signals is at a different frequency from one another, and the respective pair of signals at the different frequencies have their optical power controlled together. 17. The system of claim 11, wherein the at least one control channel is added to the frequency sub-band via a coupling mechanism after and separate from a wavelength selective switch. 18. An optical power control unit for control of loading in a fiber optic transport system, the optical power control unit comprising:
a connection to a plurality of optical sources configured to add at least one control channel to a frequency sub-band of optical spectrum, wherein the frequency sub-band includes data-bearing channels and at least one control channel; a connection to an optical detector; and a processor configured to
obtain measured optical power in the frequency sub-band, and
cause adjustment of optical power of the plurality of optical sources based on the measured optical power, wherein the optical power is adjusted through one of changing a current of control channel lasers and controlling a set of fast optical attenuators. 19. The optical power control unit of claim 18, wherein the frequency sub-band further includes dummy channels which are added to the frequency sub-band via a wavelength selective switch, and wherein the dummy channels are provided via a separate source from the at least one control channel. 20. The optical power control unit of claim 18, wherein, during normal operation, all channels including a combination of the data-bearing channels and dummy channels are present at associated per-channel powers and the at least one control channel is set at a nominal per-channel power level such that the frequency sub-band has a total target power, and
wherein, subsequent to a transient detected by the measuring, the adjustment sets the at least one control channel based on a difference of the measured optical power and the total target power. | 2,600 |
10,786 | 10,786 | 14,633,524 | 2,674 | One embodiment provides a method, including, but not limited to: obtaining, from a device physically located on a user, an input indicating the user is speaking; the input being related to a movement of the user; and activating, using a processor, voice processing. Other aspects are described and claimed herein. | 1. A method, comprising:
obtaining, from a device physically located on a user, an input indicating the user is speaking; the input being related to a movement of the user, wherein the movement is caused by the user speaking; and activating, using a processor, voice processing. 2. The method of claim 1, wherein the input comprises data derived from electromyography. 3. The method of claim 1, wherein the input comprises data derived from vibration. 4. The method of claim 1, wherein the activating comprises sending, to a second device, an instruction to activate voice processing. 5. The method of claim 4, further comprising transmitting audio data to the second device. 6. The method of claim 1, wherein the obtaining comprises receiving the input from the device. 7. The method of claim 1, wherein the device comprises an information handling device and the obtaining comprises detecting the input using the information handling device. 8. The method of claim 1, further comprising identifying the user as a user associated with the device. 9. The method of claim 8, wherein the activating comprises activating voice processing based upon the user being identified as associated with the device. 10. The method of claim 1, further comprising receiving audio data. 11. An apparatus, comprising:
a processor; a memory device that stores instructions executable by the processor to: obtain an input indicating a user is speaking; the input being related to a movement of the user, wherein the movement is caused by the user speaking; and activate voice processing. 12. The apparatus of claim 11, wherein the input comprises data derived from electromyography. 13. The apparatus of claim 11, wherein the input comprises data derived from vibration. 14. The apparatus of claim 11, wherein to activate comprises sending, to a second device, an instruction to activate voice processing. 15. The apparatus of claim 14, wherein the instructions are further executable by the processor to transmit audio data to the second device. 16. The apparatus of claim 11, wherein to obtain comprises receiving the input from the device. 17. The apparatus of claim 11, wherein the device comprises an information handling device and to obtain comprises detecting the input using the information handling device. 18. The apparatus of claim 11, wherein the instructions are further executable by the processor to identify the user as a user associated with the device. 19. The apparatus of claim 18, wherein to activate comprises activating voice processing based upon the user being identified as associated with the device. 20. A product, comprising:
a storage device having processor executable code stored therewith, the code being executable by the processor and comprising: code that obtains, from a device physically located on a user, an input indicating the user is speaking; the input being related to a movement of the user, wherein the movement is caused by the user speaking; and code that activates voice processing. | One embodiment provides a method, including, but not limited to: obtaining, from a device physically located on a user, an input indicating the user is speaking; the input being related to a movement of the user; and activating, using a processor, voice processing. Other aspects are described and claimed herein. 1. A method, comprising:
obtaining, from a device physically located on a user, an input indicating the user is speaking; the input being related to a movement of the user, wherein the movement is caused by the user speaking; and activating, using a processor, voice processing. 2. The method of claim 1, wherein the input comprises data derived from electromyography. 3. The method of claim 1, wherein the input comprises data derived from vibration. 4. The method of claim 1, wherein the activating comprises sending, to a second device, an instruction to activate voice processing. 5. The method of claim 4, further comprising transmitting audio data to the second device. 6. The method of claim 1, wherein the obtaining comprises receiving the input from the device. 7. The method of claim 1, wherein the device comprises an information handling device and the obtaining comprises detecting the input using the information handling device. 8. The method of claim 1, further comprising identifying the user as a user associated with the device. 9. The method of claim 8, wherein the activating comprises activating voice processing based upon the user being identified as associated with the device. 10. The method of claim 1, further comprising receiving audio data. 11. An apparatus, comprising:
a processor; a memory device that stores instructions executable by the processor to: obtain an input indicating a user is speaking; the input being related to a movement of the user, wherein the movement is caused by the user speaking; and activate voice processing. 12. The apparatus of claim 11, wherein the input comprises data derived from electromyography. 13. The apparatus of claim 11, wherein the input comprises data derived from vibration. 14. The apparatus of claim 11, wherein to activate comprises sending, to a second device, an instruction to activate voice processing. 15. The apparatus of claim 14, wherein the instructions are further executable by the processor to transmit audio data to the second device. 16. The apparatus of claim 11, wherein to obtain comprises receiving the input from the device. 17. The apparatus of claim 11, wherein the device comprises an information handling device and to obtain comprises detecting the input using the information handling device. 18. The apparatus of claim 11, wherein the instructions are further executable by the processor to identify the user as a user associated with the device. 19. The apparatus of claim 18, wherein to activate comprises activating voice processing based upon the user being identified as associated with the device. 20. A product, comprising:
a storage device having processor executable code stored therewith, the code being executable by the processor and comprising: code that obtains, from a device physically located on a user, an input indicating the user is speaking; the input being related to a movement of the user, wherein the movement is caused by the user speaking; and code that activates voice processing. | 2,600 |
10,787 | 10,787 | 15,987,991 | 2,684 | An RFID system includes an antenna of a reader/writer and an antenna of an RFID tag. Transmission and reception of a high-frequency signal of a UHF band is performed between the antenna of the reader/writer and the antenna of the RFID tag that are arranged so as to be adjacent to each other. A loop antenna including a loop conductor is used as the antenna of the reader/writer, and coil antennas including a plurality of laminated coil conductors are used as the antenna of an RFID tag. In addition, the conductor width of the loop conductor in the loop antenna is greater than the conductor widths of the coil conductors in the coil antennas. | 1. (canceled) 2. An RFID tag attached to a surface of an article comprising:
an RFIC chip disposed on a power feeding substrate; and a coil antenna provided in the power feeding substrate that includes a plurality of laminated dielectric layers; wherein the RFIC chip is provided closer to the surface of the article than the coil antenna. 3. The RFID tag according to claim 2, wherein
the coil antenna is defined by a three-dimensional-shaped coil antenna including a plurality of single loop coil conductors provided in or on different layers of the plurality of laminated dielectric layers; the plurality of single loop coil conductors includes a first opening and a second opening defined in the different layers; and the second opening is provided closer to the surface of the article than the first opening, and the first opening is greater in size than the second opening. 4. An RFID system comprising:
a handheld reader/writer including an antenna; and an RFID tag including an antenna attached to an article; wherein transmission and/or reception of a high-frequency signal in a UHF band or an SHF band is performed between the antenna of the handheld reader/writer and the antenna of the RFID tag; the antenna of the handheld reader/writer is defined by an approximately one-turn loop antenna; and the RFID tag includes:
an RFIC chip disposed on a power feeding substrate; and
a coil antenna provided in the power feeding substrate that includes a plurality of laminated dielectric layers; wherein
the RFIC chip is provided closer to a surface of the article than the coil antenna. 5. The RFID system according to claim 4, wherein
the coil antenna of the RFID tag is defined by a three-dimensional-shaped coil antenna including a plurality of single loop coil conductors provided in or on different layers of the plurality of laminated dielectric layers; the plurality of single loop coil conductors includes a first opening and a second opening defined in the different layers; and the second opening is provided closer to the surface of the article than the first opening, and the first opening is greater in size than the second opening. 6. The RFID system according to claim 5, wherein
the one-turn loop antenna includes a loop conductor; and a conductor width of the loop conductor is greater than a conductor width of the single loop coil conductors. 7. The RFID system according to claim 4, wherein an area occupied by the one-turn loop antenna of the handheld reader/writer is 1 to 6 times as large as an area occupied by the coil antenna of the RFID tag, in planar view. 8. The RFID system according to claim 4, wherein
the power feeding substrate includes a ceramic laminated body; and each of the plurality of laminated dielectric layers is a ceramic dielectric layer. 9. The RFID system according to claim 4, wherein the handheld reader/writer includes a gripper connected to the one-turn loop antenna. 10. The RFID system according to claim 4, wherein the one-turn loop antenna is disposed within a plane. | An RFID system includes an antenna of a reader/writer and an antenna of an RFID tag. Transmission and reception of a high-frequency signal of a UHF band is performed between the antenna of the reader/writer and the antenna of the RFID tag that are arranged so as to be adjacent to each other. A loop antenna including a loop conductor is used as the antenna of the reader/writer, and coil antennas including a plurality of laminated coil conductors are used as the antenna of an RFID tag. In addition, the conductor width of the loop conductor in the loop antenna is greater than the conductor widths of the coil conductors in the coil antennas. 1. (canceled) 2. An RFID tag attached to a surface of an article comprising:
an RFIC chip disposed on a power feeding substrate; and a coil antenna provided in the power feeding substrate that includes a plurality of laminated dielectric layers; wherein the RFIC chip is provided closer to the surface of the article than the coil antenna. 3. The RFID tag according to claim 2, wherein
the coil antenna is defined by a three-dimensional-shaped coil antenna including a plurality of single loop coil conductors provided in or on different layers of the plurality of laminated dielectric layers; the plurality of single loop coil conductors includes a first opening and a second opening defined in the different layers; and the second opening is provided closer to the surface of the article than the first opening, and the first opening is greater in size than the second opening. 4. An RFID system comprising:
a handheld reader/writer including an antenna; and an RFID tag including an antenna attached to an article; wherein transmission and/or reception of a high-frequency signal in a UHF band or an SHF band is performed between the antenna of the handheld reader/writer and the antenna of the RFID tag; the antenna of the handheld reader/writer is defined by an approximately one-turn loop antenna; and the RFID tag includes:
an RFIC chip disposed on a power feeding substrate; and
a coil antenna provided in the power feeding substrate that includes a plurality of laminated dielectric layers; wherein
the RFIC chip is provided closer to a surface of the article than the coil antenna. 5. The RFID system according to claim 4, wherein
the coil antenna of the RFID tag is defined by a three-dimensional-shaped coil antenna including a plurality of single loop coil conductors provided in or on different layers of the plurality of laminated dielectric layers; the plurality of single loop coil conductors includes a first opening and a second opening defined in the different layers; and the second opening is provided closer to the surface of the article than the first opening, and the first opening is greater in size than the second opening. 6. The RFID system according to claim 5, wherein
the one-turn loop antenna includes a loop conductor; and a conductor width of the loop conductor is greater than a conductor width of the single loop coil conductors. 7. The RFID system according to claim 4, wherein an area occupied by the one-turn loop antenna of the handheld reader/writer is 1 to 6 times as large as an area occupied by the coil antenna of the RFID tag, in planar view. 8. The RFID system according to claim 4, wherein
the power feeding substrate includes a ceramic laminated body; and each of the plurality of laminated dielectric layers is a ceramic dielectric layer. 9. The RFID system according to claim 4, wherein the handheld reader/writer includes a gripper connected to the one-turn loop antenna. 10. The RFID system according to claim 4, wherein the one-turn loop antenna is disposed within a plane. | 2,600 |
10,788 | 10,788 | 15,663,588 | 2,625 | Acoustic touch detection (touch sensing) system architectures and methods can be used to detect an object touching a surface. Position of an object touching a surface can be determined using time-of-flight (TOF) bounding box techniques, or acoustic image reconstruction techniques, for example. Acoustic touch sensing can utilize transducers, such as piezoelectric transducers, to transmit ultrasonic waves along a surface and/or through the thickness of an electronic device. Location of the object can be determined, for example, based on the amount of time elapsing between the transmission of the wave and the detection of the reflected wave. An object in contact with the surface can interact with the transmitted wave causing attenuation, redirection and/or reflection of at least a portion of the transmitted wave. Portions of the transmitted wave energy after interaction with the object can be measured to determine the touch location of the object on the surface of the device. | 1. An acoustic sensing system, comprising:
a surface; a plurality of ultrasonic transceivers coupled to edges of the surface; and a processor coupled to the plurality of ultrasonic transceivers and configured to:
determine a first time of flight between a first ultrasonic wave generated by a first transceiver of the plurality of transceivers and a first reflection received at the first transceiver;
determine a second time of flight between a second ultrasonic wave generated by a second transceiver of the plurality of transceivers and a second reflection received at the second transceiver;
detect, based on at least one of the first time of flight or the second time of flight, an object in contact with the surface; and
determine a location of the object based on at least the first time of flight and the second time of flight. 2. The acoustic sensing system of claim 1, the processor further configured to:
determine a first location of a first edge of the object based on the first time of flight and a second location of a second edge of the object based on the second time of flight; and determine the location of the object based on the first location of the first edge, the second location of the second edge, and a dimension of the object. 3. The acoustic sensing system of claim 1, the processor further configured to:
determine a third time of flight between a third ultrasonic wave generated by a third transceiver of the plurality of transceivers and a third reflection received at the third transceiver; determine a fourth time of flight between a fourth ultrasonic wave generated by a fourth transceiver of the plurality of transceivers and a fourth reflection received at the fourth transceiver; detect, based on at least one of the first time of flight, the second time of flight, the third time of flight, or the fourth time of flight, the object in contact with the surface; and determine the location of the object based on at least the first time of flight, the second time of flight, the third time of flight and the fourth time of flight. 4. The acoustic sensing system of claim 3, the processor further configured to:
determine a first location of a first edge of the object based on the first time of flight, a second location of a second edge of the object based on the second time of flight, a third location of a third edge of the object based on the third time of flight, and a fourth location of a fourth edge of the object based on the fourth time of flight; and determine the location of the object based on the first location of the first edge, the second location of the second edge, the third location of the third edge, and the fourth location of the fourth edge. 5. The acoustic sensing system of claim 4, wherein determining the location of the object comprises determining a centroid of the object based on a bounding box formed by the first edge, the second edge, the third edge, and the fourth edge. 6. The acoustic sensing system of claim 3, the processor further configured to:
determine a fifth time of flight between the first ultrasonic wave generated by the first transceiver of the plurality of transceivers and a fifth reflection received at the first transceiver; determine a sixth time of flight between the second ultrasonic wave generated by the second transceiver of the plurality of transceivers and a sixth reflection received at the second transceiver; determine a seventh time of flight between the third ultrasonic wave generated by the third transceiver of the plurality of transceivers and a seventh reflection received at the third transceiver; determine an eighth time of flight between the fourth ultrasonic wave generated by the fourth transceiver of the plurality of transceivers and an eighth reflection received at the fourth transceiver; detect, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determine a location of the additional object based on the location of the object and based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 7. The acoustic sensing system of claim 3, the processor further configured to:
determine a fifth time of flight between a fifth ultrasonic wave generated by a fifth transceiver of the plurality of transceivers and a fifth reflection received at the fifth transceiver; determine a sixth time of flight between a sixth ultrasonic wave generated by a sixth transceiver of the plurality of transceivers and a sixth reflection received at the sixth transceiver; determine a seventh time of flight between a seventh ultrasonic wave generated by a seventh transceiver of the plurality of transceivers and a seventh reflection received at the seventh transceiver; determine an eighth time of flight between an eighth ultrasonic wave generated by an eighth transceiver of the plurality of transceivers and an eighth reflection received at the eighth transceiver; detect, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determine the location of the object and a location of the additional object based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 8. The acoustic sensing system of claim 1, wherein the first transceiver is coupled to a first edge of the surface and the second transceiver is coupled to a second edge of the surface, wherein the first edge is perpendicular to the second edge. 9. 
The acoustic sensing system of claim 1, wherein the first transceiver is coupled to a first edge of the surface, the second transceiver is coupled to a second edge of the surface, a third transceiver of the plurality of transceivers is coupled to a third edge of the surface, and a fourth transceiver of the plurality of transceivers is coupled to a fourth edge of the surface, wherein the first edge, the second edge, the third edge, and the fourth edge are different edges of the surface. 10. The acoustic sensing system of claim 1, wherein at least two of the plurality of transceivers are mounted on at least one edge of the surface. 11. The acoustic sensing system of claim 1, wherein the surface is a display screen. 12. A method of sensing for an acoustic sensing system comprising a surface and a plurality of ultrasonic transceivers coupled to edges of the surface, the method comprising:
determining a first time of flight between a first ultrasonic wave generated by a first transceiver of the plurality of transceivers and a first reflection received at the first transceiver; determining a second time of flight between a second ultrasonic wave generated by a second transceiver of the plurality of transceivers and a second reflection received at the second transceiver; detecting, based on at least one of the first time of flight or the second time of flight, an object in contact with the surface; and determining a location of the object based on at least the first time of flight and the second time of flight. 13. The method of claim 12, further comprising:
determining a first location of a first edge of the object based on the first time of flight and a second location of a second edge of the object based on the second time of flight; and determining the location of the object based on the first location of the first edge, the second location of the second edge, and a dimension of the object. 14. The method of claim 12, further comprising:
determining a third time of flight between a third ultrasonic wave generated by a third transceiver of the plurality of transceivers and a third reflection received at the third transceiver; determining a fourth time of flight between a fourth ultrasonic wave generated by a fourth transceiver of the plurality of transceivers and a fourth reflection received at the fourth transceiver; detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, or the fourth time of flight, the object in contact with the surface; and determining the location of the object based on at least the first time of flight, the second time of flight, the third time of flight and the fourth time of flight. 15. The method of claim 14, further comprising:
determining a first location of a first edge of the object based on the first time of flight, a second location of a second edge of the object based on the second time of flight, a third location of a third edge of the object based on the third time of flight, and a fourth location of a fourth edge of the object based on the fourth time of flight; and determining the location of the object based on the first location of the first edge, the second location of the second edge, the third location of the third edge, and the fourth location of the fourth edge. 16. The method of claim 15, wherein determining the location of the object comprises determining a centroid of the object based on a bounding box formed by the first edge, the second edge, the third edge, and the fourth edge. 17. The method of claim 14, further comprising:
determining a fifth time of flight between the first ultrasonic wave generated by the first transceiver of the plurality of transceivers and a fifth reflection received at the first transceiver; determining a sixth time of flight between the second ultrasonic wave generated by the second transceiver of the plurality of transceivers and a sixth reflection received at the second transceiver; determining a seventh time of flight between the third ultrasonic wave generated by the third transceiver of the plurality of transceivers and a seventh reflection received at the third transceiver; determining an eighth time of flight between the fourth ultrasonic wave generated by the fourth transceiver of the plurality of transceivers and an eighth reflection received at the fourth transceiver; detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determining a location of the additional object based on the location of the object and based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 18. The method of claim 14, further comprising:
determining a fifth time of flight between a fifth ultrasonic wave generated by a fifth transceiver of the plurality of transceivers and a fifth reflection received at the fifth transceiver; determining a sixth time of flight between a sixth ultrasonic wave generated by a sixth transceiver of the plurality of transceivers and a sixth reflection received at the sixth transceiver; determining a seventh time of flight between a seventh ultrasonic wave generated by a seventh transceiver of the plurality of transceivers and a seventh reflection received at the seventh transceiver; determining an eighth time of flight between an eighth ultrasonic wave generated by an eighth transceiver of the plurality of transceivers and an eighth reflection received at the eighth transceiver; detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determining the location of the object and a location of the additional object based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 19. A non-transitory computer readable storage medium storing instructions, which when executed by a device comprising a surface, a plurality of ultrasonic transceivers coupled to edges of the surface, and one or more processors, cause the one or more processors to perform a method comprising:
determining a first time of flight between a first ultrasonic wave generated by a first transceiver of the plurality of transceivers and a first reflection received at the first transceiver; determining a second time of flight between a second ultrasonic wave generated by a second transceiver of the plurality of transceivers and a second reflection received at the second transceiver; detecting, based on at least one of the first time of flight or the second time of flight, an object in contact with the surface; and determining a location of the object based on at least the first time of flight and the second time of flight. 20. The non-transitory computer readable storage medium storing instructions, which when executed by the device comprising the surface, the plurality of ultrasonic transceivers coupled to edges of the surface, and the one or more processors, cause the one or more processors to perform the method of claim 19 further comprising:
determining a third time of flight between a third ultrasonic wave generated by a third transceiver of the plurality of transceivers and a third reflection received at the third transceiver;
determining a fourth time of flight between a fourth ultrasonic wave generated by a fourth transceiver of the plurality of transceivers and a fourth reflection received at the fourth transceiver;
detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, or the fourth time of flight, the object in contact with the surface; and
determining the location of the object based on at least the first time of flight, the second time of flight, the third time of flight and the fourth time of flight. 21. An electronic device comprising:
a plurality of acoustic touch sensors capable of detecting whether an object is contacting a specified portion of a surface of the electronic device, wherein a first acoustic touch sensor of the plurality of acoustic touch sensors comprises:
a transducer coupled to a first side of the surface and configured to detect an object contacting a second opposing side of the surface. 22. The electronic device of claim 21, wherein the transmitted acoustic energy is a compressive wave transmitted from the first side toward the second opposing side of the surface. 23. The electronic device of claim 22, wherein the acoustic touch sensor is configured to transmit acoustic energy through the surface toward the second opposing side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting the second opposing side directly across from the acoustic touch sensor. 24. The electronic device of claim 21, wherein the acoustic touch sensor is configured to transmit acoustic energy along the first side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting a second side of the surface in a location that is not directly across from the acoustic touch sensor. 25. The electronic device of claim 24, wherein the transmitted acoustic energy is a shear horizontal wave. 26. The electronic device of claim 24, wherein a portion of the surface is curved relative to the direction of motion of the transmitted acoustic energy, and the acoustic touch sensor is capable of detecting contact by an object on the curved portion of the surface. 27. A method comprising:
transmitting acoustic energy from a transducer along a surface of an electronic device; and comparing received reflected energy to a baseline measurement to determine a presence and a location of an object contacting the surface, wherein the baseline measurement is a measurement of reflected acoustic energy from surface discontinuities at different positions along the surface of the electronic device. 28. An electronic device comprising:
an antenna element; an acoustic touch sensor capable of detecting whether an object is contacting a surface of the electronic device proximate to the antenna element; and control circuitry configured to:
in accordance with a determination that an object is contacting the surface of the electronic device proximate to the antenna element, adjust a parameter of the antenna operation to compensate for the contacting object. 29. The electronic device of claim 28, wherein adjusting the parameter of the antenna operation comprises adjusting an antenna loop length for the antenna element. 30. The electronic device of claim 28, wherein adjusting the parameter of the antenna operation comprises adjusting a value of a variable load element. 31. The electronic device of claim 30, wherein the variable load element is a variable capacitor. 32. The electronic device of claim 28, wherein the acoustic touch sensor is coupled to a first side of the surface, the acoustic touch sensor is configured to transmit acoustic energy through the surface toward a second opposing side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting the second opposing side directly across from the acoustic touch sensor. 33. The electronic device of claim 28, wherein the acoustic touch sensor is coupled to a first side of the surface, the acoustic touch sensor is configured to transmit acoustic energy along the first side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting a second side of the surface in a location that is not directly across from the acoustic touch sensor. 34. An electronic device comprising:
a housing; a plurality of acoustic transducers coupled to a surface of the housing; sense circuitry coupled to the plurality of acoustic transducers and configured to detect acoustic energy received by the plurality of acoustic transducers; an orientation sensor for detecting a rotational orientation of the electronic device; and a processor configured to:
detect the rotational orientation of the electronic device;
display an application on a display of the electronic device based on the orientation of the device;
associate a first acoustic transducer of the plurality of acoustic transducers with a first input to the application in a first orientation;
based on a change in the rotational orientation of the device, dissociate the first acoustic transducer from the first input to the application; and
associate a second acoustic transducer of the plurality of acoustic transducers, different from the first acoustic transducer with the first input to the application in a second rotational orientation of the electronic device. 35. The electronic device of claim 34, wherein the first acoustic transducer is associated with a second input to the application in the second orientation. 36. An acoustic touch sensing system, comprising:
an acoustic touch sensing circuit configured to perform a first detection scan of a surface and configured to perform a second detection scan of the surface; and one or more processors coupled to the acoustic touch sensing circuit; wherein at least one of the acoustic touch sensing circuit or the one or more processors is capable of processing results of the first detection scan or results of the second detection scan; wherein the results of the first detection scan indicate a presence of an object touching the surface; and wherein the results of the second detection scan indicate a location of the object touching the surface. 37. The acoustic touch sensing system of claim 36, wherein the one or more processors comprises a host processor and an auxiliary processor. 38. The acoustic touch sensing system of claim 37, wherein the auxiliary processor is configured to receive the results of the first detection scan and the results of the second detection scan; and
wherein the auxiliary processor is capable of:
processing the results of the first detection scan to determine the presence of the object touching the surface; and
processing the results of the second detection scan to determine the location of the object touching the surface. 39. The acoustic touch sensing system of claim 38, wherein the acoustic touch sensing circuit is configured to average digital outputs of the first detection scan to generate the results of the first detection scan. 40. The acoustic touch sensing system of claim 38, wherein the acoustic touch sensing circuit is configured to average digital outputs of the second detection scan to generate the results of the second detection scan. 41. The acoustic touch sensing system of claim 37, wherein the auxiliary processor is configured to receive the results of the first detection scan and the host processor is configured to receive the results of the second detection scan;
wherein the auxiliary processor is capable of processing the results of the first detection scan to determine the presence of the object touching the surface; and wherein the host processor is capable of processing the results of the second detection scan to determine the location of the object touching the surface. 42. The acoustic touch sensing system of claim 41, wherein the results of the first detection scan are transferred to the auxiliary processor by a first communication channel and the results of the second detection scan are transferred to the host processor by a second communication channel, wherein the second communication channel has a bandwidth greater than the first communication channel. 43. The acoustic touch sensing system of claim 36, further comprising:
an acoustic touch sensing digital signal processor configured to receive the results of the second detection scan; wherein the acoustic touch sensing digital signal processor is capable of processing the results of the second detection scan to determine the location of the object touching the surface. 44. The acoustic touch sensing system of claim 36, wherein the acoustic touch sensing circuit is capable of processing the results of the first detection scan and the results of the second detection scan to determine the presence and the location of the object touching the surface. 45. The acoustic touch sensing system of claim 36, further comprising:
a plurality of transducers coupled to the surface; and routing deposited along a side of the surface adjacent to the plurality of transducers; wherein the acoustic touch sensing circuit is coupled to the plurality of transducers via coupling of the acoustic touch sensing circuit to the routing deposited along the side of the surface. 46. The acoustic touch sensing system of claim 36, further comprising:
a plurality of transducers coupled to the surface; wherein the acoustic touch sensing circuit is coupled to the plurality of transducers via direct bonding between the plurality of transducers and the acoustic touch sensing circuit, via bonding between the plurality of transducers and a flexible circuit board coupled to the acoustic touch sensing circuit, or via bonding between the plurality of transducers and a rigid circuit board coupled to the acoustic touch sensing circuit. 47. The acoustic touch sensing system of claim 36, wherein the acoustic touch sensing circuit comprises an acoustic transmit circuit and an acoustic receive circuit; wherein the acoustic transmit circuit is implemented on a first integrated circuit and the acoustic receive circuit is implemented on a second integrated circuit, separate from the first integrated circuit. 48. The acoustic touch sensing system of claim 36, further comprising a plurality of transmit transducers and a plurality of receive transducers; wherein the acoustic touch sensing circuit is configured to generate an acoustic stimulation signal to apply to the plurality of transmit transducers; and wherein the acoustic touch sensing circuit is configured to receive an acoustic receive signal from the plurality of receive transducers generated in response to the acoustic stimulation signal. 49. A method comprising:
performing a first acoustic detection scan of a surface; processing results of the first acoustic detection scan to determine whether an object is contacting the surface; in accordance with a determination that the object is contacting the surface, performing a second acoustic detection scan of the surface; and processing results of the second acoustic detection scan to determine a location of the object contacting the surface. 50. The method of claim 49,
wherein performing the first acoustic detection scan comprises:
transmitting an acoustic wave into the surface from a first transducer; and
receiving an acoustic reflection corresponding to an edge of the surface opposite the first transducer at the first transducer; and
wherein processing the results of the first acoustic detection scan to determine whether the object is contacting the surface comprises:
determining that the object is contacting the surface when the received acoustic reflection corresponding to the edge of the surface is attenuated more than a threshold amount below a baseline received acoustic reflection corresponding to the edge of the surface. 51. The method of claim 49,
wherein performing the first acoustic detection scan comprises:
transmitting an acoustic wave into the surface from a first transducer; and
receiving the acoustic wave at a second transducer opposite the first transducer; and
processing the results of the first acoustic detection scan to determine whether the object is contacting the surface comprises:
determining that the object is contacting the surface when the received acoustic wave is attenuated more than a threshold amount below a baseline received acoustic wave. 52. The method of claim 49, wherein performing the second acoustic detection scan comprises:
transmitting, by a first transducer of a plurality of transducers, a first acoustic wave in the surface; receiving, by the first transducer of the plurality of transducers, a first acoustic reflection in the surface; transmitting, by a second transducer of the plurality of transducers, a second acoustic wave in the surface; receiving, by the second transducer of the plurality of transducers, a second acoustic reflection in the surface; transmitting, by a third transducer of the plurality of transducers, a third acoustic wave in the surface; receiving, by the third transducer of the plurality of transducers, a third acoustic reflection in the surface; transmitting, by a fourth transducer of the plurality of transducers, a fourth acoustic wave in the surface; and receiving, by the fourth transducer of the plurality of transducers, a fourth acoustic reflection in the surface. 53. The method of claim 52, wherein processing the results of the second acoustic detection scan to determine the location of the object contacting the surface further comprises:
determining a first time of flight between the transmitted first acoustic wave and the received first acoustic reflection; determining a second time of flight between the transmitted second acoustic wave and the received second acoustic reflection; determining a third time of flight between the transmitted third acoustic wave and the received third acoustic reflection; determining a fourth time of flight between the transmitted fourth acoustic wave and the received fourth acoustic reflection; and determining the location of the object based on the first time of flight, the second time of flight, the third time of flight, and the fourth time of flight. 54. The method of claim 49,
wherein performing the first acoustic detection scan comprises:
transmitting an acoustic wave into the surface from a first transducer on a first side of the surface; and
receiving an acoustic reflection corresponding to an edge of the surface opposite the first transducer at a second transducer on the first side of the surface; and
wherein processing the results of the first acoustic detection scan to determine whether the object is contacting the surface comprises:
determining that the object is contacting the surface when the received acoustic reflection corresponding to the edge of the surface is attenuated more than a threshold amount below a baseline received acoustic reflection corresponding to the edge of the surface. 55. The method of claim 49, wherein performing the second acoustic detection scan comprises:
transmitting, by a first transmit transducer of a plurality of transducers, a first acoustic wave in the surface; receiving, by a first receive transducer of the plurality of transducers, a first acoustic reflection in the surface, the first receive transducer collocated with the first transmit transducer; transmitting, by a second transmit transducer of the plurality of transducers, a second acoustic wave in the surface; receiving, by a second receive transducer of the plurality of transducers, a second acoustic reflection in the surface, the second receive transducer collocated with the second transmit transducer; transmitting, by a third transmit transducer of the plurality of transducers, a third acoustic wave in the surface; receiving, by a third receive transducer of the plurality of transducers, a third acoustic reflection in the surface, the third receive transducer collocated with the third transmit transducer; transmitting, by a fourth transmit transducer of the plurality of transducers, a fourth acoustic wave in the surface; and receiving, by a fourth receive transducer of the plurality of transducers, a fourth acoustic reflection in the surface, the fourth receive transducer collocated with the fourth transmit transducer. 56. The method of claim 55, wherein processing the results of the second acoustic detection scan to determine the location of the object contacting the surface further comprises:
determining a first time of flight between the transmitted first acoustic wave and the received first acoustic reflection; determining a second time of flight between the transmitted second acoustic wave and the received second acoustic reflection; determining a third time of flight between the transmitted third acoustic wave and the received third acoustic reflection; determining a fourth time of flight between the transmitted fourth acoustic wave and the received fourth acoustic reflection; and determining the location of the object based on the first time of flight, the second time of flight, the third time of flight, and the fourth time of flight. 57. A non-transitory computer readable storage medium storing instructions, which when executed by a device comprising a surface, a plurality of acoustic transducers coupled to edges of the surface, an acoustic touch sensing circuit, and one or more processors, cause the acoustic touch sensing circuit and the one or more processors to:
in a first state:
perform a first acoustic detection scan of the surface; and
process results of the first acoustic detection scan to determine whether an object is contacting the surface; and
in a second state:
perform a second acoustic detection scan of the surface; and
process results of the second acoustic detection scan to determine a location of the object contacting the surface. 58. The non-transitory computer readable storage medium of claim 57, further comprising instructions, which, when executed by the device, cause the acoustic touch sensing circuit to transition from the first state to the second state when the object is determined to be contacting the surface based on processing the results of the first acoustic detection scan. 59. The non-transitory computer readable storage medium of claim 57, further comprising instructions, which, when executed by the device, cause the acoustic touch sensing circuit to transition from the second state to the first state when no object is determined to be contacting the surface based on processing the results of the second acoustic detection scan for a threshold period of time or in response to receiving user input to power down a display of the device. 60. The non-transitory computer readable storage medium of claim 57, wherein the first acoustic detection scan and the second acoustic detection scan are performed by the acoustic touch sensing circuit. 61. The non-transitory computer readable storage medium of claim 57, wherein processing the results of the first acoustic detection scan is performed by a first processor of the one or more processors and processing the results of the second acoustic detection scan is performed by a second processor of the one or more processors. 62. An acoustic touch sensing system, comprising:
an acoustic touch sensing circuit configured to perform a touch detection scan of a surface; wherein the acoustic touch sensing circuit is configured to generate results indicative of an object touching the surface at a first location when an input device contacts the surface at the first location and generate results indicative of no object touching the surface at a second location when a liquid contacts the surface at the second location. 63. An electronic device comprising:
a motion detector; an antenna element; an acoustic touch sensor capable of detecting whether an object is contacting a surface of the electronic device proximate to the antenna element; and control circuitry configured to:
determine whether the electronic device is moving;
determine whether the object is contacting the surface of the electronic device proximate to the antenna element;
in accordance with a determination that the electronic device is not moving, operate the antenna element at a nominal power level; and
in accordance with a determination that the electronic device is moving and that the object is contacting the surface of the electronic device proximate to the antenna element, operate the antenna element at a reduced power level, lower than the nominal power level. 64. The electronic device of claim 63, wherein the control circuitry is further configured to:
in accordance with a determination that the electronic device is moving and that the object is not contacting the surface of the electronic device proximate to the antenna element, operate the antenna element at the nominal power level. 65. The electronic device of claim 63, wherein determining whether the electronic device is moving comprises determining whether motion detected by the motion detector is consistent with motion of a human body. 66. The electronic device of claim 63, wherein the acoustic touch sensor comprises:
an ultrasonic transducer; and a plurality of reflective barriers on a surface of the electronic device, proximate to the antenna element, wherein the ultrasonic transducer transmits an acoustic wave toward the plurality of reflective barriers, and receives reflected signals from the barriers. 67. A method comprising:
determining whether an electronic device is moving; determining whether an object is contacting a surface of the electronic device proximate to an antenna element; in accordance with a determination that the electronic device is not moving, operating the antenna element at a nominal power level; and in accordance with a determination that the electronic device is moving and that the object is contacting the surface of the electronic device proximate to the antenna element, operating the antenna element at a reduced power level, lower than the nominal power level. 68. The method of claim 67, further comprising: in accordance with a determination that the electronic device is moving and that the object is not contacting the surface of the electronic device proximate to the antenna element, operating the antenna element at the nominal power level. 69. The method of claim 67, wherein determining whether the electronic device is moving comprises determining whether motion detected by a motion detector is consistent with motion of a human body. 70. An electronic device comprising:
a configurable antenna element; an acoustic touch sensor capable of detecting whether an object is contacting a surface of the electronic device proximate to the configurable antenna element; and in accordance with a determination that the object is contacting the surface of the electronic device proximate to the configurable antenna element, adjusting a parameter of the configurable antenna element to compensate for the presence of the object. 71. The electronic device of claim 70, wherein the acoustic touch sensor is configured to detect contact directly in a transmission path of the configurable antenna element. 72. The electronic device of claim 70, wherein the acoustic touch sensor is capable of detecting whether an object is contacting a portion of the surface of the electronic device within a metal exclusion zone of the surface of the device. 73. The electronic device of claim 72, wherein the metal exclusion zone corresponds to a location of the configurable antenna element within the electronic device. 74. The electronic device of claim 70, wherein the acoustic touch sensor comprises:
an ultrasonic transducer; and a plurality of reflective barriers on a surface of the electronic device, proximate to the configurable antenna element, wherein the ultrasonic transducer transmits an acoustic wave toward the plurality of reflective barriers, and receives reflected signals from the barriers. 75. The electronic device of claim 74, wherein a first reflective barrier of the plurality of reflective barriers exhibits an anisotropic reflectance characteristic that depends upon a direction of travel of an acoustic wave encountering the first reflective barrier. 76. An electronic device comprising:
transmit circuitry configured to provide a stimulation signal to a transducer coupled to a surface of the device; a plurality of surface discontinuities on the surface of the device located proximate to the transducer; receive circuitry configured to capture a received signal based on motion of the transducer; and control circuitry configured to:
couple the transmit circuitry to the transducer;
couple the receive circuitry to the transducer;
stimulate the transducer to produce an excitation in the surface of the device proximate to the transducer;
capture a first reflected energy at a first time based on a first distance between the transducer and a first surface discontinuity of the plurality of surface discontinuities; and
capture a second reflected energy at a second time based on a second distance between the transducer and a second surface discontinuity of the plurality of surface discontinuities,
wherein the second distance is greater than the first distance. 77. The electronic device of claim 76, wherein capturing the first reflected energy comprises integrating received energy during a first integration window interval based on a duration of the stimulation of the transducer and a distance between the transducer and the first surface discontinuity. 78. The electronic device of claim 77, wherein capturing the second reflected energy comprises integrating received energy during a second integration window having a same duration as the first integration window and beginning at a time after the first integration window begins. 79. The electronic device of claim 78, wherein the first integration window and the second integration window are non-overlapping. 80. The electronic device of claim 76, further comprising:
a filter configured to shape the stimulation signal based on a transfer function of a system formed by the transducer, the surface of the device, and the plurality of surface discontinuities on the surface of the device located proximate to the transducer. 81. The electronic device of claim 76, wherein the transducer is coupled to a cover glass of the device. 82. The electronic device of claim 81, wherein the cover glass comprises a transparent display region and a non-transparent border region, the transducer being coupled to the non-transparent border region. 83. The electronic device of claim 76, wherein the transducer is coupled to a metal housing of the device. 84. The electronic device of claim 76, further comprising a processor capable of determining a location of contact by an object with the surface of the device based on the first reflected energy and the second reflected energy. 85. The electronic device of claim 84, wherein the determination is based on a comparison between baseline values of the first reflected energy and the second reflected energy and captured values of the first reflected energy and second reflected energy. 86. The electronic device of claim 85, wherein a determination that a difference between the baseline value of the first reflected energy and the captured value of the first reflected energy exceeds a threshold difference corresponds to a contact by the object between the transducer and the first surface discontinuity of the plurality of surface discontinuities. 87. The electronic device of claim 76, further comprising:
a second transducer coupled to the surface of the device; and a second plurality of surface discontinuities on the surface of the device located proximate to the second transducer; wherein the control circuitry is further configured to:
decouple the transmit circuitry from the transducer;
couple the transmit circuitry to the second transducer;
decouple the receive circuitry from the transducer;
couple the receive circuitry to the second transducer;
stimulate the second transducer to produce an excitation in the surface of the device proximate to the second transducer;
capture a third reflected energy at a third time based on a distance between the second transducer and a third surface discontinuity of the second plurality of surface discontinuities; and
capture a fourth reflected energy at a fourth time based on a distance between the second transducer and a fourth surface discontinuity of the second plurality of surface discontinuities. 88. The electronic device of claim 87, further comprising:
a demultiplexer for selectively coupling and decoupling the transmit circuitry from the transducer; and a multiplexer for selectively coupling and decoupling the receive circuitry from the transducer. 89. The electronic device of claim 88, wherein the demultiplexer connects non-selected transducers to ground. Acoustic touch detection (touch sensing) system architectures and methods can be used to detect an object touching a surface. Position of an object touching a surface can be determined using time-of-flight (TOF) bounding box techniques, or acoustic image reconstruction techniques, for example. Acoustic touch sensing can utilize transducers, such as piezoelectric transducers, to transmit ultrasonic waves along a surface and/or through the thickness of an electronic device. Location of the object can be determined, for example, based on the amount of time elapsing between the transmission of the wave and the detection of the reflected wave. An object in contact with the surface can interact with the transmitted wave causing attenuation, redirection and/or reflection of at least a portion of the transmitted wave. Portions of the transmitted wave energy after interaction with the object can be measured to determine the touch location of the object on the surface of the device. 1. An acoustic sensing system, comprising:
a surface; a plurality of ultrasonic transceivers coupled to edges of the surface; and a processor coupled to the plurality of ultrasonic transceivers and configured to:
determine a first time of flight between a first ultrasonic wave generated by a first transceiver of the plurality of transceivers and a first reflection received at the first transceiver;
determine a second time of flight between a second ultrasonic wave generated by a second transceiver of the plurality of transceivers and a second reflection received at the second transceiver;
detect, based on at least one of the first time of flight or the second time of flight, an object in contact with the surface; and
determine a location of the object based on at least the first time of flight and the second time of flight. 2. The acoustic sensing system of claim 1, the processor further configured to:
determine a first location of a first edge of the object based on the first time of flight and a second location of a second edge of the object based on the second time of flight; and determine the location of the object based on the first location of the first edge, the second location of the second edge, and a dimension of the object. 3. The acoustic sensing system of claim 1, the processor further configured to:
determine a third time of flight between a third ultrasonic wave generated by a third transceiver of the plurality of transceivers and a third reflection received at the third transceiver; determine a fourth time of flight between a fourth ultrasonic wave generated by a fourth transceiver of the plurality of transceivers and a fourth reflection received at the fourth transceiver; detect, based on at least one of the first time of flight, the second time of flight, the third time of flight, or the fourth time of flight, the object in contact with the surface; and determine the location of the object based on at least the first time of flight, the second time of flight, the third time of flight and the fourth time of flight. 4. The acoustic sensing system of claim 3, the processor further configured to:
determine a first location of a first edge of the object based on the first time of flight, a second location of a second edge of the object based on the second time of flight, a third location of a third edge of the object based on the third time of flight, and a fourth location of a fourth edge of the object based on the fourth time of flight; and determine the location of the object based on the first location of the first edge, the second location of the second edge, the third location of the third edge, and the fourth location of the fourth edge. 5. The acoustic sensing system of claim 4, wherein determining the location of the object comprises determining a centroid of the object based on a bounding box formed by the first edge, the second edge, the third edge, and the fourth edge. 6. The acoustic sensing system of claim 3, the processor further configured to:
determine a fifth time of flight between the first ultrasonic wave generated by the first transceiver of the plurality of transceivers and a fifth reflection received at the first transceiver; determine a sixth time of flight between the second ultrasonic wave generated by the second transceiver of the plurality of transceivers and a sixth reflection received at the second transceiver; determine a seventh time of flight between the third ultrasonic wave generated by the third transceiver of the plurality of transceivers and a seventh reflection received at the third transceiver; determine an eighth time of flight between the fourth ultrasonic wave generated by the fourth transceiver of the plurality of transceivers and an eighth reflection received at the fourth transceiver; detect, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determine a location of the additional object based on the location of the object and based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 7. The acoustic sensing system of claim 3, the processor further configured to:
determine a fifth time of flight between a fifth ultrasonic wave generated by a fifth transceiver of the plurality of transceivers and a fifth reflection received at the fifth transceiver; determine a sixth time of flight between a sixth ultrasonic wave generated by a sixth transceiver of the plurality of transceivers and a sixth reflection received at the sixth transceiver; determine a seventh time of flight between a seventh ultrasonic wave generated by a seventh transceiver of the plurality of transceivers and a seventh reflection received at the seventh transceiver; determine an eighth time of flight between an eighth ultrasonic wave generated by an eighth transceiver of the plurality of transceivers and an eighth reflection received at the eighth transceiver; detect, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determine the location of the object and a location of the additional object based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 8. The acoustic sensing system of claim 1, wherein the first transceiver is coupled to a first edge of the surface and the second transceiver is coupled to a second edge of the surface, wherein the first edge is perpendicular to the second edge. 9.
The acoustic sensing system of claim 3, wherein the first transceiver is coupled to a first edge of the surface, the second transceiver is coupled to a second edge of the surface, the third transceiver is coupled to a third edge of the surface, and the fourth transceiver is coupled to a fourth edge of the surface, wherein the first edge, second edge, third edge, and fourth edge are different edges of the surface. 10. The acoustic sensing system of claim 1, wherein at least two of the plurality of transceivers are mounted on at least one edge of the surface. 11. The acoustic sensing system of claim 1, wherein the surface is a display screen. 12. A method of sensing for an acoustic sensing system comprising a surface and a plurality of ultrasonic transceivers coupled to edges of the surface, the method comprising:
determining a first time of flight between a first ultrasonic wave generated by a first transceiver of the plurality of transceivers and a first reflection received at the first transceiver; determining a second time of flight between a second ultrasonic wave generated by a second transceiver of the plurality of transceivers and a second reflection received at the second transceiver; detecting, based on at least one of the first time of flight or the second time of flight, an object in contact with the surface; and determining a location of the object based on at least the first time of flight and the second time of flight. 13. The method of claim 12, further comprising:
determining a first location of a first edge of the object based on the first time of flight and a second location of a second edge of the object based on the second time of flight; and determining the location of the object based on the first location of the first edge, the second location of the second edge, and a dimension of the object. 14. The method of claim 12, further comprising:
determining a third time of flight between a third ultrasonic wave generated by a third transceiver of the plurality of transceivers and a third reflection received at the third transceiver; determining a fourth time of flight between a fourth ultrasonic wave generated by a fourth transceiver of the plurality of transceivers and a fourth reflection received at the fourth transceiver; detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, or the fourth time of flight, the object in contact with the surface; and determining the location of the object based on at least the first time of flight, the second time of flight, the third time of flight and the fourth time of flight. 15. The method of claim 14, further comprising:
determining a first location of a first edge of the object based on the first time of flight, a second location of a second edge of the object based on the second time of flight, a third location of a third edge of the object based on the third time of flight, and a fourth location of a fourth edge of the object based on the fourth time of flight; and determining the location of the object based on the first location of the first edge, the second location of the second edge, the third location of the third edge, and the fourth location of the fourth edge. 16. The method of claim 15, wherein determining the location of the object comprises determining a centroid of the object based on a bounding box formed by the first edge, the second edge, the third edge, and the fourth edge. 17. The method of claim 14, further comprising:
determining a fifth time of flight between the first ultrasonic wave generated by the first transceiver of the plurality of transceivers and a fifth reflection received at the first transceiver; determining a sixth time of flight between the second ultrasonic wave generated by the second transceiver of the plurality of transceivers and a sixth reflection received at the second transceiver; determining a seventh time of flight between the third ultrasonic wave generated by the third transceiver of the plurality of transceivers and a seventh reflection received at the third transceiver; determining an eighth time of flight between the fourth ultrasonic wave generated by the fourth transceiver of the plurality of transceivers and an eighth reflection received at the fourth transceiver; detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determining a location of the additional object based on the location of the object and based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 18. The method of claim 14, further comprising:
determining a fifth time of flight between a fifth ultrasonic wave generated by a fifth transceiver of the plurality of transceivers and a fifth reflection received at the fifth transceiver; determining a sixth time of flight between a sixth ultrasonic wave generated by a sixth transceiver of the plurality of transceivers and a sixth reflection received at the sixth transceiver; determining a seventh time of flight between a seventh ultrasonic wave generated by a seventh transceiver of the plurality of transceivers and a seventh reflection received at the seventh transceiver; determining an eighth time of flight between an eighth ultrasonic wave generated by an eighth transceiver of the plurality of transceivers and an eighth reflection received at the eighth transceiver; detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, or the eighth time of flight, an additional object in contact with the surface; and determining the location of the object and a location of the additional object based on the first time of flight, the second time of flight, the third time of flight, the fourth time of flight, the fifth time of flight, the sixth time of flight, the seventh time of flight, and the eighth time of flight. 19. A non-transitory computer readable storage medium storing instructions, which when executed by a device comprising a surface, a plurality of ultrasonic transceivers coupled to edges of the surface, and one or more processors, cause the one or more processors to perform a method comprising:
determining a first time of flight between a first ultrasonic wave generated by a first transceiver of the plurality of transceivers and a first reflection received at the first transceiver; determining a second time of flight between a second ultrasonic wave generated by a second transceiver of the plurality of transceivers and a second reflection received at the second transceiver; detecting, based on at least one of the first time of flight or the second time of flight, an object in contact with the surface; and determining a location of the object based on at least the first time of flight and the second time of flight. 20. The non-transitory computer readable storage medium storing instructions, which when executed by the device comprising the surface, the plurality of ultrasonic transceivers coupled to edges of the surface, and the one or more processors, cause the one or more processors to perform the method of claim 19 further comprising:
determining a third time of flight between a third ultrasonic wave generated by a third transceiver of the plurality of transceivers and a third reflection received at the third transceiver;
determining a fourth time of flight between a fourth ultrasonic wave generated by a fourth transceiver of the plurality of transceivers and a fourth reflection received at the fourth transceiver;
detecting, based on at least one of the first time of flight, the second time of flight, the third time of flight, or the fourth time of flight, the object in contact with the surface; and
determining the location of the object based on at least the first time of flight, the second time of flight, the third time of flight and the fourth time of flight. 21. An electronic device comprising:
a plurality of acoustic touch sensors capable of detecting whether an object is contacting a specified portion of a surface of the electronic device, wherein a first acoustic touch sensor of the plurality of acoustic touch sensors comprises:
a transducer coupled to a first side of the surface and configured to detect an object contacting a second opposing side of the surface. 22. The electronic device of claim 21, wherein the transmitted acoustic energy is a compressive wave transmitted from the first side toward the second opposing side of the surface. 23. The electronic device of claim 22, wherein the acoustic touch sensor is configured to transmit acoustic energy through the surface toward the second opposing side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting the second opposing side directly across from the acoustic touch sensor. 24. The electronic device of claim 21, wherein the acoustic touch sensor is configured to transmit acoustic energy along the first side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting a second side of the surface in a location that is not directly across from the acoustic touch sensor. 25. The electronic device of claim 24, wherein the transmitted acoustic energy is a shear horizontal wave. 26. The electronic device of claim 24, wherein a portion of the surface is curved relative to the direction of motion of the transmitted acoustic energy, and the acoustic touch sensor is capable of detecting contact by an object on the curved portion of the surface. 27. A method comprising:
transmitting acoustic energy from a transducer along a surface of an electronic device; and comparing received reflected energy to a baseline measurement to determine a presence and a location of an object contacting the surface, wherein the baseline measurement is a measurement of reflected acoustic energy from surface discontinuities at different positions along the surface of the electronic device. 28. An electronic device comprising:
an antenna element; an acoustic touch sensor capable of detecting whether an object is contacting a surface of the electronic device proximate to the antenna element; and control circuitry configured to:
in accordance with a determination that an object is contacting the surface of the electronic device proximate to the antenna element, adjust a parameter of the antenna operation to compensate for the contacting object. 29. The electronic device of claim 28, wherein adjusting the parameter of the antenna operation comprises adjusting an antenna loop length for the antenna element. 30. The electronic device of claim 28, wherein adjusting the parameter of the antenna operation comprises adjusting a value of a variable load element. 31. The electronic device of claim 30, wherein the variable load element is a variable capacitor. 32. The electronic device of claim 28, wherein the acoustic touch sensor is coupled to a first side of the surface, the acoustic touch sensor is configured to transmit acoustic energy through the surface toward a second opposing side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting the second opposing side directly across from the acoustic touch sensor. 33. The electronic device of claim 28, wherein the acoustic touch sensor is coupled to a first side of the surface, the acoustic touch sensor is configured to transmit acoustic energy along the first side of the surface, and the acoustic touch sensor is capable of detecting whether the object is contacting a second side of the surface in a location that is not directly across from the acoustic touch sensor. 34. An electronic device comprising:
a housing; a plurality of acoustic transducers coupled to a surface of the housing; sense circuitry coupled to the plurality of acoustic transducers and configured to detect acoustic energy received by the plurality of acoustic transducers; an orientation sensor for detecting a rotational orientation of the electronic device; and a processor configured to:
detect the rotational orientation of the electronic device;
display an application on a display of the electronic device based on the orientation of the device;
associate a first acoustic transducer of the plurality of acoustic transducers with a first input to the application in a first orientation;
based on a change in the rotational orientation of the device, dissociate the first acoustic transducer from the first input to the application; and
associate a second acoustic transducer of the plurality of acoustic transducers, different from the first acoustic transducer with the first input to the application in a second rotational orientation of the electronic device. 35. The electronic device of claim 34, wherein the first acoustic transducer is associated with a second input to the application in the second orientation. 36. An acoustic touch sensing system, comprising:
an acoustic touch sensing circuit configured to perform a first detection scan of a surface and configured to perform a second detection scan of the surface; and one or more processors coupled to the acoustic touch sensing circuit; wherein at least one of the acoustic touch sensing circuit or the one or more processors is capable of processing results of the first detection scan or results of the second detection scan; wherein the results of the first detection scan indicate a presence of an object touching the surface; and wherein the results of the second detection scan indicate a location of the object touching the surface. 37. The acoustic touch sensing system of claim 36, wherein the one or more processors comprises a host processor and an auxiliary processor. 38. The acoustic touch sensing system of claim 37, wherein the auxiliary processor is configured to receive the results of the first detection scan and the results of the second detection scan; and
wherein the auxiliary processor is capable of:
processing the results of the first detection scan to determine the presence of the object touching the surface; and
processing the results of the second detection scan to determine the location of the object touching the surface. 39. The acoustic touch sensing system of claim 38, wherein the acoustic touch sensing circuit is configured to average digital outputs of the first detection scan to generate the results of the first detection scan. 40. The acoustic touch sensing system of claim 38, wherein the acoustic touch sensing circuit is configured to average digital outputs of the second detection scan to generate the results of the second detection scan. 41. The acoustic touch sensing system of claim 37, wherein the auxiliary processor is configured to receive the results of the first detection scan and the host processor is configured to receive the results of the second detection scan;
wherein the auxiliary processor is capable of processing the results of the first detection scan to determine the presence of the object touching the surface; and wherein the host processor is capable of processing the results of the second detection scan to determine the location of the object touching the surface. 42. The acoustic touch sensing system of claim 41, wherein the results of the first detection scan are transferred to the auxiliary processor by a first communication channel and the results of the second detection scan are transferred to the host processor by a second communication channel, wherein the second communication channel has a bandwidth greater than a bandwidth of the first communication channel. 43. The acoustic touch sensing system of claim 36, further comprising:
an acoustic touch sensing digital signal processor configured to receive the results of the second detection scan; wherein the acoustic touch sensing digital signal processor is capable of processing the results of the second detection scan to determine the location of the object touching the surface. 44. The acoustic touch sensing system of claim 36, wherein the acoustic touch sensing circuit is capable of processing the results of the first detection scan and the results of the second detection scan to determine the presence and the location of the object touching the surface. 45. The acoustic touch sensing system of claim 36, further comprising:
a plurality of transducers coupled to the surface; and routing deposited along a side of the surface adjacent to the plurality of transducers; wherein the acoustic touch sensing circuit is coupled to the plurality of transducers via coupling of the acoustic touch sensing circuit to the routing deposited along the side of the surface. 46. The acoustic touch sensing system of claim 36, further comprising:
a plurality of transducers coupled to the surface; wherein the acoustic touch sensing circuit is coupled to the plurality of transducers via direct bonding between the plurality of transducers and the acoustic touch sensing circuit, via bonding between the plurality of transducers and a flexible circuit board coupled to the acoustic touch sensing circuit, or via bonding between the plurality of transducers and a rigid circuit board coupled to the acoustic touch sensing circuit. 47. The acoustic touch sensing system of claim 36, wherein the acoustic touch sensing circuit comprises an acoustic transmit circuit and an acoustic receive circuit; wherein the acoustic transmit circuit is implemented on a first integrated circuit and the acoustic receive circuit is implemented on a second integrated circuit, separate from the first integrated circuit. 48. The acoustic touch sensing system of claim 36, further comprising a plurality of transmit transducers and a plurality of receive transducers; wherein the acoustic touch sensing circuit is configured to generate an acoustic stimulation signal to apply to the plurality of transmit transducers; and wherein the acoustic touch sensing circuit is configured to receive an acoustic receive signal from the plurality of receive transducers generated in response to the acoustic stimulation signal. 49. A method comprising:
performing a first acoustic detection scan of a surface; processing results of the first acoustic detection scan to determine whether an object is contacting the surface; in accordance with a determination that the object is contacting the surface, performing a second acoustic detection scan of the surface; and processing results of the second acoustic detection scan to determine a location of the object contacting the surface. 50. The method of claim 49,
wherein performing the first acoustic detection scan comprises:
transmitting an acoustic wave into the surface from a first transducer; and
receiving an acoustic reflection corresponding to an edge of the surface opposite the first transducer at the first transducer; and
wherein processing the results of the first acoustic detection scan to determine whether the object is contacting the surface comprises:
determining that the object is contacting the surface when the received acoustic reflection corresponding to the edge of the surface is attenuated more than a threshold amount below a baseline received acoustic reflection corresponding to the edge of the surface. 51. The method of claim 49,
wherein performing the first acoustic detection scan comprises:
transmitting an acoustic wave into the surface from a first transducer; and
receiving the acoustic wave at a second transducer opposite the first transducer; and
processing the results of the first acoustic detection scan to determine whether the object is contacting the surface comprises:
determining that the object is contacting the surface when the received acoustic wave is attenuated more than a threshold amount below a baseline received acoustic wave. 52. The method of claim 49, wherein performing the second acoustic detection scan comprises:
transmitting, by a first transducer of a plurality of transducers, a first acoustic wave in the surface; receiving, by the first transducer of the plurality of transducers, a first acoustic reflection in the surface; transmitting, by a second transducer of the plurality of transducers, a second acoustic wave in the surface; receiving, by the second transducer of the plurality of transducers, a second acoustic reflection in the surface; transmitting, by a third transducer of the plurality of transducers, a third acoustic wave in the surface; receiving, by the third transducer of the plurality of transducers, a third acoustic reflection in the surface; transmitting, by a fourth transducer of the plurality of transducers, a fourth acoustic wave in the surface; and receiving, by the fourth transducer of the plurality of transducers, a fourth acoustic reflection in the surface. 53. The method of claim 52, wherein processing the results of the second acoustic detection scan to determine the location of the object contacting the surface further comprises:
determining a first time of flight between the transmitted first acoustic wave and the received first acoustic reflection; determining a second time of flight between the transmitted second acoustic wave and the received second acoustic reflection; determining a third time of flight between the transmitted third acoustic wave and the received third acoustic reflection; determining a fourth time of flight between the transmitted fourth acoustic wave and the received fourth acoustic reflection; and determining the location of the object based on the first time of flight, the second time of flight, the third time of flight, and the fourth time of flight. 54. The method of claim 49,
wherein performing the first acoustic detection scan comprises:
transmitting an acoustic wave into the surface from a first transducer on a first side of the surface; and
receiving an acoustic reflection corresponding to an edge of the surface opposite the first transducer at a second transducer on the first side of the surface; and
wherein processing the results of the first acoustic detection scan to determine whether the object is contacting the surface comprises:
determining that the object is contacting the surface when the received acoustic reflection corresponding to the edge of the surface is attenuated more than a threshold amount below a baseline received acoustic reflection corresponding to the edge of the surface. 55. The method of claim 49, wherein performing the second acoustic detection scan comprises:
transmitting, by a first transmit transducer of a plurality of transducers, a first acoustic wave in the surface; receiving, by a first receive transducer of the plurality of transducers, a first acoustic reflection in the surface, the first receive transducer collocated with the first transmit transducer; transmitting, by a second transmit transducer of the plurality of transducers, a second acoustic wave in the surface; receiving, by a second receive transducer of the plurality of transducers, a second acoustic reflection in the surface, the second receive transducer collocated with the second transmit transducer; transmitting, by a third transmit transducer of the plurality of transducers, a third acoustic wave in the surface; receiving, by a third receive transducer of the plurality of transducers, a third acoustic reflection in the surface, the third receive transducer collocated with the third transmit transducer; transmitting, by a fourth transmit transducer of the plurality of transducers, a fourth acoustic wave in the surface; and receiving, by a fourth receive transducer of the plurality of transducers, a fourth acoustic reflection in the surface, the fourth receive transducer collocated with the fourth transmit transducer. 56. The method of claim 55, wherein processing the results of the second acoustic detection scan to determine the location of the object contacting the surface further comprises:
determining a first time of flight between the transmitted first acoustic wave and the received first acoustic reflection; determining a second time of flight between the transmitted second acoustic wave and the received second acoustic reflection; determining a third time of flight between the transmitted third acoustic wave and the received third acoustic reflection; determining a fourth time of flight between the transmitted fourth acoustic wave and the received fourth acoustic reflection; and determining the location of the object based on the first time of flight, the second time of flight, the third time of flight, and the fourth time of flight. 57. A non-transitory computer readable storage medium storing instructions, which when executed by a device comprising a surface, a plurality of acoustic transducers coupled to edges of the surface, an acoustic touch sensing circuit, and one or more processors, cause the acoustic touch sensing circuit and the one or more processors to:
in a first state:
perform a first acoustic detection scan of the surface; and
process results of the first acoustic detection scan to determine whether an object is contacting the surface; and
in a second state:
perform a second acoustic detection scan of the surface; and
process results of the second acoustic detection scan to determine a location of the object contacting the surface. 58. The non-transitory computer readable storage medium of claim 57, further comprising instructions, which when executed by the device, cause the acoustic touch sensing circuit to transition from the first state to the second state when the object is determined to be contacting the surface based on processing the results of the first acoustic detection scan. 59. The non-transitory computer readable storage medium of claim 57, further comprising instructions, which when executed by the device, cause the acoustic touch sensing circuit to transition from the second state to the first state when no object is determined to be contacting the surface based on processing the results of the second acoustic detection scan for a threshold period of time or in response to receiving user input to power down a display of the device. 60. The non-transitory computer readable storage medium of claim 57, wherein the first acoustic detection scan and the second acoustic detection scan are performed by the acoustic touch sensing circuit. 61. The non-transitory computer readable storage medium of claim 57, wherein processing the results of the first acoustic detection scan is performed by a first processor of the one or more processors and processing the results of the second acoustic detection scan is performed by a second processor of the one or more processors. 62. An acoustic touch sensing system, comprising:
an acoustic touch sensing circuit configured to perform a touch detection scan of a surface; wherein the acoustic touch sensing circuit is configured to generate results indicative of an object touching the surface at a first location when an input device contacts the surface at the first location and generate results indicative of no object touching the surface at a second location when a liquid contacts the surface at the second location. 63. An electronic device comprising:
a motion detector; an antenna element; an acoustic touch sensor capable of detecting whether an object is contacting a surface of the electronic device proximate to the antenna element; and control circuitry configured to:
determine whether the electronic device is moving;
determine whether the object is contacting the surface of the electronic device proximate to the antenna element;
in accordance with a determination that the electronic device is not moving, operate the antenna element at a nominal power level; and
in accordance with a determination that the electronic device is moving and that the object is contacting the surface of the electronic device proximate to the antenna element, operate the antenna element at a reduced power level, lower than the nominal power level. 64. The electronic device of claim 63, wherein the control circuitry is further configured to:
in accordance with a determination that the electronic device is moving and that the object is not contacting the surface of the electronic device proximate to the antenna element, operate the antenna element at the nominal power level. 65. The electronic device of claim 63, wherein determining whether the electronic device is moving comprises determining whether motion detected by the motion detector is consistent with motion of a human body. 66. The electronic device of claim 63, wherein the acoustic touch sensor comprises:
an ultrasonic transducer; and a plurality of reflective barriers on a surface of the electronic device, proximate to the antenna element, wherein the ultrasonic transducer transmits an acoustic wave toward the plurality of reflective barriers, and receives reflected signals from the barriers. 67. A method comprising:
determining whether an electronic device is moving; determining whether an object is contacting a surface of the electronic device proximate to an antenna element; in accordance with a determination that the electronic device is not moving, operating the antenna element at a nominal power level; and in accordance with a determination that the electronic device is moving and that the object is contacting the surface of the electronic device proximate to the antenna element, operating the antenna element at a reduced power level, lower than the nominal power level. 68. The method of claim 67, further comprising, in accordance with a determination that the electronic device is moving and that the object is not contacting the surface of the electronic device proximate to the antenna element, operating the antenna element at the nominal power level. 69. The method of claim 67, wherein determining whether the electronic device is moving comprises determining whether motion detected by a motion detector of the electronic device is consistent with motion of a human body. 70. An electronic device comprising:
a configurable antenna element; an acoustic touch sensor capable of detecting whether an object is contacting a surface of the electronic device proximate to the configurable antenna element; and control circuitry configured to, in accordance with a determination that the object is contacting the surface of the electronic device proximate to the configurable antenna element, adjust a parameter of the configurable antenna element to compensate for presence of the object. 71. The electronic device of claim 70, wherein the acoustic touch sensor is configured to detect contact directly in a transmission path of the configurable antenna element. 72. The electronic device of claim 70, wherein the acoustic touch sensor is capable of detecting whether an object is contacting a portion of the surface of the electronic device within a metal exclusion zone of the surface of the device. 73. The electronic device of claim 72, wherein the metal exclusion zone corresponds to a location of the configurable antenna element within the electronic device. 74. The electronic device of claim 70, wherein the acoustic touch sensor comprises:
an ultrasonic transducer; and a plurality of reflective barriers on a surface of the electronic device, proximate to the configurable antenna element, wherein the ultrasonic transducer transmits an acoustic wave toward the plurality of reflective barriers, and receives reflected signals from the barriers. 75. The electronic device of claim 74, wherein a first reflective barrier of the plurality of reflective barriers exhibits an anisotropic reflectance characteristic that depends upon a direction of travel of an acoustic wave encountering the first reflective barrier. 76. An electronic device comprising:
transmit circuitry configured to provide a stimulation signal to a transducer coupled to a surface of the device; a plurality of surface discontinuities on the surface of the device located proximate to the transducer; receive circuitry configured to capture a received signal based on motion of the transducer; and control circuitry configured to:
couple the transmit circuitry to the transducer;
couple the receive circuitry to the transducer;
stimulate the transducer to produce an excitation in the surface of the device proximate to the transducer;
capture a first reflected energy at a first time based on a first distance between the transducer and a first surface discontinuity of the plurality of surface discontinuities; and
capture a second reflected energy at a second time based on a second distance between the transducer and a second surface discontinuity of the plurality of surface discontinuities,
wherein the second distance is greater than the first distance. 77. The electronic device of claim 76, wherein capturing the first reflected energy comprises integrating received energy during a first integration window based on a duration of the stimulation of the transducer and a distance between the transducer and the first surface discontinuity. 78. The electronic device of claim 77, wherein capturing the second reflected energy comprises integrating received energy during a second integration window having a same duration as the first integration window and beginning at a time after the first integration window begins. 79. The electronic device of claim 78, wherein the first integration window and the second integration window are non-overlapping. 80. The electronic device of claim 76, further comprising:
a filter configured to shape the stimulation signal based on a transfer function of a system formed by the transducer, the surface of the device, and the plurality of surface discontinuities on the surface of the device located proximate to the transducer. 81. The electronic device of claim 76, wherein the transducer is coupled to a cover glass of the device. 82. The electronic device of claim 81, wherein the cover glass comprises a transparent display region and a non-transparent border region, the transducer being coupled to the non-transparent border region. 83. The electronic device of claim 76, wherein the transducer is coupled to a metal housing of the device. 84. The electronic device of claim 76, further comprising a processor capable of determining a location of contact by an object with the surface of the device based on the first reflected energy and the second reflected energy. 85. The electronic device of claim 84, wherein the determination is based on a comparison between baseline values of the first reflected energy and the second reflected energy and captured values of the first reflected energy and the second reflected energy. 86. The electronic device of claim 85, wherein a determination that a difference between the baseline value of the first reflected energy and the captured value of the first reflected energy exceeds a threshold difference corresponds to a contact by the object between the transducer and the first surface discontinuity of the plurality of surface discontinuities. 87. The electronic device of claim 76, further comprising:
a second transducer coupled to the surface of the device; and a second plurality of surface discontinuities on the surface of the device located proximate to the second transducer; wherein the control circuitry is further configured to:
decouple the transmit circuitry from the transducer;
couple the transmit circuitry to the second transducer;
decouple the receive circuitry from the transducer;
couple the receive circuitry to the second transducer;
stimulate the second transducer to produce an excitation in the surface of the device proximate to the second transducer;
capture a third reflected energy at a third time based on a distance between the second transducer and a third surface discontinuity of the second plurality of surface discontinuities; and
capture a fourth reflected energy at a fourth time based on a distance between the second transducer and a fourth surface discontinuity of the second plurality of surface discontinuities. 88. The electronic device of claim 87, further comprising:
a demultiplexer for selectively coupling and uncoupling the transmit circuitry from the transducer; and a multiplexer for selectively coupling and decoupling the receive circuitry from the transducer. 89. The electronic device of claim 88, wherein the demultiplexer connects non-selected transducers to ground. | 2,600 |
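The two-state sensing flow recited in claims 49 and 57-59, together with the four-transducer time-of-flight location computation of claims 52-53, can be sketched in Python. This is an illustrative sketch only, not taken from the patent: the function names, baseline and threshold values, wave speed, and the rectangular surface geometry are all assumptions.

```python
# Illustrative sketch (not from the patent) of the two-state acoustic touch
# sensing flow (claims 49, 57-59) and time-of-flight localization (claims 52-53).
# All numeric values and the rectangular geometry are assumed.

BASELINE = 1.0    # assumed baseline amplitude of the edge reflection (claim 50)
THRESHOLD = 0.3   # assumed attenuation threshold for declaring a touch

def object_present(edge_reflection: float) -> bool:
    """First (low-power) detection scan: a touch attenuates the reflection
    from the opposite edge by more than a threshold below the baseline."""
    return (BASELINE - edge_reflection) > THRESHOLD

def locate(tofs, speed=3000.0, width=0.04, height=0.04):
    """Second detection scan: four round-trip times of flight, one per edge
    transducer (left, right, bottom, top), converted to one-way distances
    and combined into an (x, y) touch position on a rectangular surface."""
    d_left, d_right, d_bottom, d_top = (t * speed / 2.0 for t in tofs)
    x = (d_left + (width - d_right)) / 2.0    # average the two x estimates
    y = (d_bottom + (height - d_top)) / 2.0   # average the two y estimates
    return x, y

def sensing_step(state: str, scan: dict):
    """State machine of claims 57-59: remain in the presence-scan state until
    an object is detected, then run location scans until the touch lifts."""
    if state == "PRESENCE":
        if object_present(scan["edge_reflection"]):
            return "LOCATE", None   # claim 58: transition on detected presence
        return "PRESENCE", None
    return "LOCATE", locate(scan["tofs"])
```

In this sketch the first scan is a cheap amplitude comparison suitable for an always-on auxiliary processor, while the full four-transducer localization runs only after presence is confirmed, mirroring the division of labor described in claims 37-41.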
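The antenna power control recited in claims 63-64 and 67-68 reduces transmit power only when the device is moving and an object is detected near the antenna; in every other case it stays at nominal power. A minimal sketch of that condition (the milliwatt values are placeholder assumptions, not from the claims):

```python
def antenna_power(moving: bool, touching: bool,
                  nominal_mw: float = 100.0, reduced_mw: float = 25.0) -> float:
    """Claims 63-64 / 67-68: operate at reduced power only when the device is
    moving AND an object contacts the surface proximate to the antenna;
    otherwise operate at the nominal power level. Values are assumptions."""
    return reduced_mw if (moving and touching) else nominal_mw
```

Gating on both signals means a stationary device, or a moving device with no nearby body part, never sacrifices transmit power, which is the point of combining the motion detector with the acoustic touch sensor in claim 63.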
10,789 | 10,789 | 16,126,871 | 2,689 | An approach is provided for generating a passenger-based driving profile. The approach involves collecting vehicle sensor data of a vehicle carrying a user as a passenger. The vehicle sensor data indicates at least one driving behavior of the vehicle. The approach also involves collecting user sensor data, user input data, or a combination thereof indicating a reaction of the user to the at least one driving behavior. The approach further involves including or excluding the at least one driving behavior in a passenger profile for the user based on the reaction of the user. | 1. A computer-implemented method comprising:
collecting vehicle sensor data of a vehicle carrying a user as a passenger, wherein the vehicle sensor data indicates at least one driving behavior of the vehicle while the user is riding in the vehicle as the passenger; collecting user sensor data, user input data, or a combination thereof indicating a reaction of the user to the at least one driving behavior; and including the at least one driving behavior in a passenger profile for the user as the passenger based on the reaction of the user. 2. The method of claim 1, further comprising:
determining a comfort level, a discomfort level, or a combination thereof of the user based on the reaction, wherein the including or the excluding of the at least one driving behavior in the passenger profile is further based on the comfort level, the discomfort level, or a combination thereof. 3. The method of claim 1, wherein the user sensor data is collected from one or more sensors of the vehicle, a device associated with the user, or a combination thereof, and
wherein the vehicle is an autonomous or highly-assisted vehicle. 4. The method of claim 3, wherein the one or more sensors include at least one of:
a camera sensor configured to detect a facial movement, an eye movement, a body movement, or a combination thereof of the user that is indicative of the reaction; a heart rate sensor configured to detect a heart rate, a change in the heart rate, or a combination thereof of the user that is indicative of the reaction; a touch sensor configured to detect a touch of a vehicle component by the user that is indicative of the reaction; and a microphone in combination with a speech recognition module configured to sample speech or sound from the user that is indicative of the reaction. 5. The method of claim 1, wherein the user input data is received via a user interface device of the vehicle to indicate the reaction. 6. The method of claim 1, wherein the at least one driving behavior is included in the passenger profile with respect to a contextual parameter. 7. The method of claim 6, further comprising:
processing one or more features of the at least one driving behavior, the reaction, the vehicle, the contextual parameter, or a combination thereof using a machine learning model to determine one or more driving scenarios to include or exclude in the passenger profile. 8. The method of claim 6, wherein the contextual parameter includes a road link, a weather condition, a traffic condition, an in-vehicle context, an external context, a user activity, or a combination thereof associated with the at least one driving behavior. 9. The method of claim 6, wherein the contextual parameter includes a visibility of oncoming traffic, a line-of-sight of the user, or a combination thereof. 10. (canceled) 11. An apparatus comprising:
at least one processor; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following,
collect vehicle sensor data of a vehicle carrying a user as a passenger, wherein the vehicle sensor data indicates at least one driving behavior of the vehicle while the user is riding in the vehicle as the passenger;
collect user sensor data, user input data, or a combination thereof indicating a reaction of the user to the at least one driving behavior; and
include the at least one driving behavior in a passenger profile for the user as a passenger based on the reaction of the user. 12. The apparatus of claim 11, wherein the apparatus is further caused to:
determine a comfort level, a discomfort level, or a combination thereof of the user based on the reaction, wherein the including or the excluding of the at least one driving behavior in the passenger profile is further based on the comfort level, the discomfort level, or a combination thereof. 13. The apparatus of claim 11, wherein the user sensor data is collected from one or more sensors of the vehicle, a device associated with the user, or a combination thereof; and wherein the one or more sensors include at least one of:
a camera sensor configured to detect a facial movement, an eye movement, a body movement, or a combination thereof of the user that is indicative of the reaction; a heart rate sensor configured to detect a heart rate, a change in the heart rate, or a combination thereof of the user that is indicative of the reaction; a touch sensor configured to detect a touch of a vehicle component by the user that is indicative of the reaction; and a microphone in combination with a speech recognition module configured to sample speech or sound from the user that is indicative of the reaction. 14. The apparatus of claim 11, wherein the at least one driving behavior is included in the passenger profile with respect to a contextual parameter. 15. The apparatus of claim 14, wherein the apparatus is further caused to:
process one or more features of the at least one driving behavior, the reaction, the vehicle, the contextual parameter, or a combination thereof using a machine learning model to determine one or more driving scenarios to include or exclude in the passenger profile. 16. A non-transitory computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to perform:
collecting vehicle sensor data of a vehicle carrying a user as a passenger, wherein the vehicle sensor data indicates at least one driving behavior of the vehicle while the user is riding in the vehicle as the passenger; collecting user sensor data, user input data, or a combination thereof indicating a reaction of the user to the at least one driving behavior; and including the at least one driving behavior in a passenger profile for the user as a passenger based on the reaction of the user. 17. The non-transitory computer-readable storage medium of claim 16, wherein the apparatus is caused to further perform:
determining a comfort level, a discomfort level, or a combination thereof of the user based on the reaction, wherein the including or the excluding of the at least one driving behavior in the passenger profile is further based on the comfort level, the discomfort level, or a combination thereof. 18. The non-transitory computer-readable storage medium of claim 16, wherein the user sensor data is collected from one or more sensors of the vehicle, a device associated with the user, or a combination thereof; and wherein the one or more sensors include at least one of:
a camera sensor configured to detect a facial movement, an eye movement, a body movement, or a combination thereof of the user that is indicative of the reaction; a heart rate sensor configured to detect a heart rate, a change in the heart rate, or a combination thereof of the user that is indicative of the reaction; a touch sensor configured to detect a touch of a vehicle component by the user that is indicative of the reaction; and a microphone in combination with a speech recognition module configured to sample speech or sound from the user that is indicative of the reaction. 19. The non-transitory computer-readable storage medium of claim 16, wherein the at least one driving behavior is included in the passenger profile with respect to a contextual parameter. 20. (canceled) 21. The method of claim 1, further comprising at least one of:
determining that the passenger profile of the user is compatible with at least one driver profile of at least one candidate driver above a threshold value, and recommending the user and the at least one candidate driver to share one vehicle, and determining that the driver profile of the user is compatible with at least one passenger profile of at least one candidate passenger above a threshold value, and recommending the user and the at least one candidate passenger to share one vehicle. 22. The method of claim 1, wherein the passenger profile is a data structure that relates the at least one driving behavior to data indicating the reaction of the user to the at least one driving behavior.
collect vehicle sensor data of a vehicle carrying a user as a passenger, wherein the vehicle sensor data indicates at least one driving behavior of the vehicle while the user is riding in the vehicle as the passenger;
collect user sensor data, user input data, or a combination thereof indicating a reaction of the user to the at least one driving behavior; and
include the at least one driving behavior in a passenger profile for the user as a passenger based on the reaction of the user. 12. The apparatus of claim 11, wherein the apparatus is further caused to:
determine a comfort level, a discomfort level, or a combination thereof of the user based on the reaction, wherein the including or the excluding of the at least one driving behavior in the passenger profile is further based on the comfort level, the discomfort level, or a combination thereof. 13. The apparatus of claim 11, wherein the user sensor data is collected from one or more sensors of the vehicle, a device associated with the user, or a combination thereof; and wherein the one or more sensors include at least one of:
a camera sensor configured to detect a facial movement, an eye movement, a body movement, or a combination thereof of the user that is indicative of the reaction; a heart rate sensor configured to detect a heart rate, a change in the heart rate, or a combination thereof of the user that is indicative of the reaction; a touch sensor configured to detect a touch of a vehicle component by the user that is indicative of the reaction; and a microphone in combination with a speech recognition module configured to sample a recognition speech or sound from the user that is indicative of the reaction. 14. The apparatus of claim 11, wherein the at least one driving behavior is included in the passenger profile with respect to a contextual parameter. 15. The apparatus of claim 14, wherein the apparatus is further caused to:
process one or more features of the at least one driving behavior, the reaction, the vehicle, the contextual parameter, or a combination thereof using a machine learning model to determine one or more driving scenarios to include or exclude in the passenger profile. 16. A non-transitory computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to perform:
collecting vehicle sensor data of a vehicle carrying a user as a passenger, wherein the vehicle sensor data indicates at least one driving behavior of the vehicle while the user is riding in the vehicle as the passenger; collecting user sensor data, user input data, or a combination thereof indicating a reaction of the user to the at least one driving behavior; and including the at least one driving behavior in a passenger profile for the user as a passenger based on the reaction of the user. 17. The non-transitory computer-readable storage medium of claim 16, wherein the apparatus is caused to further perform:
determining a comfort level, a discomfort level, or a combination thereof of the user based on the reaction, wherein the including or the excluding of the at least one driving behavior in the passenger profile is further based on the comfort level, the discomfort level, or a combination thereof. 18. The non-transitory computer-readable storage medium of claim 16, wherein the user sensor data is collected from one or more sensors of the vehicle, a device associated with the user, or a combination thereof; and wherein the one or more sensors include at least one of:
a camera sensor configured to detect a facial movement, an eye movement, a body movement, or a combination thereof of the user that is indicative of the reaction; a heart rate sensor configured to detect a heart rate, a change in the heart rate, or a combination thereof of the user that is indicative of the reaction; a touch sensor configured to detect a touch of a vehicle component by the user that is indicative of the reaction; and a microphone in combination with a speech recognition module configured to sample a recognition speech or sound from the user that is indicative of the reaction. 19. The non-transitory computer-readable storage medium of claim 16, wherein the at least one driving behavior is included in the passenger profile with respect to a contextual parameter. 20. (canceled) 21. The method of claim 1, further comprising at least one of:
determining that the passenger profile of the user is compatible with at least one driver profile of at least one candidate driver above a threshold value, and recommending the user and the at least one candidate driver to share one vehicle, and determining that the driver profile of the user is compatible with at least one passenger profile of at least one candidate passenger above a threshold value, and recommending the user and the at least one candidate passenger to share one vehicle. 22. The method of claim 1, wherein the passenger profile is a data structure that relates the at least one driving behavior to data indicating the reaction of the user to the at least one driving behavior. | 2,600 |
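The passenger-profile claims above (claims 1, 2 and 22) describe a data structure relating each observed driving behavior to the user's reaction, with behaviors included or excluded based on an inferred comfort level. The following is a minimal illustrative sketch of that idea; all names, sensor fields, and the 0.5 comfort threshold are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claim-22 "passenger profile" data structure:
# it relates each driving behavior to the user's recorded reaction, and
# includes/excludes behaviors based on a comfort score (claim 2).
# All names and the 0.5 threshold are illustrative assumptions.

@dataclass
class Reaction:
    heart_rate_delta: float   # e.g. from a heart rate sensor (claim 4)
    gripped_handle: bool      # e.g. from a touch sensor (claim 4)
    comfort_score: float      # 0.0 (distressed) .. 1.0 (comfortable)

@dataclass
class PassengerProfile:
    behaviors: dict = field(default_factory=dict)  # behavior -> Reaction

    def record(self, behavior: str, reaction: Reaction, threshold: float = 0.5):
        """Include the behavior only if the reaction indicates comfort."""
        if reaction.comfort_score >= threshold:
            self.behaviors[behavior] = reaction  # include in profile
        else:
            self.behaviors.pop(behavior, None)   # exclude from profile

profile = PassengerProfile()
profile.record("hard_braking", Reaction(12.0, True, 0.2))   # excluded
profile.record("gentle_merge", Reaction(1.0, False, 0.9))   # included
print(sorted(profile.behaviors))  # ['gentle_merge']
```

This keeps the claim's structure (behavior-to-reaction mapping with conditional inclusion) without asserting anything about the actual implementation.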
10,790 | 10,790 | 16,023,798 | 2,643 | The present disclosure relates to a communication method and system for converging a 5th-Generation (5G) communication system for supporting higher data rates beyond a 4th-Generation (4G) system with a technology for Internet of Things (IoT). The present disclosure may be applied to intelligent services based on the 5G communication technology and the IoT-related technology, such as smart home, smart building, smart city, smart car, connected car, health care, digital education, smart retail, security and safety services. An information format and apparatus used by a base station to make a scheduling decision when the base station allocates resource to a terminal in a mobile communication system are provided. Operations of a terminal to report a maximum transmission power accurately to the base station in a scheduling process are also provided. A method for calculating a maximum transmit power in a constant manner regardless of a channel status is also provided. | 1. A method for reporting a power headroom (PH) by a terminal in a mobile communication system, the method comprising:
receiving information indicating a first type power headroom report (PHR); identifying whether a power backoff has changed more than a threshold; determining a PHR transmission when the power backoff has changed more than the threshold; and transmitting a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX. 2. The method of claim 1, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 3. The method of claim 1,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 4. The method of claim 1, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 5. The method of claim 4, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 6. The method of claim 4, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. 7. A terminal for reporting a power headroom (PH) in a mobile communication system, the terminal comprising:
a transceiver; and a controller coupled with the transceiver and configured to control to:
receive information indicating a first type power headroom report (PHR),
identify whether a power backoff has changed more than a threshold,
determine a PHR transmission when the power backoff has changed more than the threshold, and
transmit a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX. 8. The terminal of claim 7, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 9. The terminal of claim 7,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 10. The terminal of claim 7, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 11. The terminal of claim 10, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 12. The terminal of claim 10, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. 13. A method for receiving a power headroom (PH) by a base station in a mobile communication system, the method comprising:
transmitting, to a terminal, information indicating a first type power headroom report (PHR); and receiving, from the terminal, a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX, wherein it is identified by the terminal whether a power backoff has changed more than a threshold, and wherein a transmission of the PHR is determined by the terminal when the power backoff has changed more than the threshold. 14. The method of claim 13, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 15. The method of claim 13,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 16. The method of claim 13, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 17. The method of claim 13, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 18. The method of claim 13, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. 19. A base station for receiving a power headroom (PH) in a mobile communication system, the base station comprising:
a transceiver; and a controller coupled with the transceiver and configured to control to:
transmit, to a terminal, information indicating a first type power headroom report (PHR), and
receive, from the terminal, a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX,
wherein it is identified by the terminal whether a power backoff has changed more than a threshold, and wherein a transmission of the PHR is determined by the terminal when the power backoff has changed more than the threshold. 20. The base station of claim 19, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 21. The base station of claim 19,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 22. The base station of claim 19, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 23. The base station of claim 22, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 24. The base station of claim 22, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. | The present disclosure relates to a communication method and system for converging a 5th-Generation (5G) communication system for supporting higher data rates beyond a 4th-Generation (4G) system with a technology for Internet of Things (IoT). The present disclosure may be applied to intelligent services based on the 5G communication technology and the IoT-related technology, such as smart home, smart building, smart city, smart car, connected car, health care, digital education, smart retail, security and safety services. An information format and apparatus used by a base station to make a scheduling decision when the base station allocates resource to a terminal in a mobile communication system are provided. 
Operations of a terminal to report a maximum transmission power accurately to the base station in a scheduling process are also provided. A method for calculating a maximum transmit power in a constant manner regardless of a channel status is also provided.1. A method for reporting a power headroom (PH) by a terminal in a mobile communication system, the method comprising:
receiving information indicating a first type power headroom report (PHR); identifying whether a power backoff has changed more than a threshold; determining a PHR transmission when the power backoff has changed more than the threshold; and transmitting a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX. 2. The method of claim 1, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 3. The method of claim 1,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 4. The method of claim 1, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 5. The method of claim 4, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 6. The method of claim 4, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. 7. A terminal for reporting a power headroom (PH) in a mobile communication system, the terminal comprising:
a transceiver; and a controller coupled with the transceiver and configured to control to:
receive information indicating a first type power headroom report (PHR),
identify whether a power backoff has changed more than a threshold,
determine a PHR transmission when the power backoff has changed more than the threshold, and
transmit a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX. 8. The terminal of claim 7, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 9. The terminal of claim 7,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 10. The terminal of claim 7, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 11. The terminal of claim 10, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 12. The terminal of claim 10, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. 13. A method for receiving a power headroom (PH) by a base station in a mobile communication system, the method comprising:
transmitting, to a terminal, information indicating a first type power headroom report (PHR); and receiving, from the terminal, a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX, wherein it is identified by the terminal whether a power backoff has changed more than a threshold, and wherein a transmission of the PHR is determined by the terminal when the power backoff has changed more than the threshold. 14. The method of claim 13, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 15. The method of claim 13,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 16. The method of claim 13, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 17. The method of claim 13, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 18. The method of claim 13, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. 19. A base station for receiving a power headroom (PH) in a mobile communication system, the base station comprising:
a transceiver; and a controller coupled with the transceiver and configured to control to:
transmit, to a terminal, information indicating a first type power headroom report (PHR), and
receive, from the terminal, a PHR including a maximum transmit power (PCMAX) and a power headroom (PH) associated with at least one activated carrier using the first type PHR, the PH being obtained based on the PCMAX,
wherein it is identified by the terminal whether a power backoff has changed more than a threshold, and wherein a transmission of the PHR is determined by the terminal when the power backoff has changed more than the threshold. 20. The base station of claim 19, wherein, if more than one serving cell is configured, the PCMAX and the PH are determined for each serving cell. 21. The base station of claim 19,
wherein the first type PHR includes at least one PH field and at least one PCMAX field associated with the at least one activated carrier, and wherein at least one PCMAX field respectively corresponds to at least one PH field. 22. The base station of claim 19, wherein the PCMAX value is determined between a highest value and a lowest value of the PCMAX value. 23. The base station of claim 22, wherein the highest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal. 24. The base station of claim 22, wherein the lowest value of the PCMAX value is determined based on a maximum uplink transmit power available in a cell within which the terminal is located and a PCMAX derived physical characteristic of the terminal, a first value determined based on a transmission resource amount allocated to the terminal, a second value defined by a frequency band for an uplink transmission or a local characteristic or an uplink transmission bandwidth, and a third value for allowing an additional transmit power adjustment when the uplink transmission is performed around edges of the frequency band. | 2,600 |
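Claim 1 of the power-headroom patent above describes a trigger (power backoff changed by more than a threshold) and a report carrying PCMAX and a PH derived from PCMAX for each activated carrier. The sketch below illustrates that flow only; the function names, dB values, and report layout are assumptions and are not drawn from any 3GPP specification text.

```python
# Illustrative sketch of the claim-1 PHR trigger and report content: a first
# type PHR with a PH field and a PCMAX field per activated carrier.
# All names and the 3.0 dB threshold are illustrative assumptions.

def build_phr(carriers, last_backoff_db, current_backoff_db, threshold_db=3.0):
    """Return a PHR (per-carrier field list) if the power backoff has changed
    by more than the threshold since the last report, else None."""
    if abs(current_backoff_db - last_backoff_db) <= threshold_db:
        return None  # trigger condition not met; no report
    report = []
    for name, (pcmax_dbm, tx_power_dbm) in carriers.items():
        ph_db = pcmax_dbm - tx_power_dbm  # PH obtained based on PCMAX
        report.append({"carrier": name, "PCMAX": pcmax_dbm, "PH": ph_db})
    return report

# Two activated carriers: (PCMAX in dBm, current transmit power in dBm)
carriers = {"cc0": (23.0, 18.0), "cc1": (20.0, 19.5)}
print(build_phr(carriers, last_backoff_db=0.0, current_backoff_db=5.0))
# two entries, with PH of 5.0 dB and 0.5 dB respectively
```

Note how each PCMAX field corresponds to a PH field for the same carrier, mirroring the structure recited in claims 3, 9, 15 and 21.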
10,791 | 10,791 | 14,515,483 | 2,615 | A method of constructing a patient-specific orthopedic implant comprising: (a) comparing a patient-specific abnormal bone model, derived from an actual anatomy of a patient's abnormal bone, with a reconstructed patient-specific bone model, also derived from the anatomy of the patient's bone, where the reconstructed patient-specific bone model reflects a normalized anatomy of the patient's bone, and where the patient-specific abnormal bone model reflects an actual anatomy of the patient's bone including at least one of a partial bone, a deformed bone, and a shattered bone, wherein the patient-specific abnormal bone model comprises at least one of a patient-specific abnormal point cloud and a patient-specific abnormal bone surface model, and wherein the reconstructed patient-specific bone model comprises at least one of a reconstructed patient-specific point cloud and a reconstructed patient-specific bone surface model; (b) optimizing one or more parameters for a patient-specific orthopedic implant to be mounted to the patient's abnormal bone using data output from comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model; and, (c) generating an electronic design file for the patient-specific orthopedic implant taking into account the one or more parameters. | 1. A method of constructing a patient-specific orthopedic implant comprising:
comparing a patient-specific abnormal bone model, derived from an actual anatomy of a patient's abnormal bone, with a reconstructed patient-specific bone model, also derived from the anatomy of the patient's bone, where the reconstructed patient-specific bone model reflects a normalized anatomy of the patient's bone, and where the patient-specific abnormal bone model reflects an actual anatomy of the patient's bone including at least one of a partial bone, a deformed bone, and a shattered bone, wherein the patient-specific abnormal bone model comprises at least one of a patient-specific abnormal point cloud and a patient-specific abnormal bone surface model, and wherein the reconstructed patient-specific bone model comprises at least one of a reconstructed patient-specific point cloud and a reconstructed patient-specific bone surface model; optimizing one or more parameters for a patient-specific orthopedic implant to be mounted to the patient's abnormal bone using data output from comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model; and, generating an electronic design file for the patient-specific orthopedic implant taking into account the one or more parameters. 2. The method of claim 1, further comprising fabricating the patient-specific implant using the electronic design file. 3. The method of claim 1, further comprising:
comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model to identify missing bone or deformed bone from the patient-specific abnormal bone model; and, localizing the missing bone or deformed bone onto the reconstructed patient-specific bone model. 4. The method of claim 3, further comprising:
generating the patient-specific abnormal bone model from data representative of the patient's abnormal bone; and, generating the reconstructed patient-specific bone model from data representative of the patient's abnormal bone and from data from a statistical atlas, where the statistical atlas data comprises at least one of a point cloud and a surface model of a normal bone analogous to the patient's abnormal bone. 5. The method of claim 4, wherein the data representative of the patient's abnormal bone comprises at least one of magnetic resonance images, computerized tomography images, X-ray images, and ultrasound images. 6. The method of claim 4, wherein the statistical atlas data is derived from at least one of magnetic resonance images, computerized tomography images, X-ray images, and ultrasound images of the normal bone. 7. The method of claim 3, wherein:
the identified missing bone or the deformed bone comprises a set of bounding points; and, localizing the missing bone or the deformed bone onto the reconstructed patient-specific bone model includes associating the set of bounding points with the reconstructed patient-specific bone model. 8. The method of claim 3, wherein comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model to identify missing bone or deformed bone from the patient-specific abnormal bone model includes outputting at least two lists of data, where the at least two lists of data include a first list identifying the missing bone or the deformed bone, and a second list identifying bone in common between the patient-specific abnormal bone model and the reconstructed patient-specific bone model. 9. The method of claim 8, wherein:
the first list comprises vertices belonging to the missing bone or the deformed bone from the patient-specific abnormal bone model; and, the second list comprises vertices belonging to bone in common between the patient-specific abnormal bone model and the reconstructed patient-specific bone model. 10. The method of claim 1, further comprising determining one or more patient-specific orthopedic implant fixation locations using data from the patient-specific abnormal bone model and data from the reconstructed patient-specific bone model. 11. The method of claim 10, wherein determining one or more patient-specific orthopedic implant fixation locations includes excluding any location where the missing bone or the deformed bone has been identified. 12. The method of claim 1, wherein optimizing one or more parameters for a patient-specific orthopedic implant includes using an implant parameterizing template to establish general parameters that are thereafter optimized using the reconstructed patient-specific bone model. 13. The method of claim 12, wherein the parameters include at least one of angle parameters, depth parameters, curvature parameters, and fixation device location parameters. 14. The method of claim 1, further comprising constructing an initial iteration of a surface model of the patient-specific orthopedic implant. 15. The method of claim 14, wherein constructing the initial iteration of the surface model includes combining contours from the patient-specific abnormal bone model and contours from the reconstructed patient-specific bone model. 16. The method of claim 14, wherein constructing the initial iteration of the surface model includes accounting for an intended implantation location for the patient-specific orthopedic implant. 17. The method of claim 14, further comprising constructing a subsequent iteration of the surface model of the patient-specific orthopedic implant. 18. 
The method of claim 17, wherein constructing the subsequent iteration of the surface model of the patient-specific orthopedic implant includes a manual review of the subsequent iteration of the surface model and the reconstructed patient-specific bone model to discern if a further iteration of the surface model is required. 19. The method of claim 1, wherein the electronic design file includes at least one of a computer aided design file, a computer numerical control file, and a rapid manufacturing instruction file. 20. The method of claim 1, further comprising generating an electronic design file for a patient-specific implant placement guide using the one or more parameters optimized for the patient-specific orthopedic implant. 21-70. (canceled) | A method of constructing a patient-specific orthopedic implant comprising: (a) comparing a patient-specific abnormal bone model, derived from an actual anatomy of a patient's abnormal bone, with a reconstructed patient-specific bone model, also derived from the anatomy of the patient's bone, where the reconstructed patient-specific bone model reflects a normalized anatomy of the patient's bone, and where the patient-specific abnormal bone model reflects an actual anatomy of the patient's bone including at least one of a partial bone, a deformed bone, and a shattered bone, wherein the patient-specific abnormal bone model comprises at least one of a patient-specific abnormal point cloud and a patient-specific abnormal bone surface model, and wherein the reconstructed patient-specific bone model comprises at least one of a reconstructed patient-specific point cloud and a reconstructed patient-specific bone surface model; (b) optimizing one or more parameters for a patient-specific orthopedic implant to be mounted to the patient's abnormal bone using data output from comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model; and, (c) generating an electronic design file for the 
patient-specific orthopedic implant taking into account the one or more parameters.1. A method of constructing a patient-specific orthopedic implant comprising:
comparing a patient-specific abnormal bone model, derived from an actual anatomy of a patient's abnormal bone, with a reconstructed patient-specific bone model, also derived from the anatomy of the patient's bone, where the reconstructed patient-specific bone model reflects a normalized anatomy of the patient's bone, and where the patient-specific abnormal bone model reflects an actual anatomy of the patient's bone including at least one of a partial bone, a deformed bone, and a shattered bone, wherein the patient-specific abnormal bone model comprises at least one of a patient-specific abnormal point cloud and a patient-specific abnormal bone surface model, and wherein the reconstructed patient-specific bone model comprises at least one of a reconstructed patient-specific point cloud and a reconstructed patient-specific bone surface model; optimizing one or more parameters for a patient-specific orthopedic implant to be mounted to the patient's abnormal bone using data output from comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model; and, generating an electronic design file for the patient-specific orthopedic implant taking into account the one or more parameters. 2. The method of claim 1, further comprising fabricating the patient-specific implant using the electronic design file. 3. The method of claim 1, further comprising:
comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model to identify missing bone or deformed bone from the patient-specific abnormal bone model; and, localizing the missing bone or deformed bone onto the reconstructed patient-specific bone model. 4. The method of claim 3, further comprising:
generating the patient-specific abnormal bone model from data representative of the patient's abnormal bone; and, generating the reconstructed patient-specific bone model from data representative of the patient's abnormal bone and from data from a statistical atlas, where the statistical atlas data comprises at least one of a point cloud and a surface model of a normal bone analogous to the patient's abnormal bone. 5. The method of claim 4, wherein the data representative of the patient's abnormal bone comprises at least one of magnetic resonance images, computerized tomography images, X-ray images, and ultrasound images. 6. The method of claim 4, wherein the statistical atlas data is derived from at least one of magnetic resonance images, computerized tomography images, X-ray images, and ultrasound images of the normal bone. 7. The method of claim 3, wherein:
the identified missing bone or the deformed bone comprises a set of bounding points; and, localizing the missing bone or the deformed bone onto the reconstructed patient-specific bone model includes associating the set of bounding points with the reconstructed patient-specific bone model. 8. The method of claim 3, wherein comparing the patient-specific abnormal bone model to the reconstructed patient-specific bone model to identify missing bone or deformed bone from the patient-specific abnormal bone model includes outputting at least two lists of data, where the at least two lists of data include a first list identifying the missing bone or the deformed bone, and a second list identifying bone in common between the patient-specific abnormal bone model and the reconstructed patient-specific bone model. 9. The method of claim 8, wherein:
the first list comprises vertices belonging to the missing bone or the deformed bone from the patient-specific abnormal bone model; and, the second list comprises vertices belonging to bone in common between the patient-specific abnormal bone model and the reconstructed patient-specific bone model. 10. The method of claim 1, further comprising determining one or more patient-specific orthopedic implant fixation locations using data from the patient-specific abnormal bone model and data from the reconstructed patient-specific bone model. 11. The method of claim 10, wherein determining one or more patient-specific orthopedic implant fixation locations includes excluding any location where the missing bone or the deformed bone has been identified. 12. The method of claim 1, wherein optimizing one or more parameters for a patient-specific orthopedic implant includes using an implant parameterizing template to establish general parameters that are thereafter optimized using the reconstructed patient-specific bone model. 13. The method of claim 12, wherein the parameters include at least one of angle parameters, depth parameters, curvature parameters, and fixation device location parameters. 14. The method of claim 1, further comprising constructing an initial iteration of a surface model of the patient-specific orthopedic implant. 15. The method of claim 14, wherein constructing the initial iteration of the surface model includes combining contours from the patient-specific abnormal bone model and contours from the reconstructed patient-specific bone model. 16. The method of claim 14, wherein constructing the initial iteration of the surface model includes accounting for an intended implantation location for the patient-specific orthopedic implant. 17. The method of claim 14, further comprising constructing a subsequent iteration of the surface model of the patient-specific orthopedic implant. 18. 
The method of claim 17, wherein constructing the subsequent iteration of the surface model of the patient-specific orthopedic implant includes a manual review of the subsequent iteration of the surface model and the reconstructed patient-specific bone model to discern if a further iteration of the surface model is required. 19. The method of claim 1, wherein the electronic design file includes at least one of a computer aided design file, a computer numerical control file, and a rapid manufacturing instruction file. 20. The method of claim 1, further comprising generating an electronic design file for a patient-specific implant placement guide using the one or more parameters optimized for the patient-specific orthopedic implant. 21-70. (canceled) | 2,600 |
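The bone-model comparison claimed above (claims 8-9) reduces, at its core, to partitioning the abnormal model's vertices into those with a counterpart in the reconstructed model and those without. This is an illustrative sketch only, not the patent's method: the distance-threshold test, tolerance value, and vertex data are all hypothetical simplifications.

```python
# Hypothetical sketch of the two-list comparison of claims 8-9: split the
# abnormal model's vertices into a "missing or deformed" list and an
# "in common" list, using a nearest-point distance test as a stand-in for
# whatever correspondence the actual method uses. tol is an assumed value.

def compare_models(abnormal_vertices, reconstructed_vertices, tol=1.0):
    """Return (missing_or_deformed, in_common) vertex lists."""
    def near(v, cloud):
        # A vertex counts as "in common" if some reconstructed vertex
        # lies within tol of it (squared-distance comparison).
        return any(sum((a - b) ** 2 for a, b in zip(v, w)) <= tol ** 2
                   for w in cloud)
    in_common = [v for v in abnormal_vertices
                 if near(v, reconstructed_vertices)]
    missing_or_deformed = [v for v in abnormal_vertices
                           if not near(v, reconstructed_vertices)]
    return missing_or_deformed, in_common

abnormal = [(0.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
reconstructed = [(0.1, 0.0, 0.0), (1.0, 1.0, 1.0)]
deformed, common = compare_models(abnormal, reconstructed)
print(len(deformed), len(common))  # 1 1
```

The second output list is exactly what claim 11 excludes fixation locations against: any vertex landing in the missing/deformed list is unavailable for implant fixation.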
10,792 | 10,792 | 16,454,884 | 2,643 | Systems and methods for sharing location information during a message conversation are provided. An electronic device displays a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation. The electronic device displays a location-sharing affordance. The electronic device detects a selection of the location-sharing affordance, where detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant. In response to detecting a selection of the location-sharing affordance, the electronic device enables the second participant to obtain the first participant location information and displays a modified location-sharing affordance. | 1. An electronic device, comprising:
a touch-sensitive surface; a display; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display, a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation;
displaying a location-sharing affordance;
detecting a selection of the location-sharing affordance, wherein detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant; and
in response to detecting a selection of the location-sharing affordance:
enabling the second participant to obtain the first participant location information; and
displaying a modified location-sharing affordance. 2. The electronic device of claim 1, wherein enabling the second participant to obtain the first participant location information is performed without adding a new message to the message transcript. 3. The electronic device of claim 1, the one or more programs further including instructions for:
receiving an updated location of the first participant; and in response to receiving an updated location of the first participant, enabling the second participant to obtain the updated location of the first participant. 4. The electronic device of claim 1, wherein displaying a modified location-sharing affordance comprises replacing the location-sharing affordance with the modified location-sharing affordance. 5. The electronic device of claim 1, the one or more programs further including instructions for:
displaying a message compose field, wherein the message compose field comprises the modified location-sharing affordance. 6. The electronic device of claim 5, the one or more programs further including instructions for:
detecting a message composition; and in response to detecting a message composition, discontinuing displaying the modified location-sharing affordance. 7. The electronic device of claim 1, wherein the modified location-sharing affordance is a toggle. 8. A non-transitory computer-readable storage medium comprising one or more programs configured to be executed by one or more processors of an electronic device with a touch-sensitive surface and a display, the one or more programs including instructions for:
displaying, on the display, a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation; displaying a location-sharing affordance; detecting a selection of the location-sharing affordance, wherein detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant; and in response to detecting a selection of the location-sharing affordance:
enabling the second participant to obtain the first participant location information; and
displaying a modified location-sharing affordance. 9. The non-transitory computer-readable storage medium of claim 8, wherein enabling the second participant to obtain the first participant location information is performed without adding a new message to the message transcript. 10. The non-transitory computer-readable storage medium of claim 8, the one or more programs further including instructions for:
receiving an updated location of the first participant; and in response to receiving an updated location of the first participant, enabling the second participant to obtain the updated location of the first participant. 11. The non-transitory computer-readable storage medium of claim 8, wherein displaying a modified location-sharing affordance comprises replacing the location-sharing affordance with the modified location-sharing affordance. 12. The non-transitory computer-readable storage medium of claim 8, the one or more programs further including instructions for:
displaying a message compose field, wherein the message compose field comprises the modified location-sharing affordance. 13. The non-transitory computer-readable storage medium of claim 12, the one or more programs further including instructions for:
detecting a message composition; and in response to detecting a message composition, discontinuing displaying the modified location-sharing affordance. 14. The non-transitory computer-readable storage medium of claim 8, wherein the modified location-sharing affordance is a toggle. 15. A method comprising:
at an electronic device comprising a touch-sensitive surface and a display: displaying, on the display, a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation; displaying a location-sharing affordance; detecting a selection of the location-sharing affordance, wherein detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant; and in response to detecting a selection of the location-sharing affordance:
enabling the second participant to obtain the first participant location information; and
displaying a modified location-sharing affordance. 16. The method of claim 15, wherein enabling the second participant to obtain the first participant location information is performed without adding a new message to the message transcript. 17. The method of claim 15, further comprising:
receiving an updated location of the first participant; and in response to receiving an updated location of the first participant, enabling the second participant to obtain the updated location of the first participant. 18. The method of claim 15, wherein displaying a modified location-sharing affordance comprises replacing the location-sharing affordance with the modified location-sharing affordance. 19. The method of claim 15, further comprising:
displaying a message compose field, wherein the message compose field comprises the modified location-sharing affordance. 20. The method of claim 19, further comprising:
detecting a message composition; and in response to detecting a message composition, discontinuing displaying the modified location-sharing affordance. 21. The method of claim 15, wherein the modified location-sharing affordance is a toggle. | Systems and methods for sharing location information during a message conversation are provided. An electronic device displays a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation. The electronic device displays a location-sharing affordance. The electronic device detects a selection of the location-sharing affordance, where detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant. In response to detecting a selection of the location-sharing affordance, the electronic device enables the second participant to obtain the first participant location information and displays a modified location-sharing affordance.1. An electronic device, comprising:
a touch-sensitive surface; a display; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display, a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation;
displaying a location-sharing affordance;
detecting a selection of the location-sharing affordance, wherein detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant; and
in response to detecting a selection of the location-sharing affordance:
enabling the second participant to obtain the first participant location information; and
displaying a modified location-sharing affordance. 2. The electronic device of claim 1, wherein enabling the second participant to obtain the first participant location information is performed without adding a new message to the message transcript. 3. The electronic device of claim 1, the one or more programs further including instructions for:
receiving an updated location of the first participant; and in response to receiving an updated location of the first participant, enabling the second participant to obtain the updated location of the first participant. 4. The electronic device of claim 1, wherein displaying a modified location-sharing affordance comprises replacing the location-sharing affordance with the modified location-sharing affordance. 5. The electronic device of claim 1, the one or more programs further including instructions for:
displaying a message compose field, wherein the message compose field comprises the modified location-sharing affordance. 6. The electronic device of claim 5, the one or more programs further including instructions for:
detecting a message composition; and in response to detecting a message composition, discontinuing displaying the modified location-sharing affordance. 7. The electronic device of claim 1, wherein the modified location-sharing affordance is a toggle. 8. A non-transitory computer-readable storage medium comprising one or more programs configured to be executed by one or more processors of an electronic device with a touch-sensitive surface and a display, the one or more programs including instructions for:
displaying, on the display, a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation; displaying a location-sharing affordance; detecting a selection of the location-sharing affordance, wherein detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant; and in response to detecting a selection of the location-sharing affordance:
enabling the second participant to obtain the first participant location information; and
displaying a modified location-sharing affordance. 9. The non-transitory computer-readable storage medium of claim 8, wherein enabling the second participant to obtain the first participant location information is performed without adding a new message to the message transcript. 10. The non-transitory computer-readable storage medium of claim 8, the one or more programs further including instructions for:
receiving an updated location of the first participant; and in response to receiving an updated location of the first participant, enabling the second participant to obtain the updated location of the first participant. 11. The non-transitory computer-readable storage medium of claim 8, wherein displaying a modified location-sharing affordance comprises replacing the location-sharing affordance with the modified location-sharing affordance. 12. The non-transitory computer-readable storage medium of claim 8, the one or more programs further including instructions for:
displaying a message compose field, wherein the message compose field comprises the modified location-sharing affordance. 13. The non-transitory computer-readable storage medium of claim 12, the one or more programs further including instructions for:
detecting a message composition; and in response to detecting a message composition, discontinuing displaying the modified location-sharing affordance. 14. The non-transitory computer-readable storage medium of claim 8, wherein the modified location-sharing affordance is a toggle. 15. A method comprising:
at an electronic device comprising a touch-sensitive surface and a display: displaying, on the display, a message region for displaying a message transcript of messages sent between a first participant and a second participant in a message conversation; displaying a location-sharing affordance; detecting a selection of the location-sharing affordance, wherein detecting a selection of the location-sharing affordance by the first participant comprises detecting a single contact by the first participant; and in response to detecting a selection of the location-sharing affordance:
enabling the second participant to obtain the first participant location information; and
displaying a modified location-sharing affordance. 16. The method of claim 15, wherein enabling the second participant to obtain the first participant location information is performed without adding a new message to the message transcript. 17. The method of claim 15, further comprising:
receiving an updated location of the first participant; and in response to receiving an updated location of the first participant, enabling the second participant to obtain the updated location of the first participant. 18. The method of claim 15, wherein displaying a modified location-sharing affordance comprises replacing the location-sharing affordance with the modified location-sharing affordance. 19. The method of claim 15, further comprising:
displaying a message compose field, wherein the message compose field comprises the modified location-sharing affordance. 20. The method of claim 19, further comprising:
detecting a message composition; and in response to detecting a message composition, discontinuing displaying the modified location-sharing affordance. 21. The method of claim 15, wherein the modified location-sharing affordance is a toggle. | 2,600 |
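The UI behavior claimed in this record (a single contact enables sharing without adding a transcript message, the affordance is replaced by a modified toggle, and the modified affordance is hidden during composition) can be summarized as a small state machine. The sketch below is illustrative only; every class and method name is hypothetical and not from the patent.

```python
# Hypothetical state-machine sketch of claims 1, 2, 4, and 6: tapping the
# location-sharing affordance enables sharing and swaps in a modified
# affordance (without touching the transcript); starting a message
# composition hides the modified affordance.

class MessageConversation:
    def __init__(self):
        self.sharing_enabled = False
        self.affordance = "location-sharing"  # shown alongside the compose field
        self.transcript = []                  # messages only; no sharing notice

    def tap_affordance(self):
        # A single contact enables the second participant to obtain the
        # first participant's location; note the transcript is unchanged.
        self.sharing_enabled = True
        self.affordance = "modified-location-sharing"  # replaces the original

    def begin_composition(self):
        # Discontinue displaying the modified affordance while composing.
        if self.affordance == "modified-location-sharing":
            self.affordance = None

convo = MessageConversation()
convo.tap_affordance()
print(convo.sharing_enabled, convo.affordance)  # True modified-location-sharing
convo.begin_composition()
print(convo.affordance)  # None
```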
10,793 | 10,793 | 16,088,182 | 2,646 | The solution presented herein is directed to communication devices having a plurality of circuits that may be configured into a plurality of different receiver configurations (or classes), where each receiver configuration uses a different technique for processing a received signal. The solution presented herein enables the communication device to select at least one receiver configuration/class for processing received signals. To that end, a performance metric is determined for each receiver configuration in a subset of receiver configurations using at least one of a received signal, a signal strength determined for the corresponding receiver configuration, and an interference level determined for the corresponding receiver configuration. At least one of the receiver configurations in the subset is selected responsive to the determined performance metrics, and in some cases also in response to a scheduled amount of data. | 1-38. (canceled) 39. A method of selecting one or more receiver configurations for a communication device comprising a plurality of circuits configurable into a plurality of different receiver configurations, the method comprising:
selecting a subset of the plurality of different receiver configurations responsive to an availability of one or more resources of the communication device; determining a performance metric for each receiver configuration in the subset of the plurality of the different receiver configurations using a received signal and/or a signal strength determined for the corresponding receiver configuration and/or an interference level determined for the corresponding receiver configuration; determining a scheduled amount of data to be received from the received signal; selecting at least one of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and configuring the communication device to use the at least one selected receiver configuration to process signals received by the communication device. 40. The method of claim 39 wherein the one or more resources comprise at least one of a number of clock cycles required to process the scheduled amount of data using the corresponding receiver configuration. 41. The method of claim 39 wherein the method further comprises selecting the subset of the plurality of different receiver configurations responsive to a complexity of each receiver configuration. 42. The method of claim 39 wherein determining the performance metric comprises determining a power consumption of the corresponding receiver configuration and/or a latency associated with the corresponding receiver configuration and/or a channel capacity and/or a throughput and/or a signal-to-interference and noise ratio for each receiver configuration in the subset using the received signal and/or the corresponding signal strength and/or the corresponding interference level. 43. The method of claim 39:
further comprising determining a channel rank responsive to the received signal; wherein determining the performance metric comprises determining the performance metric for each receiver configuration in the subset using the channel rank. 44. The method of claim 39 wherein selecting at least one of the receiver configurations comprises selecting the receiver configuration having the best performance metric for the amount of scheduled data. 45. The method of claim 39 wherein selecting at least one of the receiver configurations further comprises selecting the receiver configuration responsive to at least one of a complexity of each receiver configuration for the amount of scheduled data and one or more resources available to the communication device for each receiver configuration for the amount of scheduled data. 46. The method of claim 39 wherein:
selecting at least one of the receiver configurations comprises selecting two or more of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and
configuring the communication device to use the selected receiver configuration comprises configuring the communication device to use the two or more selected receiver configurations to process signals received by the communication device. 47. The method of claim 39 wherein the plurality of receiver configurations comprises any combination of:
a maximum ratio combining receiver configuration;
a two antenna interference rejection combining receiver configuration;
a four antenna interference rejection combining receiver configuration;
a single user multiple input, multiple output receiver configuration;
a two antenna network assisted interference cancellation and suppression receiver configuration;
a four antenna network assisted interference cancellation and suppression receiver configuration; or
a common reference signal interference cancellation receiver configuration. 48. The method of claim 39:
further comprising obtaining one or more transmission parameters; wherein selecting at least one of the receiver configurations comprises selecting at least one of the receiver configurations in the subset responsive to the determined performance metrics, the scheduled amount of data, and the obtained one or more transmission parameters. 49. A communication device comprising:
a reception circuit comprising a plurality of circuits configurable into a plurality of different receiver configurations; and one or more processing circuits configured to:
select a subset of the plurality of different receiver configurations responsive to an availability of one or more resources of the communication device;
determine a performance metric for each receiver configuration in the subset of the plurality of the different receiver configurations using a received signal and/or a signal strength determined for the corresponding receiver configuration and/or an interference level determined for the corresponding receiver configuration;
determine a scheduled amount of data to be received from the received signal;
select at least one of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and
configure the communication device to use the at least one selected receiver configuration to process signals received by the communication device. 50. The communication device of claim 49 wherein the one or more resources comprise at least one of a number of clock cycles required to process the scheduled amount of data using the corresponding receiver configuration. 51. The communication device of claim 49 wherein the one or more processing circuits are further configured to select the subset of the plurality of different receiver configurations responsive to a complexity of each receiver configuration. 52. The communication device of claim 49 wherein the one or more processing circuits determine the performance metric by determining a power consumption of the corresponding receiver configuration and/or a latency associated with the corresponding receiver configuration and/or a channel capacity and/or a throughput and/or a signal-to-interference and noise ratio for each receiver configuration in the subset using the received signal and/or the corresponding signal strength and/or the corresponding interference level. 53. The communication device of claim 49 wherein:
the one or more processing circuits are further configured to determine a channel rank responsive to the received signal; and
the one or more processing circuits determine the performance metric by determining the performance metric for each receiver configuration in the subset using the channel rank. 54. The communication device of claim 49 wherein the one or more processing circuits select at least one of the receiver configurations by selecting the receiver configuration having the best performance metric for the amount of scheduled data. 55. The communication device of claim 49 wherein the one or more processing circuits select at least one of the receiver configurations by selecting the receiver configuration responsive to at least one of a complexity of each receiver configuration for the amount of scheduled data and one or more resources available to the communication device for each receiver configuration for the amount of scheduled data. 56. The communication device of claim 49 wherein the one or more processing circuits:
select at least one of the receiver configurations by selecting two or more of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and
configure the communication device to use the selected receiver configuration by configuring the communication device to use the two or more selected receiver configurations to process signals received by the communication device. 57. The communication device of claim 49 wherein the plurality of receiver configurations comprises any combination of:
a maximum ratio combining receiver configuration;
a two antenna interference rejection combining receiver configuration;
a four antenna interference rejection combining receiver configuration;
a single user multiple input, multiple output receiver configuration;
a two antenna network assisted interference cancellation and suppression receiver configuration;
a four antenna network assisted interference cancellation and suppression receiver configuration; or
a common reference signal interference cancellation receiver configuration. 58. The communication device of claim 49 wherein:
the one or more processing circuits are further configured to obtain one or more transmission parameters; and
wherein the one or more processing circuits select at least one of the receiver configurations by selecting at least one of the receiver configurations in the subset responsive to the determined performance metrics, the scheduled amount of data, and the obtained one or more transmission parameters. 59. A computer program product stored in a non-transitory computer readable medium for controlling a processor in a communication device comprising a plurality of circuits configurable into a plurality of different receiver configurations, the computer program product comprising software instructions which, when run on the processor, cause the processor to:
select a subset of the plurality of different receiver configurations responsive to an availability of one or more resources of the communication device; determine a performance metric for each receiver configuration in the subset of the plurality of the different receiver configurations using a received signal and/or a signal strength determined for the corresponding receiver configuration and/or an interference level determined for the corresponding receiver configuration; determine a scheduled amount of data to be received from the received signal; select at least one of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and configure the communication device to use the at least one selected receiver configuration to process signals received by the communication device. | The solution presented herein is directed to communication devices having a plurality of circuits that may be configured into a plurality of different receiver configurations (or classes), where each receiver configuration uses a different technique for processing a received signal. The solution presented herein enables the communication device to select at least one receiver configuration/class for processing received signals. To that end, a performance metric is determined for each receiver configuration in a subset of receiver configurations using at least one of a received signal, a signal strength determined for the corresponding receiver configuration, and an interference level determined for the corresponding receiver configuration. At least one of the receiver configurations in the subset is selected responsive to the determined performance metrics, and in some cases also in response to a scheduled amount of data. 1-38. (canceled) 39.
A method of selecting one or more receiver configurations for a communication device comprising a plurality of circuits configurable into a plurality of different receiver configurations, the method comprising:
selecting a subset of the plurality of different receiver configurations responsive to an availability of one or more resources of the communication device; determining a performance metric for each receiver configuration in the subset of the plurality of the different receiver configurations using a received signal and/or a signal strength determined for the corresponding receiver configuration and/or an interference level determined for the corresponding receiver configuration; determining a scheduled amount of data to be received from the received signal; selecting at least one of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and configuring the communication device to use the at least one selected receiver configuration to process signals received by the communication device. 40. The method of claim 39 wherein the one or more resources comprise at least one of a number of clock cycles required to process the scheduled amount of data using the corresponding receiver configuration. 41. The method of claim 39 wherein the method further comprises selecting the subset of the plurality of different receiver configurations responsive to a complexity of each receiver configuration. 42. The method of claim 39 wherein determining the performance metric comprises determining a power consumption of the corresponding receiver configuration and/or a latency associated with the corresponding receiver configuration and/or a channel capacity and/or a throughput and/or a signal-to-interference and noise ratio for each receiver configuration in the subset using the received signal and/or the corresponding signal strength and/or the corresponding interference level. 43. The method of claim 39:
further comprising determining a channel rank responsive to the received signal; wherein determining the performance metric comprises determining the performance metric for each receiver configuration in the subset using the channel rank. 44. The method of claim 39 wherein selecting at least one of the receiver configurations comprises selecting the receiver configuration having the best performance metric for the amount of scheduled data. 45. The method of claim 39 wherein selecting at least one of the receiver configurations further comprises selecting the receiver configuration responsive to at least one of a complexity of each receiver configuration for the amount of scheduled data and one or more resources available to the communication device for each receiver configuration for the amount of scheduled data. 46. The method of claim 39 wherein:
selecting at least one of the receiver configurations comprises selecting two or more of the receiver configurations in the subset responsive to the determined performance metrics and the scheduled amount of data; and
configuring the communication device to use the selected receiver configuration comprises configuring the communication device to use the two or more selected receiver configurations to process signals received by the communication device. 47. The method of claim 39 wherein the plurality of receiver configurations comprises any combination of:
a maximum ratio combining receiver configuration;
a two antenna interference rejection combining receiver configuration;
a four antenna interference rejection combining receiver configuration;
a single user multiple input, multiple output receiver configuration;
a two antenna network assisted interference cancellation and suppression receiver configuration;
a four antenna network assisted interference cancellation and suppression receiver configuration; or
a common reference signal interference cancellation receiver configuration. 48. The method of claim 39:
further comprising obtaining one or more transmission parameters; wherein selecting at least one of the receiver configurations comprises selecting at least one of the receiver configurations in the subset responsive to the determined performance metrics, the scheduled amount of data, and the obtained one or more transmission parameters. | 2,600 |
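The selection procedure recited in claims 39 and 49 (subset by resource availability, per-configuration metric, pick the best for the scheduled data) can be sketched in a few lines. All configuration names, cycle costs, and metric functions below are hypothetical illustrations, not values from the application:

```python
# Hypothetical catalogue: receiver configuration -> cycle cost and metric model.
# Claim 40 names clock cycles as one resource; the metric model here is a
# stand-in for the signal-strength/interference measurements in claim 39.
RECEIVER_CONFIGS = {
    "MRC":        {"cycles_per_bit": 1, "metric": lambda sinr: sinr + 0.0},
    "IRC-2ant":   {"cycles_per_bit": 2, "metric": lambda sinr: sinr + 1.5},
    "IRC-4ant":   {"cycles_per_bit": 4, "metric": lambda sinr: sinr + 3.0},
    "NAICS-4ant": {"cycles_per_bit": 8, "metric": lambda sinr: sinr + 4.0},
}

def select_receiver_config(scheduled_bits, available_cycles, measured_sinr):
    # Step 1: select a subset responsive to resource availability.
    subset = {
        name: cfg for name, cfg in RECEIVER_CONFIGS.items()
        if cfg["cycles_per_bit"] * scheduled_bits <= available_cycles
    }
    if not subset:
        raise RuntimeError("no receiver configuration fits the cycle budget")
    # Step 2: determine a performance metric for each configuration in the subset.
    metrics = {name: cfg["metric"](measured_sinr) for name, cfg in subset.items()}
    # Step 3: pick the configuration with the best metric (claims 44/54).
    return max(metrics, key=metrics.get)
```

With a tight cycle budget the heavier interference-cancellation configurations drop out of the subset before the metric comparison ever runs, which is the point of the two-stage structure in claim 39.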
10,794 | 10,794 | 16,219,820 | 2,616 | Described herein is a technique for performing ray-triangle intersection test in a manner that produces watertight results. The technique involves translating the coordinates of the triangle such that the origin is at the origin of the ray. The technique involves projecting the coordinate system into the viewspace of the ray. The technique then involves calculating barycentric coordinates and interpolating the barycentric coordinates to get a time of intersect. The signs of the barycentric coordinates indicate whether a hit occurs. The above calculations are performed with a non-directed floating point rounding mode to provide watertightness. A non-directed rounding mode is one in which the mantissa of a rounded number is rounded in a manner that is not dependent on the sign of the number. | 1. A method for detecting a hit between a ray and a triangle, the method comprising:
projecting, into a viewspace of the ray, vertices of the triangle, by transforming the vertices of the triangle and a vertex representative of a direction of the ray, into a coordinate system in which the ray direction has x and y components of 0 and each of the vertices and the ray have z components that are unmodified by the coordinate transformation unit; determining barycentric coordinates that describe the location of the point of intersection of the ray relative to the vertices of the triangle in two-dimensional space, wherein determining the barycentric coordinates is performed using a non-directed rounding mode; and interpolating the barycentric coordinates to generate a numerator and a denominator for a time of intersection of the ray with the triangle. 2. The method of claim 1, wherein:
the non-directed rounding mode comprises a floating point rounding mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded in a manner that is not dependent on sign. 3. The method of claim 2, wherein:
the non-directed rounding mode comprises a round towards zero mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded such that after rounding, the mantissa has a smaller magnitude than before rounding. 4. The method of claim 2, wherein the non-directed rounding mode comprises a round to nearest even mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded to the nearest even number. 5. The method of claim 1, wherein the non-directed rounding mode does not include a directed rounding mode that comprises a floating point rounding mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded such that the magnitude of the mantissa is either increased or decreased depending on sign. 6. The method of claim 5, wherein the directed rounding mode includes a round to positive infinity mode or a round to negative infinity mode. 7. The method of claim 1, wherein transforming the vertices of the triangle and the vertex representative of the direction of the ray into the coordinate system comprises performing floating point calculations with a non-directed rounding mode. 8. The method of claim 1, wherein determining the barycentric coordinates includes a step that calculates a barycentric coordinate as CxBy−BxCy, where Cx and Cy are x and y coordinates of one of the vertices that bounds the edge associated with the barycentric coordinate and Bx and By are x and y coordinates of another of the vertices that bounds the edge associated with the barycentric coordinates. 9.
The method of claim 8, wherein determining the barycentric coordinates further comprises rounding the product of CxBy according to a non-directed rounding mode, rounding the product of BxCy according to a non-directed rounding mode, and rounding the difference of CxBy−BxCy according to a non-directed rounding mode. 10. A compute unit comprising:
a processing unit configured to request a test of an intersection between a ray and a triangle; and a ray intersection test unit configured to perform the test by: projecting, into a viewspace of the ray, vertices of the triangle, by transforming the vertices of the triangle and a vertex representative of a direction of the ray, into a coordinate system in which the ray direction has x and y components of 0 and each of the vertices and the ray have z components that are unmodified by the coordinate transformation unit; determining barycentric coordinates that describe the location of the point of intersection of the ray relative to the vertices of the triangle in two-dimensional space, wherein determining the barycentric coordinates is performed using a non-directed rounding mode; and interpolating the barycentric coordinates to generate a numerator and a denominator for a time of intersection of the ray with the triangle. 11. The compute unit of claim 10, wherein:
the non-directed rounding mode comprises a floating point rounding mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded in a manner that is not dependent on sign. 12. The compute unit of claim 10, wherein:
the non-directed rounding mode comprises a round towards zero mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded such that after rounding, the mantissa has a smaller magnitude than before rounding. 13. The compute unit of claim 11, wherein the non-directed rounding mode comprises a round to nearest even mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded to the nearest even number. 14. The compute unit of claim 10, wherein the non-directed rounding mode does not include a directed rounding mode that comprises a floating point rounding mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded such that the magnitude of the mantissa is either increased or decreased depending on sign. 15. The compute unit of claim 14, wherein the directed rounding mode includes a round to positive infinity mode or a round to negative infinity mode. 16. The compute unit of claim 10, wherein transforming the vertices of the triangle and the vertex representative of the direction of the ray into the coordinate system comprises performing floating point calculations with a non-directed rounding mode. 17. The compute unit of claim 10, wherein determining the barycentric coordinates includes a step that calculates a barycentric coordinate as CxBy−BxCy, where Cx and Cy are x and y coordinates of one of the vertices that bounds the edge associated with the barycentric coordinate and Bx and By are x and y coordinates of another of the vertices that bounds the edge associated with the barycentric coordinates. 18.
The compute unit of claim 17, wherein determining the barycentric coordinates further comprises rounding the product of CxBy according to a non-directed rounding mode, rounding the product of BxCy according to a non-directed rounding mode, and rounding the difference of CxBy−BxCy according to a non-directed rounding mode. 19. A computing system comprising:
a central processing unit configured to transmit a shader program to an accelerated processing device for execution; and the accelerated processing device, including a compute unit, the compute unit comprising:
a processing unit configured to execute the shader program to request a test of an intersection between a ray and a triangle; and
a ray intersection test unit configured to perform the test by:
projecting, into a viewspace of the ray, vertices of the triangle, by transforming the vertices of the triangle and a vertex representative of a direction of the ray, into a coordinate system in which the ray direction has x and y components of 0 and each of the vertices and the ray have z components that are unmodified by the coordinate transformation unit;
determining barycentric coordinates that describe the location of the point of intersection of the ray relative to the vertices of the triangle in two-dimensional space, wherein determining the barycentric coordinates is performed using a non-directed rounding mode; and
interpolating the barycentric coordinates to generate a numerator and a denominator for a time of intersection of the ray with the triangle. 20. The computing system of claim 19, wherein:
the non-directed rounding mode comprises a floating point rounding mode in which the mantissa of the barycentric coordinates and/or of intermediate values used to calculate the barycentric coordinates is rounded in a manner that is not dependent on sign. | 2,600 |
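The test recited in claims 1, 10, and 19 follows the familiar watertight ray/triangle pattern: translate the vertices to the ray origin, shear into the ray's viewspace so the direction has x and y components of 0, evaluate the three barycentric edge functions (claim 8's CxBy−BxCy form), check their signs for a hit, and interpolate them for the time of intersection. The sketch below is a minimal illustration under stated assumptions: the function name and the axis-permutation details are one common choice, not necessarily the application's, and ordinary Python floats already use IEEE round-to-nearest-even, which is itself a non-directed rounding mode in the sense described above.

```python
def ray_triangle_hit(origin, direction, v0, v1, v2):
    """Return the time of intersection t, or None on a miss."""
    # Translate so the ray starts at the coordinate origin.
    a = [v0[i] - origin[i] for i in range(3)]
    b = [v1[i] - origin[i] for i in range(3)]
    c = [v2[i] - origin[i] for i in range(3)]
    # Pick the dominant ray axis as z, then shear so the sheared
    # direction has x and y components of 0 (z stays unscaled here).
    kz = max(range(3), key=lambda i: abs(direction[i]))
    kx, ky = (kz + 1) % 3, (kz + 2) % 3
    if direction[kz] < 0.0:
        kx, ky = ky, kx                      # keep the winding consistent
    sx = direction[kx] / direction[kz]
    sy = direction[ky] / direction[kz]
    ax, ay = a[kx] - sx * a[kz], a[ky] - sy * a[kz]
    bx, by = b[kx] - sx * b[kz], b[ky] - sy * b[kz]
    cx, cy = c[kx] - sx * c[kz], c[ky] - sy * c[kz]
    # Barycentric edge functions; U has claim 8's CxBy - BxCy form.
    u = cx * by - bx * cy
    v = ax * cy - cx * ay
    w = bx * ay - ax * by
    # The signs of the barycentric coordinates indicate whether a hit
    # occurs: a mixed-sign triple means the ray passes outside an edge.
    if (u < 0 or v < 0 or w < 0) and (u > 0 or v > 0 or w > 0):
        return None
    det = u + v + w                          # denominator of the hit time
    if det == 0.0:
        return None
    # Interpolate the z components for the numerator of the hit time.
    t_num = (u * a[kz] + v * b[kz] + w * c[kz]) / direction[kz]
    if t_num * det < 0.0:                    # intersection behind the origin
        return None
    return t_num / det
```

Because the edge functions for the shared edge of two adjacent triangles are computed from the same two projected vertices, a consistently rounded result lands the ray in at least one of the two triangles, which is the watertightness property the claims' rounding constraints are protecting.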
10,795 | 10,795 | 16,238,415 | 2,637 | An optical network communication system includes an optical hub, an optical distribution center, at least one fiber segment, and at least two end users. The optical hub includes an intelligent configuration unit configured to monitor and multiplex at least two different optical signals into a single multiplexed heterogeneous signal. The optical distribution center is configured to individually separate the at least two different optical signals from the multiplexed heterogeneous signal. The at least one fiber segment connects the optical hub and the optical distribution center, and is configured to receive the multiplexed heterogeneous signal from the optical hub and distribute the multiplexed heterogeneous signal to the optical distribution center. The at least two end users each include a downstream receiver configured to receive one of the respective separated optical signals from the optical distribution center. | 1. An optical network communication system, comprising:
an optical hub including an intelligent configuration unit configured to monitor and multiplex at least two different optical signals into a single multiplexed heterogeneous signal; an optical distribution center configured to individually separate the at least two different optical signals from the multiplexed heterogeneous signal; at least one fiber segment connecting the optical hub and the optical distribution center, the at least one fiber segment configured to receive the multiplexed heterogeneous signal from the optical hub and distribute the multiplexed heterogeneous signal to the optical distribution center; and at least two end users, each including a downstream receiver configured to receive one of the respective separated optical signals from the optical distribution center. 2. The system of claim 1, wherein the intelligent configuration unit comprises a processor and a memory, and an optical multiplexer. 3. The system of claim 2, wherein the intelligent configuration unit further comprises an optical multiplexer. 4. The system of claim 2, wherein the intelligent configuration unit further comprises at least one of a control interface and a communication interface to receive from and send information to an optical multiplexer. 5. The system of claim 1, wherein the optical distribution center comprises a node optical demultiplexer configured to demultiplex the multiplexed heterogeneous signal. 6. The system of claim 1, wherein the optical hub comprises at least two downstream transmitters, each configured to transmit one of the at least two different optical signals, respectively. 7. The system of claim 6,
wherein each of the at least two end users further includes an upstream transmitter, wherein the optical distribution center further comprises a node optical multiplexer, and wherein the optical hub further comprises at least two upstream receivers configured to receive a different optical signal from different ones of the transmitters of the at least two end users, respectively. 8. The system of claim 6, wherein the intelligent configuration unit is further configured to multiplex the at least two different optical signals from the at least two downstream transmitters. 9. The system of claim 1, wherein the at least two different optical signals include two or more of an analog signal, an intensity modulated direct detection signal, a differential modulated signal, and a coherent signal. 10. The system of claim 1, wherein the at least two end users comprise at least two of a customer device, customer premises, a business user, and an optical network unit. 11. The system of claim 1, further configured to implement coherent dense wavelength division multiplexing with a passive optical network architecture. 12. The system of claim 11,
wherein the at least two end users include at least N subscribers, and wherein the system comprises at least two fiber segments for each N subscribers. 13. The system of claim 1, further configured to implement wavelength filtering and injection locking. 14. The system of claim 13,
wherein the at least two end users include at least N subscribers, and wherein the system comprises at least three fiber segments for each 2N subscribers. 15. A method of distributing heterogeneous wavelength signals over a fiber segment of an optical network, comprising the steps of:
monitoring at least two different optical carriers from at least two different transmitters, respectively; analyzing one or more characteristics of the fiber segment; determining one or more parameters of the at least two different optical carriers; and assigning a wavelength spectrum to each of the at least two different optical carriers according to the one or more analyzed fiber segment characteristics and the one or more determined optical carrier parameters. 16. The method of claim 15, further comprising, after the step of assigning, multiplexing the at least two different optical carriers to the fiber segment according to the respective assigned wavelength spectra. 17. The method of claim 15, wherein the at least two different optical carriers include two or more of an analog signal, an intensity modulated direct detection signal, a differential modulated signal, and a coherent signal. 18. The method of claim 15, wherein the fiber segment characteristics include one or more of fiber type, fiber length, implementation of amplification and/or loss devices, implementation of wavelength filters or splitters, and fiber distribution network topology. 19. The method of claim 15, wherein the optical carrier parameters include one or more of individual carrier optical power levels, aggregate carrier power, number of optical carriers, signal wavelength, wavelength spacing among carriers, modulation format, modulation bandwidth, carrier configurability, channel coding/decoding, polarization multiplexing, forward error correction, and carrier tunability. 20. An optical distribution center apparatus, comprising:
an input optical interface for communication with an optical hub; an output optical interface for communication with one or more end user devices configured to process optical signals; a wavelength filter for separating a downstream heterogeneous optical signal from the input optical interface into a plurality of downstream homogenous optical signals; and a downstream optical switch for distributing the plurality of downstream homogeneous optical signals from the wavelength filter to the output optical interface in response to a first control signal from the optical hub. 21. The apparatus of claim 20, wherein the wavelength filter comprises at least one of a wavelength division multiplexing grating and a cyclic arrayed waveguide grating. 22. The apparatus of claim 20, wherein the downstream optical switch is an N×N optical switch configured to associate particular ones of the plurality of downstream homogeneous optical signals with respective ones of the one or more end user devices. 23. The apparatus of claim 20, wherein the first control signal is received from an intelligent configuration unit disposed within the optical hub. 24. The apparatus of claim 20, further comprising:
an upstream optical switch for distributing a plurality of upstream homogeneous optical signals collected from the output optical interface in response to a second control signal from the optical hub; and an optical combiner for aggregating the distributed plurality of upstream homogenous optical signals into a heterogeneous upstream optical signal to the input optical interface. 25. The apparatus of claim 24, wherein the optical combiner comprises at least one of a wavelength division multiplexing grating and a passive optical splitter. 26. The apparatus of claim 24, wherein the upstream optical switch is an N×N optical switch. 27. The apparatus of claim 24, wherein the second control signal is a counterpart command of the first control signal. 28. The apparatus of claim 24, wherein the optical distribution center is configured to receive the first and second control signals separately from the input optical interface. 29. The apparatus of claim 24, further comprising a hybrid fiber coaxial portion in communication with the output optical interface. 30. The apparatus of claim 24, wherein the second control signal is received from an intelligent configuration unit disposed within the optical hub. | An optical network communication system includes an optical hub, an optical distribution center, at least one fiber segment, and at least two end users. The optical hub includes an intelligent configuration unit configured to monitor and multiplex at least two different optical signals into a single multiplexed heterogeneous signal. The optical distribution center is configured to individually separate the at least two different optical signals from the multiplexed heterogeneous signal. The at least one fiber segment connects the optical hub and the optical distribution center, and is configured to receive the multiplexed heterogeneous signal from the optical hub and distribute the multiplexed heterogeneous signal to the optical distribution center. 
The at least two end users each include a downstream receiver configured to receive one of the respective separated optical signals from the optical distribution center. 1. An optical network communication system, comprising:
an optical hub including an intelligent configuration unit configured to monitor and multiplex at least two different optical signals into a single multiplexed heterogeneous signal; an optical distribution center configured to individually separate the at least two different optical signals from the multiplexed heterogeneous signal; at least one fiber segment connecting the optical hub and the optical distribution center, the at least one fiber segment configured to receive the multiplexed heterogeneous signal from the optical hub and distribute the multiplexed heterogeneous signal to the optical distribution center; and at least two end users, each including a downstream receiver configured to receive one of the respective separated optical signals from the optical distribution center. 2. The system of claim 1, wherein the intelligent configuration unit comprises a processor and a memory, and an optical multiplexer. 3. The system of claim 2, wherein the intelligent configuration unit further comprises an optical multiplexer. 4. The system of claim 2, wherein the intelligent configuration unit further comprises at least one of a control interface and a communication interface to receive from and send information to an optical multiplexer. 5. The system of claim 1, wherein the optical distribution center comprises a node optical demultiplexer configured to demultiplex the multiplexed heterogeneous signal. 6. The system of claim 1, wherein the optical hub comprises at least two downstream transmitters, each configured to transmit one of the at least two different optical signals, respectively. 7. The system of claim 6,
wherein each of the at least two end users further includes an upstream transmitter, wherein the optical distribution center further comprises a node optical multiplexer, and wherein the optical hub further comprises at least two upstream receivers configured to receive a different optical signal from different ones of the transmitters of the at least two end users, respectively. 8. The system of claim 6, wherein the intelligent configuration unit is further configured to multiplex the at least two different optical signals from the at least two downstream transmitters. 9. The system of claim 1, wherein the at least two different optical signals include two or more of an analog signal, an intensity modulated direct detection signal, a differential modulated signal, and a coherent signal. 10. The system of claim 1, wherein the at least two end users comprise at least two of a customer device, customer premises, a business user, and an optical network unit. 11. The system of claim 1, further configured to implement coherent dense wavelength division multiplexing with a passive optical network architecture. 12. The system of claim 11,
wherein the at least two end users include at least N subscribers, and wherein the system comprises at least two fiber segments for each N subscribers. 13. The system of claim 1, further configured to implement wavelength filtering and injection locking. 14. The system of claim 13,
wherein the at least two end users include at least N subscribers, and wherein the system comprises at least three fiber segments for each 2N subscribers. 15. A method of distributing heterogeneous wavelength signals over a fiber segment of an optical network, comprising the steps of:
monitoring at least two different optical carriers from at least two different transmitters, respectively; analyzing one or more characteristics of the fiber segment; determining one or more parameters of the at least two different optical carriers; and assigning a wavelength spectrum to each of the at least two different optical carriers according to the one or more analyzed fiber segment characteristics and the one or more determined optical carrier parameters. 16. The method of claim 15, further comprising, after the step of assigning, multiplexing the at least two different optical carriers to the fiber segment according to the respective assigned wavelength spectra. 17. The method of claim 15, wherein the at least two different optical carriers include two or more of an analog signal, an intensity modulated direct detection signal, a differential modulated signal, and a coherent signal. 18. The method of claim 15, wherein the fiber segment characteristics include one or more of fiber type, fiber length, implementation of amplification and/or loss devices, implementation of wavelength filters or splitters, and fiber distribution network topology. 19. The method of claim 15, wherein the optical carrier parameters include one or more of individual carrier optical power levels, aggregate carrier power, number of optical carriers, signal wavelength, wavelength spacing among carriers, modulation format, modulation bandwidth, carrier configurability, channel coding/decoding, polarization multiplexing, forward error correction, and carrier tunability. 20. An optical distribution center apparatus, comprising:
an input optical interface for communication with an optical hub; an output optical interface for communication with one or more end user devices configured to process optical signals; a wavelength filter for separating a downstream heterogeneous optical signal from the input optical interface into a plurality of downstream homogenous optical signals; and a downstream optical switch for distributing the plurality of downstream homogeneous optical signals from the wavelength filter to the output optical interface in response to a first control signal from the optical hub. 21. The apparatus of claim 20, wherein the wavelength filter comprises at least one of a wavelength division multiplexing grating and a cyclic arrayed waveguide grating. 22. The apparatus of claim 20, wherein the downstream optical switch is an N×N optical switch configured to associate particular ones of the plurality of downstream homogeneous optical signals with respective ones of the one or more end user devices. 23. The apparatus of claim 20, wherein the first control signal is received from an intelligent configuration unit disposed within the optical hub. 24. The apparatus of claim 20, further comprising:
an upstream optical switch for distributing a plurality of upstream homogeneous optical signals collected from the output optical interface in response to a second control signal from the optical hub; and an optical combiner for aggregating the distributed plurality of upstream homogenous optical signals into a heterogeneous upstream optical signal to the input optical interface. 25. The apparatus of claim 24, wherein the optical combiner comprises at least one of a wavelength division multiplexing grating and a passive optical splitter. 26. The apparatus of claim 24, wherein the upstream optical switch is an N×N optical switch. 27. The apparatus of claim 24, wherein the second control signal is a counterpart command of the first control signal. 28. The apparatus of claim 24, wherein the optical distribution center is configured to receive the first and second control signals separately from the input optical interface. 29. The apparatus of claim 24, further comprising a hybrid fiber coaxial portion in communication with the output optical interface. 30. The apparatus of claim 24, wherein the second control signal is received from an intelligent configuration unit disposed within the optical hub. | 2,600 |
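The method of claim 15 above assigns a wavelength spectrum to each heterogeneous carrier based on fiber characteristics and carrier parameters. The sketch below illustrates only the assignment step with a deliberately simplified greedy strategy; the function names, the C-band start wavelength, the guard-band value, and the widest-first ordering are all assumptions for illustration, not the patented algorithm.

```python
# Hedged sketch (invented names/strategy): pack heterogeneous optical carriers
# into non-overlapping wavelength bands with guard bands between them.

def assign_spectrum(carriers, start_nm=1530.0, guard_nm=0.4):
    """Greedily assign each carrier a (low_nm, high_nm) band, widest first.

    carriers: list of dicts with 'name' and 'bandwidth_nm' keys.
    """
    assignments = {}
    cursor = start_nm
    for c in sorted(carriers, key=lambda c: c["bandwidth_nm"], reverse=True):
        low, high = cursor, cursor + c["bandwidth_nm"]
        assignments[c["name"]] = (low, high)
        cursor = high + guard_nm  # leave a guard band before the next carrier
    return assignments

plan = assign_spectrum([
    {"name": "coherent", "bandwidth_nm": 0.8},
    {"name": "analog", "bandwidth_nm": 0.2},
    {"name": "IM-DD", "bandwidth_nm": 0.4},
])
```

A real intelligent configuration unit would weigh the claim-18/19 inputs (fiber type, length, modulation format, carrier power) rather than bandwidth alone, but the non-overlap invariant is the same.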
10,796 | 10,796 | 14,867,597 | 2,627 | One embodiment provides a method, including: identifying a defined display region; receiving, on an off screen input device, user input; scaling, using a processor, the user input based on the defined display region; and displaying, on a display device, the scaled user input within the defined display region. Other aspects are described and claimed. | 1. A method, comprising:
identifying a defined display region; receiving, on an off screen input device, user input; scaling, using a processor, the user input based on the defined display region; and displaying, on a display device, the scaled user input within the defined display region. 2. The method of claim 1, wherein the user input is drawing input. 3. The method of claim 1, further comprising receiving user input that identifies the defined display region. 4. The method of claim 1, further comprising receiving user input that identifies a scaling factor. 5. The method of claim 1, wherein the defined display region is automatically identified. 6. The method of claim 1, wherein the defined display region is set by default. 7. The method of claim 1, wherein the defined display region is application specific. 8. The method of claim 1, wherein the off screen input device is selected from the group consisting of: a surface acoustic wave device, resistive device, capacitive device, infrared grid device, optical device, induction device, and acoustic pulse device. 9. The method of claim 1, wherein the user input is scaled based on a size associated with the user input. 10. The method of claim 1, wherein the scaling is performed as the user input is received. 11. An information handling device, comprising:
a display device; a processor operatively coupled to the off screen input device and the display device; and a memory device that stores instructions executable by the processor to: identify a defined display region; receive off screen user input; scale the user input based on the defined display region; and display, on the display device, the scaled user input within the defined display region. 12. The information handling device of claim 11, wherein the user input is drawing input. 13. The information handling device of claim 11, wherein the instructions are further executed by the processor to receive user input that identifies the defined display region. 14. The information handling device of claim 11, wherein the instructions are further executed by the processor to receive user input that identifies a scaling factor. 15. The information handling device of claim 11, wherein the defined display region is automatically identified. 16. The information handling device of claim 11, wherein the defined display region is set by default. 17. The information handling device of claim 11, wherein the defined display region is application specific. 18. The information handling device of claim 11, wherein the off screen input device is selected from the group consisting of: a surface acoustic wave device, resistive device, capacitive device, infrared grid device, optical device, induction device, and acoustic pulse device. 19. The information handling device of claim 11, wherein the user input is scaled based on a size associated with the user input. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor and comprising: code that identifies a defined display region; code that receives off screen user input; code that scales the user input based on the defined display region; and code that displays, on a display device, the scaled user input within the defined display region. 21. An information handling device, comprising:
an off screen input device; a display device; a processor operatively coupled to the off screen input device and the display device; and a memory device that stores instructions executable by the processor to: identify a defined display region; receive off screen user input from the off screen input device; scale the user input based on the defined display region; and display, on the display device, the scaled user input within the defined display region. | One embodiment provides a method, including: identifying a defined display region; receiving, on an off screen input device, user input; scaling, using a processor, the user input based on the defined display region; and displaying, on a display device, the scaled user input within the defined display region. Other aspects are described and claimed. 1. A method, comprising:
identifying a defined display region; receiving, on an off screen input device, user input; scaling, using a processor, the user input based on the defined display region; and displaying, on a display device, the scaled user input within the defined display region. 2. The method of claim 1, wherein the user input is drawing input. 3. The method of claim 1, further comprising receiving user input that identifies the defined display region. 4. The method of claim 1, further comprising receiving user input that identifies a scaling factor. 5. The method of claim 1, wherein the defined display region is automatically identified. 6. The method of claim 1, wherein the defined display region is set by default. 7. The method of claim 1, wherein the defined display region is application specific. 8. The method of claim 1, wherein the off screen input device is selected from the group consisting of: a surface acoustic wave device, resistive device, capacitive device, infrared grid device, optical device, induction device, and acoustic pulse device. 9. The method of claim 1, wherein the user input is scaled based on a size associated with the user input. 10. The method of claim 1, wherein the scaling is performed as the user input is received. 11. An information handling device, comprising:
a display device; a processor operatively coupled to the off screen input device and the display device; and a memory device that stores instructions executable by the processor to: identify a defined display region; receive off screen user input; scale the user input based on the defined display region; and display, on the display device, the scaled user input within the defined display region. 12. The information handling device of claim 11, wherein the user input is drawing input. 13. The information handling device of claim 11, wherein the instructions are further executed by the processor to receive user input that identifies the defined display region. 14. The information handling device of claim 11, wherein the instructions are further executed by the processor to receive user input that identifies a scaling factor. 15. The information handling device of claim 11, wherein the defined display region is automatically identified. 16. The information handling device of claim 11, wherein the defined display region is set by default. 17. The information handling device of claim 11, wherein the defined display region is application specific. 18. The information handling device of claim 11, wherein the off screen input device is selected from the group consisting of: a surface acoustic wave device, resistive device, capacitive device, infrared grid device, optical device, induction device, and acoustic pulse device. 19. The information handling device of claim 11, wherein the user input is scaled based on a size associated with the user input. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor and comprising: code that identifies a defined display region; code that receives off screen user input; code that scales the user input based on the defined display region; and code that displays, on a display device, the scaled user input within the defined display region. 21. An information handling device, comprising:
an off screen input device; a display device; a processor operatively coupled to the off screen input device and the display device; and a memory device that stores instructions executable by the processor to: identify a defined display region; receive off screen user input from the off screen input device; scale the user input based on the defined display region; and display, on the display device, the scaled user input within the defined display region. | 2,600 |
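The scaling step recited in the claims above ("scale the user input based on the defined display region") amounts to mapping coordinates captured on the off screen input device into the region's coordinate frame. The sketch below is a minimal illustration under assumed names and a simple linear mapping; it is not Lenovo's claimed implementation.

```python
# Minimal sketch (assumed names): map a point from off-screen input-device
# space into a defined display region = (left, top, width, height).

def scale_to_region(point, device_size, region):
    """Linearly scale (x, y) from device coordinates into the region."""
    x, y = point
    dev_w, dev_h = device_size
    left, top, width, height = region
    return (left + x / dev_w * width, top + y / dev_h * height)

# A touch at the centre of a 4096x4096 digitizer lands at the centre of an
# 800x450 region anchored at (100, 50):
scale_to_region((2048, 2048), (4096, 4096), (100, 50, 800, 450))  # (500.0, 275.0)
```

Claim 10's "scaling is performed as the user input is received" would simply mean calling such a mapping per input event rather than once per stroke.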
10,797 | 10,797 | 15,323,323 | 2,622 | A vehicular display apparatus that displays an image on a windshield of a vehicle has an attention target detector configured to detect an attention target to which attention of a driver of the vehicle needs to be drawn, and calculate a distance from the attention target to the vehicle, and a display controller configured to perform display control that displays an attention mark on the windshield in a superimposed manner such that, from a point of view of the driver, the attention mark is displayed close to the attention target detected by the attention target detector, the attention mark being displayed to draw the attention of the driver to the attention target. The display controller sets a base point at a position of the attention target on the windshield, sets a display position of the attention mark at a position which is a predetermined distance away from the base point. | 1. A vehicular display apparatus that displays an image on a windshield of a vehicle, the vehicular display apparatus comprising:
an attention target detector configured to detect an attention target to which attention of a driver of the vehicle needs to be drawn, and calculate a distance from the attention target to the vehicle; and a display controller configured to perform display control that displays an attention mark on the windshield in a superimposed manner such that, from a point of view of the driver, the attention mark is displayed close to the attention target detected by the attention target detector, the attention mark being displayed to draw the attention of the driver to the attention target, wherein the display controller sets a base point at a position of the attention target on the windshield, sets a display position of the attention mark at a position which is a predetermined distance away from the base point, changes the display position of the attention mark by changing the predetermined distance according to the distance from the attention target to the vehicle, and corrects the display position of the attention mark according to a time difference between detection of the attention target and display of the attention mark. 2. The vehicular display apparatus according to claim 1,
wherein the display controller sets the display position of the attention mark at a position which is the predetermined distance away from the base point and is below the attention target. 3. The vehicular display apparatus according to claim 1,
wherein the display controller sets the display position of the attention mark at a position which is the predetermined distance away from the base point and is horizontally next to the attention target. 4. The vehicular display apparatus according to claim 1,
wherein the display controller sets the display position of the attention mark at a position which is the predetermined distance away from the base point and is above the attention target. 5. (canceled) 6. A vehicular display method performed by a vehicular display apparatus that displays an image on a windshield of a vehicle, the vehicular display method comprising:
detecting an attention target to which attention of a driver of the vehicle needs to be drawn, and calculating a distance from the attention target to the vehicle; and performing display control for displaying an attention mark on the windshield in a superimposed manner such that, from a point of view of the driver, the attention mark is displayed close to the attention target, the attention mark being displayed to draw the attention of the driver to the attention target, wherein the display control of the attention mark sets a base point at a position of the attention target on the windshield, sets a display position of the attention mark at a position which is a predetermined distance away from the base point, changes the display position of the attention mark by changing the predetermined distance according to the distance from the attention target to the vehicle, and corrects the display position of the attention mark according to a time difference between detection of the attention target and display of the attention mark. | A vehicular display apparatus that displays an image on a windshield of a vehicle has an attention target detector configured to detect an attention target to which attention of a driver of the vehicle needs to be drawn, and calculate a distance from the attention target to the vehicle, and a display controller configured to perform display control that displays an attention mark on the windshield in a superimposed manner such that, from a point of view of the driver, the attention mark is displayed close to the attention target detected by the attention target detector, the attention mark being displayed to draw the attention of the driver to the attention target. The display controller sets a base point at a position of the attention target on the windshield, sets a display position of the attention mark at a position which is a predetermined distance away from the base point. 1.
A vehicular display apparatus that displays an image on a windshield of a vehicle, the vehicular display apparatus comprising:
an attention target detector configured to detect an attention target to which attention of a driver of the vehicle needs to be drawn, and calculate a distance from the attention target to the vehicle; and a display controller configured to perform display control that displays an attention mark on the windshield in a superimposed manner such that, from a point of view of the driver, the attention mark is displayed close to the attention target detected by the attention target detector, the attention mark being displayed to draw the attention of the driver to the attention target, wherein the display controller sets a base point at a position of the attention target on the windshield, sets a display position of the attention mark at a position which is a predetermined distance away from the base point, changes the display position of the attention mark by changing the predetermined distance according to the distance from the attention target to the vehicle, and corrects the display position of the attention mark according to a time difference between detection of the attention target and display of the attention mark. 2. The vehicular display apparatus according to claim 1,
wherein the display controller sets the display position of the attention mark at a position which is the predetermined distance away from the base point and is below the attention target. 3. The vehicular display apparatus according to claim 1,
wherein the display controller sets the display position of the attention mark at a position which is the predetermined distance away from the base point and is horizontally next to the attention target. 4. The vehicular display apparatus according to claim 1,
wherein the display controller sets the display position of the attention mark at a position which is the predetermined distance away from the base point and is above the attention target. 5. (canceled) 6. A vehicular display method performed by a vehicular display apparatus that displays an image on a windshield of a vehicle, the vehicular display method comprising:
detecting an attention target to which attention of a driver of the vehicle needs to be drawn, and calculating a distance from the attention target to the vehicle; and performing display control for displaying an attention mark on the windshield in a superimposed manner such that, from a point of view of the driver, the attention mark is displayed close to the attention target, the attention mark being displayed to draw the attention of the driver to the attention target, wherein the display control of the attention mark sets a base point at a position of the attention target on the windshield, sets a display position of the attention mark at a position which is a predetermined distance away from the base point, changes the display position of the attention mark by changing the predetermined distance according to the distance from the attention target to the vehicle, and corrects the display position of the attention mark according to a time difference between detection of the attention target and display of the attention mark. | 2,600 |
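The vehicular display claims above combine three placement rules: an offset from a base point, an offset magnitude that changes with the distance to the attention target, and a correction for the latency between detection and display. The sketch below illustrates those three rules together; every name, default value, and the particular distance/latency model are invented for illustration and are not the patented display controller.

```python
# Hedged sketch (invented names/values): place an attention mark relative to
# a base point, shrinking the offset with target distance and correcting for
# detection-to-display latency via the target's apparent on-screen motion.

def mark_position(base_xy, distance_m, target_velocity_px_s, latency_s,
                  near_offset_px=40.0, ref_distance_m=10.0):
    bx, by = base_xy
    # Offset below the base point, decreasing as the target gets farther away.
    offset = near_offset_px * min(1.0, ref_distance_m / distance_m)
    # Latency correction: shift by how far the target appears to move on the
    # windshield during the detection-to-display delay.
    vx, vy = target_velocity_px_s
    return (bx + vx * latency_s, by + offset + vy * latency_s)
```

A target twice the reference distance away gets half the near offset, and a laterally moving target has its mark shifted along its motion so the mark still lands beside it when drawn.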
10,798 | 10,798 | 12,868,859 | 2,651 | A telepresence system that includes a portable telepresence apparatus coupled to a remote control station. The telepresence apparatus comprises a monitor, a camera, a speaker, a microphone and a viewfinder screen coupled to a housing. The view finder screen allows the user to view the image being captured by the camera. The portable telepresence apparatus is a hand held device that can be moved by a holder of the device in response to audio commands from the remote station. The telepresence apparatus can be used by medical personnel to remotely view a patient in a fast and efficient manner. | 1. A portable telepresence apparatus that is adapted to be coupled to a remote station that has a station monitor, a station camera, a station speaker and a station microphone, comprising:
a housing; a first camera coupled to said housing; a monitor that is coupled to said housing and is adapted to display images captured by the station camera; a speaker that is coupled to said housing and is adapted to generate a sound provided through the station microphone; a microphone coupled to said housing; a battery coupled to said housing; a wireless transceiver coupled to said housing; and a viewfinder screen coupled to said housing. 2. The portable telepresence apparatus of claim 1, further comprising a second camera coupled to said housing, said first camera being located on a first face of said housing that includes said monitor and said second camera being located on a second face of said housing that includes said viewfinder screen. 3. The portable telepresence apparatus of claim 1, wherein said viewfinder screen includes at least one touch screen function that can vary an image captured by said first camera. 4. The portable telepresence apparatus of claim 1, further comprising a motion sensing device attached to said housing. 5. The portable telepresence apparatus of claim 4, wherein said motion sensing device is utilized to correct an image displayed by said monitor. 6. The portable telepresence apparatus of claim 4, wherein said motion sensing device is utilized to correct an image provided to the remote station. 7. The portable telepresence apparatus of claim 1, wherein said monitor includes a graphical user interface that allows a user to vary an audio characteristic. 8. The portable telepresence apparatus of claim 1, wherein said housing is configured to be placed on a surface in an upright position. 9. The portable telepresence apparatus of claim 1, wherein said remote station monitor displays hardware icons and depicts a break in a communication link between hardware devices. 10. The portable telepresence apparatus of claim 1, further comprising an actuator system that can move the first camera and is controlled by the remote station. 11. 
The portable telepresence apparatus of claim 1, further comprising a GPS apparatus. 12. A portable telepresence apparatus that is adapted to be coupled to a remote station that has a station monitor, a station camera, a station speaker and a station microphone, comprising:
a housing; a camera coupled to said housing; a monitor that is coupled to said housing and is adapted to display images captured by the station camera; a speaker that is coupled to said housing and is adapted to generate a sound provided through the station microphone; a microphone coupled to said housing; a battery coupled to said housing; a wireless transceiver coupled to said housing; and a memory device for capturing at least one image prior to establishing communication that is transmitted to the remote station after the remote station establishes a communication with the portable telepresence apparatus. 13. The portable telepresence apparatus of claim 12, further comprising at least one input that allows a user to vary an input characteristic of the portable telepresence apparatus before the remote station establishes the communication with the portable telepresence apparatus. 14. A portable telepresence apparatus that is adapted to be coupled to a remote station that has a station monitor, a station camera, a station speaker and a station microphone, comprising:
a housing; a first camera coupled to said housing; a monitor that is coupled to said housing and is adapted to display images captured by the station camera; a speaker that is coupled to said housing and is adapted to generate a sound provided through the station microphone; a microphone coupled to said housing; a battery coupled to said housing; a wireless transceiver coupled to said housing; and a motion sensing device coupled to said housing. 15. The portable telepresence apparatus of claim 14, wherein said motion sensing device is utilized to correct an image displayed by said monitor. 16. The portable telepresence apparatus of claim 14, wherein said motion sensing device is utilized to correct an image provided to the remote station. 17. A portable telepresence apparatus that is adapted to be coupled to a remote station that has a station monitor, a station camera, a station speaker and a station microphone, comprising:
a housing; a first camera coupled to said housing; a monitor that is coupled to said housing and is adapted to display images captured by the station camera; a speaker that is coupled to said housing and is adapted to generate a sound provided through the station microphone; a microphone coupled to said housing; a battery coupled to said housing; and a wireless transceiver coupled to said housing, wherein said wireless transceiver transmits one type of data on a lower bandwidth and transmits a second type of data on a higher bandwidth network. 18. A method for providing a remote medical consultation, comprising:
setting up a portable telepresence apparatus to view a patient, the portable telepresence apparatus includes a camera, a monitor, a speaker and a microphone; capturing an image of the patient with the portable telepresence apparatus; linking the portable telepresence apparatus to a remote station that includes a remote station camera that can capture an image of a healthcare worker operating the remote station, a remote station monitor, a remote station speaker and a remote station microphone; transmitting the image of the healthcare worker to the portable telepresence apparatus; displaying the healthcare worker image on the monitor; transmitting the patient image to the remote station; displaying the patient image on the remote station monitor; and transmitting an audio command from the remote station to the portable telepresence apparatus. 19. The method of claim 18, further comprising storing the patient image and transmitting the stored patient image after the remote station is linked with the portable telepresence apparatus. 20. The method of claim 18, wherein the patient is being transported while the camera captures the patient image. 21. The method of claim 20, further comprising terminating the link between the remote station and the portable telepresence apparatus and continuing to capture and store an image of the patient with the portable telepresence apparatus. 22. The method of claim 20, further comprising moving the portable telepresence apparatus to a first hospital. 23. The method of claim 22, further comprising moving the patient to a second hospital. 24. The method of claim 23, wherein the portable telepresence apparatus is moved with the patient to the second hospital. 25. The method of claim 24, wherein the portable telepresence apparatus transmits an image of the patient to the remote station while the patient is being moved to the second hospital. 26. The method of claim 25, wherein the remote station is located at the second hospital. 27. 
The method of claim 18, further comprising moving the portable telepresence apparatus to a home. 28. The method of claim 27, further comprising moving the patient to a medical facility. 29. The method of claim 28, further comprising storing data inputted into the portable telepresence apparatus and transmitting the data to the remote station. 30. The method of claim 28, further comprising attaching a medical instrument to the portable telepresence apparatus and obtaining patient data through the medical instrument. 31. A method for obtaining a remotely captured image, comprising:
linking a portable telepresence apparatus to a remote station, the portable telepresence apparatus includes a first camera, a monitor, a speaker and a microphone, the remote station includes a remote station camera, a remote station monitor, a remote station speaker and a remote station microphone; transmitting an image that is captured by the first camera to the remote station; transmitting an audio instruction from the remote station to the portable telepresence apparatus to a user holding the portable telepresence apparatus; and moving the portable telepresence apparatus by the user holding the portable telepresence apparatus. 32. The method of claim 31, further comprising viewing the image captured by the camera through a viewfinder screen on the portable telepresence apparatus. 33. The method of claim 32, further comprising capturing an image of the user holding the portable telepresence apparatus with a second camera and transmitting the image of the user to the remote station. | 2,600 |
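Claims 12 and 19 of the telepresence row describe a memory device that captures images before any link exists and transmits them once the remote station connects — a store-and-forward buffer. A minimal Python sketch follows; the class and method names are illustrative assumptions, and the `sent` list stands in for the wireless transceiver the claims recite:

```python
from collections import deque

class StoreAndForwardCamera:
    """Buffer frames captured before a remote-station link is established."""

    def __init__(self, max_frames=100):
        # Bounded buffer: oldest frames are dropped once capacity is reached.
        self._buffer = deque(maxlen=max_frames)
        self._linked = False
        self.sent = []  # stand-in for frames handed to the wireless transceiver

    def capture(self, frame):
        # While unlinked, store; once linked, transmit immediately.
        if self._linked:
            self._send(frame)
        else:
            self._buffer.append(frame)

    def establish_link(self):
        # On connection, flush everything captured before the link existed.
        self._linked = True
        while self._buffer:
            self._send(self._buffer.popleft())

    def _send(self, frame):
        self.sent.append(frame)
```

The bounded `deque` reflects a design choice the claims leave open: a real device has finite memory, so pre-link capture must either drop old frames or stop capturing.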
10,799 | 10,799 | 15,863,361 | 2,667 | A vehicle information display system includes a digital license plate attachable to a vehicle and having a display able to present electronically readable visual information. This electronically readable visual information is usable to facilitate provision of services, including but not limited to vehicle rental or providing vehicle access to authorized service providers. | 1. A vehicle information display system, comprising:
a digital license plate attachable to a vehicle and having a display able to present electronically readable visual information; and wherein the electronically readable visual information is usable to facilitate provision of services. 2. The vehicle information display system of claim 1, wherein electronically readable visual information further comprises at least one of text, symbols, colors, and barcodes. 3. The vehicle information display system of claim 1, wherein electronically readable visual information further comprises a two-dimensional barcode readable with a smartphone having a camera. 4. The vehicle information display system of claim 1, wherein electronically readable visual information further comprises a two-dimensional QR code readable with a smartphone having a camera. 5. The vehicle information display system of claim 1, wherein the display is bistable. 6. The vehicle information display system of claim 1, wherein the display is remotely updateable. 7. The vehicle information display system of claim 1, wherein services are vehicle related. 8. The vehicle information display system of claim 1, wherein services further comprise vehicle rental. 9. The vehicle information display system of claim 1, wherein services further comprise initiating operation of selected vehicle components by direction of the digital license plate, including at least one of vehicle start, vehicle stop, vehicle trunk open, vehicle gas cap release, door open, door close, vehicle hood open, and trunk open. 10. The vehicle information display system of claim 1, wherein the digital license plate further comprises a camera able to detect and act on presented electronically readable visual information. 11. A method of interacting with a vehicle, comprising the steps of:
locating a digital license plate attachable to a vehicle and having a display able to present electronically readable visual information; electronically reading the electronically readable visual information to facilitate provision of services. 12. The method of interacting with a vehicle of claim 1, wherein electronically readable visual information further comprises at least one of text, symbols, colors, and barcodes. 13. The method of interacting with a vehicle of claim 1, wherein electronically readable visual information further comprises a two-dimensional barcode readable with a smartphone having a camera. 14. The method of interacting with a vehicle of claim 1, wherein electronically readable visual information further comprises a two-dimensional QR code readable with a smartphone having a camera. 15. The method of interacting with a vehicle of claim 1, wherein the display is bistable. 16. The method of interacting with a vehicle of claim 1, wherein the display is remotely updateable. 17. The method of interacting with a vehicle of claim 1, wherein services further comprise vehicle rental. 18. The method of interacting with a vehicle of claim 1, wherein services further comprise initiating operation of selected vehicle components by direction of the digital license plate, including at least one of vehicle start, vehicle stop, vehicle trunk open, vehicle gas cap release, door open, door close, vehicle hood open, and trunk open. 19. The method of interacting with a vehicle of claim 1, wherein the digital license plate further comprises a camera able to detect and act on presented electronically readable visual information. 20. The method of interacting with a vehicle of claim 1, further comprising the steps of using a smartphone to read the electronically readable visual information;
receiving control authorization from a remote system connectable to the remote system; and operating the vehicle using the smartphone. | 2,600 |
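Claim 20 of the digital-license-plate row describes scanning the plate's QR code with a smartphone and then receiving control authorization from a remote system. One plausible sketch of that flow is a signed payload rendered as the QR content and verified server-side. Everything below — the key handling, payload fields, and HMAC scheme — is an assumption for illustration; the patent specifies none of these details:

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: a real deployment would provision per-vehicle keys securely.
SHARED_KEY = b"demo-key"

def plate_payload(vehicle_id, service):
    """Build the string a digital license plate could render as a QR code."""
    body = json.dumps({"vehicle": vehicle_id, "service": service}, sort_keys=True)
    # Truncated HMAC tag keeps the QR payload short (assumed trade-off).
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return base64.urlsafe_b64encode(f"{body}|{tag}".encode()).decode()

def authorize(payload):
    """Remote-system check run after a smartphone scans the code.

    Returns the decoded request on a valid tag, or None if the payload
    was forged or corrupted (no control authorization is granted).
    """
    body, tag = base64.urlsafe_b64decode(payload).decode().rsplit("|", 1)
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(tag, expected):
        return None
    return json.loads(body)
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when the remote system checks the tag.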